Federated learning (FL) is an emerging privacy-preserving machine learning paradigm of growing importance in the digital age. Two challenging issues for FL are (1) the communication overhead between clients and the server, and (2) heterogeneous distributions of training data across clients, such as class imbalance. This paper tackles both challenges by proposing a federated fuzzy learning algorithm (FFLA) for the data-driven construction of fuzzy classification models in a distributed setting. The proposed algorithm is fast and communication-efficient, requiring only two rounds of interaction between the server and the clients. Moreover, FFLA is equipped with an imbalance-adaptation mechanism that keeps it robust against heterogeneous data distributions and class imbalance. The efficacy of the proposed method is verified by simulation experiments on a collection of balanced and imbalanced benchmark data sets.
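The abstract does not describe FFLA's internals, so the following Python sketch is only an illustration of what a generic two-round, imbalance-aware federated exchange can look like. The names client_round and server_round, the per-class prototype statistics, and the inverse-frequency weighting are all assumptions made for exposition, not the paper's actual algorithm.

import numpy as np

def client_round(X, y, n_classes):
    # Round 1 (client -> server): summarize local data as per-class
    # feature sums and counts; no raw samples leave the client.
    sums = np.zeros((n_classes, X.shape[1]))
    counts = np.zeros(n_classes)
    for c in range(n_classes):
        mask = (y == c)
        counts[c] = mask.sum()
        if counts[c] > 0:
            sums[c] = X[mask].sum(axis=0)
    return sums, counts

def server_round(client_stats):
    # Round 2 (server -> clients): aggregate client statistics into
    # global per-class prototypes, and derive inverse-frequency class
    # weights as a simple stand-in for an imbalance-adaptation step.
    total_sums = sum(s for s, _ in client_stats)
    total_counts = sum(c for _, c in client_stats)
    prototypes = total_sums / np.maximum(total_counts, 1)[:, None]
    weights = total_counts.sum() / (len(total_counts) * np.maximum(total_counts, 1))
    return prototypes, weights

# Simulated run with three clients holding imbalanced local data.
rng = np.random.default_rng(0)
stats = []
for _ in range(3):
    X = rng.normal(size=(60, 4))
    y = (rng.random(60) < 0.15).astype(int)  # minority class at roughly 15%
    stats.append(client_round(X, y, n_classes=2))
prototypes, weights = server_round(stats)

Because clients upload only aggregate statistics once and download the global model once, the entire exchange fits in two communication rounds, which is the kind of cost profile the abstract claims for FFLA.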