K-means is probably the most popular clustering algorithm and one of the most widely taught in academia. However, it has many limitations; some of them are listed here:

- You need to specify the value of k in advance
- It will happily "cluster" data that has no cluster structure at all
- Sensitive to the scale of the features
- Even on perfectly separable data sets, it can get stuck in a local minimum
- The cluster means are continuous values, which may be meaningless for discrete or categorical data
- Hidden assumption: SSE (the sum of squared errors) is worth minimizing
- K-means serves more as vector quantization than as clustering
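Two of these limitations are easy to demonstrate with a minimal NumPy sketch of Lloyd's algorithm (the names `kmeans`, `iters`, etc. are illustrative, not from any library): run it on uniformly random points, which have no cluster structure, and it still returns k "clusters", with a final SSE that depends on the random initialization (different local minima).

```python
import numpy as np

def kmeans(X, k, seed, iters=50):
    """Plain Lloyd's algorithm: alternate assignment and mean-update steps."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign every point to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned points
        # (keep the old center if a cluster goes empty).
        centers = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    # Final assignment and sum of squared errors.
    labels = np.linalg.norm(X[:, None, :] - centers[None, :, :],
                            axis=2).argmin(axis=1)
    sse = ((X - centers[labels]) ** 2).sum()
    return labels, sse

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 2))  # uniform noise: no real clusters here
# k-means still partitions the structureless data into k groups, and
# re-running with different seeds lands in different local minima.
sses = [kmeans(X, k=3, seed=s)[1] for s in range(5)]
```

In practice this is why implementations such as scikit-learn's `KMeans` restart from multiple initializations (its `n_init` parameter) and keep the best SSE.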

With hierarchical clustering you don't need to specify a value of k: you can cut the tree it builds, either top-down (divisive) or bottom-up (agglomerative), at any level. Such a tree is called a dendrogram.
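A quick sketch of this with SciPy (assuming `scipy` is available): build the merge tree once with `linkage`, then cut it at different levels with `fcluster`, with no k needed up front and no re-clustering between cuts. The two-blob data set here is made up for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Two well-separated blobs of 20 points each.
X = np.vstack([rng.normal(0, 0.3, size=(20, 2)),
               rng.normal(5, 0.3, size=(20, 2))])

# Bottom-up (agglomerative) clustering: builds the whole dendrogram.
Z = linkage(X, method="ward")

# Cut the same tree at two different levels after the fact:
two = fcluster(Z, t=2, criterion="maxclust")   # 2 clusters
four = fcluster(Z, t=4, criterion="maxclust")  # 4 clusters, same tree
```

`scipy.cluster.hierarchy.dendrogram(Z)` would plot the tree itself.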

Scikit-learn also supports a variety of clustering algorithms, including DBSCAN, and its user guide lists which one suits which situation: http://scikit-learn.org/stable/modules/clustering.html
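As a small contrast to k-means, here is a hedged sketch using scikit-learn's `DBSCAN` (assuming `sklearn` is installed; the blob/noise data and the `eps`/`min_samples` values are illustrative choices, not recommendations): it infers the number of clusters from density and labels outliers as `-1`.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
blob = rng.normal(0, 0.2, size=(30, 2))
noise = rng.uniform(-3, 3, size=(10, 2))  # scattered background points
# Two dense blobs plus sparse noise.
X = np.vstack([blob + [2, 2], blob - [2, 2], noise])

# DBSCAN groups points that are densely connected; no k is given.
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
# Points that belong to no dense region get the noise label -1,
# something k-means cannot express.
```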

References:

https://stats.stackexchange.com/questions/133656/how-to-understand-the-drawbacks-of-k-means