Functional programming languages are great. As a math major, I'm used to functions never having side effects. In a functional programming language, inputs and outputs are explicit–there's nothing "hidden" from the caller. This minimizes "spooky action at a distance"–a function call here won't result in something blowing up over there.

Most functional languages are "impure", in that they allow, though usually discourage, side effects. In Clojure, for example, you can set a dynamically bound variable, e.g. the output stream *out*, in one function and change the behavior of another function. There's a trade-off between purity and convenience–as Rich Hickey said, a programming language with no side effects is good for nothing more than heating up your CPU.
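A minimal Clojure sketch of what I mean (the greet function is just an illustration): rebinding *out* in one place silently changes where a completely separate function sends its output.

```clojure
(defn greet []
  ;; greet's signature says nothing about where output goes; println
  ;; writes to whatever *out* is dynamically bound to at call time.
  (println "hello"))

;; Normally prints "hello" to standard output.
(greet)

;; Rebinding *out* here changes greet's behavior over there.
(binding [*out* (java.io.StringWriter.)]
  (greet)
  (str *out*))
;; => "hello\n"
```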

Functional languages seem to have gained some popularity lately. Newer languages like Clojure, Elm, Elixir, etc. have gotten a lot of the more "adventurous" programmers excited, and even older languages like Java have been adding features usually associated with functional programming, like lambdas and streams.

Minimizing side effects is particularly important for parallel programming. Moore's law stopped delivering faster clock speeds years ago, and even its original promise of ever-denser transistors seems to be reaching its end. If you want to get more out of your computer in the future, odds are you'll have to do it by taking advantage of multiple cores.

Parallel programming with mutable data structures is a nightmare even for experienced developers–it's just too hard to mentally follow what happens when multiple threads start interacting with the same object. What is the state of the object at any given instant? Unless very specific design disciplines are followed, it's awfully hard to say. When data is immutable, as it usually is in functional languages, dealing with multiple threads is much easier.
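A small Clojure sketch of why that is (the data and names here are hypothetical): because the vector below is a persistent, immutable structure, threads can read it and derive new versions from it with no locks and no "what state is it in right now?" question.

```clojure
(def orders
  ;; A persistent (immutable) vector of maps -- nothing can change it in place.
  [{:id 1 :total 10.0}
   {:id 2 :total 24.5}
   {:id 3 :total 7.25}])

;; pmap spreads the work across threads; that's safe here because each
;; worker only reads the shared immutable data and returns a new value.
(def with-tax
  (doall (pmap #(update % :total * 1.08) orders)))

;; The original is untouched, so there's no race to reason about.
orders
;; => [{:id 1, :total 10.0} {:id 2, :total 24.5} {:id 3, :total 7.25}]
```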

Given the advantages of functional languages, will they ever become the most popular type of programming language?

Unlikely. Functional languages have been around for a long time–since the first implementation of Lisp, in fact–and they've never been very popular. History shows that most programmers love side effects. Increment this, mutate that, do X while we're also doing Y because hey, why write two methods when you can write one? Adherents of the "Turing machine school" far outnumber the "lambda calculus school", and software is worse off for it.