I've always been fascinated by the process of converting thought into code, of mapping the idea for accomplishing some task to a set of instructions that is, ultimately, expressible to a computer. But, of course, I'm not alone in this; computer scientists have grappled with the same problem since the inception of computers and even before: witness Ada Lovelace's "diagram of development" for Babbage's Analytical Engine.
Humans have been expressing thought verbally and, later, through writing, for millennia. In more recent times, mathematicians developed notation for expressing mathematical thought using symbols and logic statements. However, with computers, we have a new medium: expressing algorithmic thought in a form that can, ultimately, be interpreted by a machine.
In his famous 1936 paper, Turing demonstrated that the concept of "algorithm" is entirely captured by a Turing machine, i.e., a computer. He then proceeded to use this new encapsulation of algorithm to prove that the halting problem is undecidable. Church did the same shortly before Turing but used logic instead (the lambda calculus). I find Turing's approach more, well, approachable.
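Turing's diagonal argument is short enough to sketch in code. Here's a minimal, hedged illustration in Python; the names `halts`, `make_paradox`, and `decider_is_wrong` are my own, not from the paper, and the "decider" is of course hypothetical:

```python
# Sketch of Turing's diagonalization. Suppose a decider halts(prog, arg)
# existed that returns True iff prog(arg) eventually halts.

def make_paradox(halts):
    """Build the self-defeating program from a claimed halting decider."""
    def paradox(prog):
        if halts(prog, prog):      # decider says prog(prog) halts...
            while True:            # ...so do the opposite: loop forever
                pass
        # decider says prog(prog) loops, so halt immediately
    return paradox

# Whatever halts() claims about paradox(paradox), the claim is wrong:
# by construction, paradox halts exactly when the decider says it doesn't.
def decider_is_wrong(claimed_answer):
    actually_halts = not claimed_answer
    return actually_halts != claimed_answer

assert decider_is_wrong(True)      # decider says "halts" -> it loops
assert decider_is_wrong(False)     # decider says "loops" -> it halts
```

Since no possible answer from `halts` can be correct on the paradoxical input, no such decider can exist.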
I'm interested in programming languages because they map thought to code for the machine, but, like human languages, there isn't just one that is best. Some do think otherwise. Over the years, I've had various developers ask me, "Why doesn't everyone just use XXX? You don't need anything else." Here XXX is some standard programming language like Java, C++, or Python (I've heard all three).
I disagree with such statements. Why? Because there isn't just one best way to think, nor should we expect there to be one best way to express thoughts. Consider mathematics. In particular, ancient mathematics. The Babylonians had place notation, in base 60, but no symbolic representation. Instead, they wrote every problem down in word form. Nevertheless, because of their place notation, they were able to develop things like approximation techniques for square roots, the "Pythagorean" theorem, methods for solving quadratic equations, and even hints at trigonometry. The ancient Egyptians, by contrast, lacked place notation and were restricted to simple unit fractions (plus one oddball symbol for 2/3). They understood some geometry but didn't progress in what we now call algebra. They did, however, stumble across the correct formula for the volume of a truncated pyramid.
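The square-root technique the Babylonians used survives today as the "Babylonian" (or Heron's) method: repeatedly average a guess with n divided by the guess. A minimal sketch in Python (the function name and iteration count are my own choices):

```python
def babylonian_sqrt(n, iterations=6):
    """Approximate the square root of n > 0 by the Babylonian method."""
    x = n / 2.0 if n > 1 else 1.0   # any positive initial guess will do
    for _ in range(iterations):
        x = (x + n / x) / 2.0       # average the guess with n/guess
    return x
```

The method converges very quickly (roughly doubling the number of correct digits per step), which is why a handful of iterations suffices even starting from a crude guess.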
My point is this: the limitations imposed by the Babylonians' and Egyptians' notational methods restricted their ability to express concepts. Programming languages are the same; they have limits and constrain the set of algorithmic thoughts easily expressible in that language. The key word in the previous sentence is "easily" because, as Turing showed, all algorithms can be implemented by a Turing machine or anything equivalent to one, including all popular programming languages. So, in theory, all languages are sufficient (if they interface with hardware appropriately), but not all languages are convenient.
Nowhere is this better demonstrated than in the world of esoteric programming languages (esolangs). This realization was the motivation behind my book "Strange Code." Esolangs exist on the thought-as-code frontier. They push the limits, often, strangely enough, by imposing severe restrictions. Along the way, they force humans to reconsider what it means to express algorithmic thought. And, I argue, that's a good thing, even if you never use an esolang for anything practical. Go ahead, write an operating system in Malbolge. In theory, you could, but in practice, good luck.
The fun involved in designing a new programming language is sufficient motivation for the plethora of languages, utility be damned. However, there's more to it. Even if unconsciously, I suspect many language designers are really looking for novel ways to express their thoughts in a form realizable by a machine. I know I am.