There are some things that are actually quite difficult or simply impossible to express in Ruby, Python, and other mainstream OO languages. The promise of Lisp has always been (to me at least) powerful runtime dynamism without sacrificing performance. I wrote about this yesterday: http://dosync.posterous.com/51626638
It's also quite difficult, in my experience, to explore and implement more sophisticated and powerful OO and FP concepts in these languages - things like predicate dispatch (think CLOS generic methods or ML pattern matching, but more useful) or ad-hoc type systems - especially when your goal is to produce something that is not a toy. Or think about embedding an efficient logic programming DSL a la Paradigms of Artificial Intelligence Programming.
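For concreteness, here is a minimal sketch of the multiple dispatch CLOS gives you out of the box (the collide generic function and the classes are made up for the example); the eql-specialized method at the end hints at what predicate dispatch generalizes: dispatching on arbitrary predicates over the arguments rather than only on classes or single values.

    ;; Sketch only: CLOS multiple dispatch on hypothetical classes.
    (defclass asteroid () ())
    (defclass spaceship () ((shields :initarg :shields :reader shields)))

    (defgeneric collide (a b)
      (:documentation "Dispatches on the classes of *both* arguments."))

    (defmethod collide ((a asteroid) (b asteroid))
      :rocks-bounce)

    (defmethod collide ((a spaceship) (b asteroid))
      (if (shields a) :deflected :boom))

    ;; An eql specializer dispatches on one specific value; predicate
    ;; dispatch would generalize this to arbitrary predicates over the
    ;; arguments, which CLOS itself does not provide.
    (defmethod collide ((a (eql :escape-pod)) (b asteroid))
      :doomed)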
Of course, for various reasons, the fact that this kind of flexibility and power can be applied to real software development might not interest you. But it is why I find Lisp compelling.
Turing completeness says absolutely nothing about performance. If my embedded logic programming library takes 2ms to run a query and yours takes an hour, then I can actually put mine to useful work.
Granted, many languages simply move this kind of work into external C dependencies and the like. However, this doesn't address the other kinds of language extension I mentioned, such as idiomatic ad-hoc type systems and predicate dispatch.
Why shouldn't our programming languages capture these heterogeneous use cases comfortably?
Ah, yet another misapplication of the "Turing completeness" idea. Turing completeness is about results, not methods of computation. If two languages are said to be Turing complete, it means that they can compute exactly the same set of functions, not that they compute them in exactly the same way. For instance, a problem that is solvable in O(n) in one language may provably require Omega(n lg n) in another, because certain constructs (e.g., random-access arrays) are missing from it.
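To make the missing-construct point concrete, here is a tiny Common Lisp illustration (the variables are made up): indexed access into an array is constant time, while reaching the same position in a linked list means walking it, so an algorithm that depends on O(1) random access degrades when arrays simply aren't available.

    ;; Illustration only: constant-time vs linear-time element access.
    (defparameter *vec* (make-array 1000000 :initial-element 0))
    (defparameter *lst* (make-list 1000000 :initial-element 0))

    (aref *vec* 999999)  ; O(1): arrays support random access
    (nth 999999 *lst*)   ; O(n): the list is traversed from the head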
Kolmogorov complexity proves that while language A can emulate capability X from language B, it is impossible to do so without, in effect, writing B in A. In which case, it is still B doing X.
So, in a literal sense, it can be impossible to do X in A and possible to do X in B, even though A and B are Turing complete.
Of course it's possible. The question is: how much work is it going to be? Will I have to use my Turing-complete language to implement Lisp to do it? In that case I would accept "impossible" as shorthand.