Completely generic with type inference, and it compiles to optimal machine code. But it also suffers from the "latent type errors deferred to the user" problem that the article discusses. Calling it with a type that does not support adding 1 will raise an error for a non-existent "+" method:
addone1("0")
ERROR: MethodError: no method matching +(::String, ::Int64)
This is pretty understandable in this case, but we may prefer annotating the addone function so that it can only be called with types that support adding 1. Using the (very conservative) notion that only numbers can have 1 added to them, we could use Julia's type hierarchy to restrict the types for which addone may be called:
addone2(x::T) where T<:Number = x+1
Calling it with an incompatible type now properly points to the outer level:
addone2("0")
ERROR: MethodError: no method matching addone2(::String)
But this kind of dispatch restriction relies on a type hierarchy, which causes problems similar to those of class hierarchies in OO languages. For example, the user might have defined their own type that supports addition but not multiplication. Using addone on this type makes sense, but the type shouldn't be a subtype of Number. More flexible dispatch scenarios like this can be achieved with traits. Although Julia currently doesn't have direct language support for traits, they can be implemented inside the language, with macros for syntactic sugar. With traits (via the SimpleTraits.jl package), the previous example becomes something like:
using SimpleTraits
@traitdef CanAddOne{X}
@traitimpl CanAddOne{Int}  # one @traitimpl line per supported type
@traitfn addone3(x::::CanAddOne) = x + 1
Still relatively straightforward to write and read, and the trait-restricted addone has the same performance as the original one. For incompatible types, the error message still points to the outer level:
addone3("0")
ERROR: MethodError: no method matching addone3(::Type{SimpleTraits.Not{CanAddOne{String}}}, ::String)
Importantly, the user can extend the CanAddOne trait (with additional @traitimpl lines) to cover their own type, without being forced to make it a subtype of Number.
Right, with currying it can be even more terse. And getting the error at compile time is actually preferable.
My Haskell is very rusty, but IIRC it will deduce that the argument to add1 must be Num, because of the type constraint on (+):
(+) :: Num a => a -> a -> a
So if I'm not mistaken, your add1 is equivalent to my addone2, i.e. incorporating a type constraint based on the type hierarchy. I'm curious: what's the equivalent of my addone3 in Haskell? How would you define add1 so that it also works on a type T later defined by the user, with T supporting addition but not being an instance of Num? Do you need to define your own addition function (acting as a proxy for (+) for Num types)?
I'm still pretty new to Haskell, but I think there are several ways. The equivalent to your addone3 would be some sort of Template Haskell incantation. Template Haskell gives you lisp-style hygienic macros that are expanded and type checked at compile time (except more complicated because of the more complicated semantics and AST).
Haskell also provides a ton of options for generic programming that provide similar power (still statically typed). GADTs (Generalized Algebraic Data Types) might be of use here. There are also existential types and type classes.
What you probably really want though is a dependently typed language like Idris or Agda. If you're not familiar, here's the one liner: Java generics are types that depend on types (like a List of Bools). Dependent types are types that can depend on values (like a List of exactly 3 Bools). So in theory you should be able to define the type you're looking for here, but I don't know what the concrete syntax would be.
The problem with dependently typed languages right now is that the surface languages and concepts tend to be hard for people to grok. Personally I think meta-programming facilities and DSLs will help with this. There are lots of languages heading in that direction.
I wrote myself a perfectly functioning multi-entry accounting system over last weekend in my own dialect of Lisp for my self-employment venture. It's stuffed full of real data spread over nine accounts and lets me see a ledger report over any time period at a glance: all the debits and credits with running balances and so on. I can easily obtain the information to do all taxes and whatnot. It generates beautiful HTML+CSS invoices with a nice SVG logo. It's nicely object-oriented with classes and methods for everything: ledger, account, transaction, increment, invoice. I overloaded a cluster of math functions in a separate package so they work over a money type. It has self-checks against accounting errors.
Working code poured out almost as fast as I could type.
If I had to think about some propeller-head academic dependent type nonsense, I'd still be writing it come March 2018.
> The problem with dependently typed languages right now is that the surface languages and concepts tend to be hard for people to grok.
Anything hard to grok is a regression in tooling.
Why would I want to grok something difficult, when I'm finding programming easy and don't have any issues with getting the desired behavior out of the machine.
All I want to be grokking is how the bits of the run-time representation in the machine are coming together to solve whatever is being solved.
I actually agree, for the most part. I'm not advocating for Haskell or any of the other languages I mentioned, per se. I've spent time with lots of languages and there are things I like and don't like about all of them. If you forced me to write down my top 10 favorite languages, there would undoubtedly be several Lisps near the top.
If you're programming a single computer, the existing languages and tooling are good. What I'm interested in is distributed compute and compute over heterogeneous architectures (CPUs, GPUs, FPGAs, microcontrollers, etc.).
For simplicity, let's limit ourselves to the simple case of HTTP-based service-oriented systems (or microservices). The state of the art has us spending a lot of time considering serialization, protocols, communication failures, managing identities, making authorization decisions, routing, etc. Things that should be completely orthogonal to the problem we're trying to solve end up being tightly coupled with our business logic. This becomes exponentially harder to manage as you add languages to your stack.
There is a lot of research around solving these problems. Unfortunately, all of it starts with a formal system that is "declarative" (or at least "algebraic") and doesn't mesh well with existing popular languages and tooling. Personally, I think we need to come up with a solution that lets us keep the existing tools that are good for programming a single component, and use some of this newer technology to build a smarter platform / underlay. The bridge between the existing languages and this underlay would be DSLs. Or, more precisely, "abstract languages" that can be implemented as DSLs in a variety of programming languages and may or may not have their own concrete syntax (think SQL+ORMs).
Sort of vague, I know, but hopefully that kind of makes sense?
> The assumption is that the machine code of the C++ compiler is significantly faster than that emitted by a comparable dynamic language compiler. While this may hold true in general, it does not necessarily hold true with Lisp. Lisp is a programmable programming language. If we are inclined to program it for speed, we can.
My Lisp-fu isn't as strong as my C++-fu, so someone correct me if I'm wrong, but isn't the GC an intrinsic part of Lisp? Do more modern Lisps allow you to mark value types so you can control memory access patterns (which is where the true speed of C/C++ comes from)?
Also, not all Lisps have a tracing GC; some variants had reference counting with a tracing GC for collecting cycles.
RAII-like patterns can be achieved via the with-... functions, or macros.
I don't know the actual performance of commercial Lisps like Allegro Common Lisp and LispWorks, but I imagine it is quite good, given that they stay in business.
On the other hand, given the amount of money spent on C and C++ optimizers vs. the lack of industry-wide adoption of Lisp, probably still not as good as current leading C++ compilers.
This is the TXR Lisp interactive listener of TXR 172.
Use the :quit command or type Ctrl-D on empty line to exit.
1> (defstruct animal nil
(:fini (me) (put-line `@me says good-bye`)))
#<struct-type animal>
2> (progn (new animal) nil) ;; make animal without referencing from REPL
nil
3> (+ 2 2)
4
4> (sys:gc)
#S(animal) says good-bye
t
with-objects invokes finalizers explicitly, before objects become unreachable.
Also, what if a constructor throws? Let's derive animal to dog which bails at `new` time:
6> (defstruct dog animal
(:fini (me) (put-line `@me: woof woof`))
(:postinit (me) (error "refuse to construct")))
#<struct-type dog>
7> (new dog)
#S(dog): woof woof
#S(dog) says good-bye
** refuse to construct
** during evaluation at expr-6:3 of form (error "refuse to construct")
The object instantiation logic catches exceptions and invokes finalizers on a partially constructed object (in the proper order as you can see: derived, then base).
The with-* style functions do not replace RAII except for the limited case of lexically scoped resources. Not all resources are scoped lexically (e.g. file handles you store in a map). What C++ programmers typically see as the most important guarantee of RAII is prompt freeing: as soon as said example map goes out of its scope (or is freed by its owner), those file handles will also be freed immediately.
Neither the gc-based nor with-* based solutions handle that.
Even in C++, RAII requires a lexical scope at some point in the program. Somewhere in the whole workflow there must be a class allocated on the stack.
You can have destructor like behavior in Lisp as well, just register a file handle cleanup action by giving a cleanup lambda when creating the map instance.
The function that removes entries from the map will call the provided lambda.
I'm sure I must be missing something, but can you give a concrete example?
Of course instances of classes (not "classes", as you say, classes are an entirely compile-time construct in C++) must be allocated, but I mean... there's still heap storage. You know, shared_ptr<T> and all that...
What am I missing?
> You can have destructor like behavior in Lisp as well, just register a file handle cleanup action by giving a cleanup lambda when creating the map instance.
Well, except you don't know exactly when the map is going to get cleaned up...? And it could get reused across various sections of the program... Do you see what I'm talking about?
EDIT: That, and if the map exists for the entire duration of the program, but it's really important that entries have prompt cleanup when removed... what then?
(Btw, I do understand that in the general case with weak_ptr, shared_ptr, unique_ptr, etc. that things get decidedly less deterministic[1], but RAII is pretty well defined by scope or referenced-to-scope.)
[1] Basically almost as unpredictable as a general purpose GC. I can't recall the paper title, but I'm sure there is a paper out there detailing this.
If you don't want the heap allocated map to leak, and you want that management to be automated, you need some smart pointer type to reference it; map itself doesn't help.
All RAII-driven management of heap resources is tied to some scope somewhere. Anything not tied to scoping has to be treated explicitly: try to calculate the lifetime and explicitly dispose of the resource.
"I don't know the actual performance of commercial Lisps like Allegro Common Lisp and LispWorks, but I imagine it is quite good, given that they stay in business."
It might be good or just OK on modern hardware. I imagine they stayed in business for their IDEs, libraries, commercial support, and decent compilers. Batteries included. Some success stories make me think performance is really good, though.
I found those while looking for one about a real-time implementation for telecom or something. Franz's success stories are mostly about doing complicated stuff easier. There was one that looked performance-critical:
The last time I checked, the naive Lisp will be shorter than the typical C++, but usually much slower. Sometimes you can annotate your code to death for performance, resulting in something longer than the C++ with more parens, but you still can't trust the compiler to get it right. Fortunately, thanks to DEFMACRO, Lisp lets you write your own optimizing compiler, so as long as your implementation supports your full instruction set, you can do that and achieve better-than-C performance by generating Fortran IV's ugly cousin.
> I don't know the actual performance of commercial Lisps like Allegro Common Lisp and LispWorks, but I imagine it is quite good, given that they stay in business.
Performance is not one easy number. Applications have different performance requirements.
In many benchmarks for typed or type inferred code SBCL tends to be slightly faster with less programming effort.
Fast can mean for example:
* fastest possible execution of optimized code
Then one might not care about code size, code safety, robustness, threading capability, interrupts, etc.
* fast execution of non-optimized robust, flexible, reflective, debuggable code.
Allegro CL and LispWorks integrate a lot of features. These features are provided across a number of platforms, with only a few restrictions. They provide relatively good performance from a native-code compiler. Some robustness is needed for commercial applications, so I would expect an advantage there.
Well, modern C++ discourages the programmer from doing manual memory management. There's a very strong push to use vector and string instead of arrays and C strings, references instead of pointers, or, if you really need them, smart pointers instead of raw pointers, etc. Any book or article on modern C++ will tell you to let the compiler and language run-time handle memory management. It's not quite GC, but it's similar.
On the other hand, the default in Lisp is always to let the compiler handle memory, but Common Lisp in particular gives the programmer a lot of flexibility and control over types, memory management, and other optimizations. It's been fighting the "Lisp is slow" stereotype for a long time, so there's been a lot of work done to optimize it and give the programmer optimization options.
For your specific question about memory access patterns, Common Lisp does allow some control over that via the "dynamic-extent" declaration: (declare (dynamic-extent variable-name)). It tells the compiler (or interpreter) that a variable in an inner scope (of a loop, for example) can be allocated once and the space reused each iteration instead of allocating fresh memory each time. It's not full blown C style control of memory, but it's similar and can have a big impact in some situations. The book "Common Lisp Recipes" has a section on it, and so does the hyperspec: http://www.lispworks.com/documentation/HyperSpec/Body/d_dyna...
The recently published "Common Lisp Recipes" is a great book that covers a ton of topics in this area. I think a lot of people would be surprised just how much control and flexibility is available in Common Lisp.
I wouldn't say Common Lisp is faster than C++ in general, and certainly not by default, but with a bit of work it's possible to get pretty close. Importantly, the optimized code will still look and feel like more or less idiomatic Lisp.
Yup, you don't want to be doing manual memory management until you've done the profiling and found your hot-paths.
However, the set of problems I can tackle is going to be bounded by the escape-hatches that are available to me. Much like Rust has `unsafe {}`, being able to drop down to the bare metal is an important tool to be used at the appropriate time.
Allocations are only half the story, you also want to control where those allocations are located so that your cache access patterns pull in the right chunks of memory.
That's what arrays are for. You can also do structure packing using implementation-specific tricks. SBCL's SB-ALIEN package has a nice interface for doing this: http://www.sbcl.org/manual/#Foreign-Types
A really cool example of SB-ALIEN in action is jiyunomegami's native type extensions for the Vacietis C-to-CL transpiler: https://github.com/jiyunomegami/Vacietis/commits/master (commits 5504079d63eb2745f6b8ef7ec1cf9bb3151994eb onward)
I'm not really an expert on high performance Lisp, but my impression is that the compilers can optimize things pretty well and that you can have a fairly low level of control over things like allocation patterns.
Edit: looks like I misread what you were saying. Yeah, if you want to pack two things next to each other in memory, that might be tricky in the language defined in the ansi standard. Although I think some implementations give you control over this sort of thing.
For example, the standard itself gives you some control over code generation via compiler macros and individual implementations often give you much more:
This is LispWorks 64-bit from http://lispworks.com . It uses a generational GC, and each generation has several memory segments. You can tell LispWorks how many generations it should use, how large these segments should be, and in which generation/segment to allocate.
CL-USER 12 > (room t)
> Generation 7: 33904936 (0x2055928)
Cons 4774352 (0x48D9D0)
Non-Pointer 3213752 (0x3109B8)
Other 8048192 (0x7ACE40)
Symbol 2881680 (0x2BF890)
Function 14569120 (0xDE4EA0)
Non-Pointer-Static 5528 (0x1598)
Mixed-Static 411600 (0x647D0)
Weak 712 (0x2C8)
-- Segments:
Cons 40C0038800 - 40C04C9000
Non-Pointer 40D0000800 - 40D0314000
Other 40E0038800 - 40E07E8000
Symbol 40F0038800 - 40F02FB000
Function 4100038800 - 4100E20000
Non-Pointer-Static 40B0000800 - 40B000D000
Mixed-Static 400004E800 - 40000C8000
Weak 4110038800 - 4110039000
================================
> Generation 6: 0 (0x0)
> Generation 5: 0 (0x0)
> Generation 4: 0 (0x0)
> Generation 3: 206544 (0x326D0)
Non-Pointer 69648 (0x11010)
Other 5040 (0x13B0)
Symbol 131856 (0x20310)
-- Segments:
Non-Pointer 4070000800 - 4071001000
Other 4170018800 - 4171019000
Symbol 4080018800 - 4080819000
================================
> Generation 2: 8768024 (0x85CA18)
Cons 1960288 (0x1DE960)
Non-Pointer 2531584 (0x26A100)
Other 2720024 (0x298118)
Symbol 5088 (0x13E0)
Function 700184 (0xAAF18)
Non-Pointer-Static 850152 (0xCF8E8)
Mixed-Static 64 (0x40)
Weak 640 (0x280)
-- Segments:
Cons 41B0010800 - 41B1011000
Non-Pointer 4180000800 - 4181001000
Other 41C0010800 - 41C1011000
Symbol 41D0010800 - 41D0811000
Function 41E0010800 - 41E0811000
Non-Pointer-Static 4040000800 - 4040136000
Mixed-Static 4090010800 - 4090013000
Weak 41F0010800 - 41F0051000
================================
> Generation 1: 732480 (0xB2D40)
Cons 160240 (0x271F0)
Non-Pointer 170344 (0x29968)
Other 398200 (0x61378)
Function 3248 (0xCB0)
Weak 448 (0x1C0)
-- Segments:
Cons 4200008800 - 4200301000
Non-Pointer 4210000800 - 4210411000
Other 4220008800 - 42203F1000
Symbol 4230008800 - 423001B000
Function 4240008800 - 424003B000
Weak 4250008800 - 4250009000
================================
> Generation 0: 1956832 (0x1DDBE0)
Cons 630224 (0x99DD0)
Non-Pointer 361232 (0x58310)
Other 923816 (0xE18A8)
Function 41560 (0xA258)
-- Segments:
Cons 4010000800 - 401040B000
Non-Pointer 4030000800 - 403041B000
Other 4020000800 - 402040B000
Symbol 4150000800 - 4150101000
Function 4060000800 - 4060101000
Weak 4190000800 - 4190041000
================================
Total allocated 45568816 (0x2B75330), total size 178085888 (0xA9D6000)
NIL
That confused me too, both clearly work unless there is some definition of work that excludes things which have earned many people many billions of dollars. Hoaxes don't create sustainable businesses.
If you're thinking of the recent boom in machine learning specifically, there are plenty of people in ML, even people making lots of money from it, who think the concept of "artificial intelligence" is a recurring hoax, or at best an overselling aimed at people who've read more sci-fi than science. Of course plenty of people think otherwise, too, but it's not a rare view within the field.
The quote is probably 20 years old now, at the time people were pretty disillusioned about AI and OO was a craze. Stepanov had other ideas. Here's more context:
The SBCL compiler (called Python) even creates the typed methods by itself, so mostly the defgeneric line is enough.
The type hints for args and return types are purely optional, as the compiler figures it out by itself.
He is right that algorithms/methods trump data structures/objects. You always write methods with specializations on objects, not the other way round (classes with specific methods).
Very much a part of the standard, i.e. not specific to SBCL.
Think of defgeneric as the function signature and defmethod as the template specialization. Not sure why you say this is an error in CLOS. Looks fine to me.
That said, most implementations try to auto-infer the generic function metaobject when you use defmethod without defgeneric. SBCL raises a warning.
The DEFGENERIC line is wrong. It can't have a body like that. It should be something like
(defgeneric xplusone (x)
(:method (x) (+ 1 x)))
Also, generic functions are slower than regular functions (due to dynamic dispatch), so using them for type optimization would be rather counterproductive.
Unless I'm missing something, generic algorithms that work across types, and algos that work on type internals are just two separate things. And the latter probably still wants to be encapsulated in the type.
I don't see the relation to Lisp, if anything that quote about noticing the semigroup property of parallel fold algorithms speaks to Haskell or ML more than anything else.
A lot of early research into parallel algorithms and exploiting associativity for parallelism started at Thinking Machines, a Lisp supercomputer company. Hillis and Steele's 1986 paper in CACM is still one of the best introductions to the subject: http://cva.stanford.edu/classes/cs99s/papers/hillis-steele-d...
There was quite a lot research into parallel, concurrent and distributed Lisps beginning in the 80s. Thinking Machines with its SIMD computer was just one approach. At some point in time there was a lot of money available for that stuff, including custom hardware. In the US the DoD paid and you can bet that some military/intelligence applications were based on exotic multiprocessor machines running Lisp.
Even though it will probably only be used once. Simply because I don't know what types I am going to need yet... but I know it might need a few operators `+`, `-`, `*`, `/`. Once I figure that out, the code is ready to go and is as fast as anything hand written.
If somebody could ever write a great 'Haskell for C++ MetaProgrammers' book describing how you are supposed to understand binary layout, IO, and wtf those hundreds of operators mean... you would probably have a bunch of programmers saying "Oh, I guess I know Haskell".
It's also no surprise that a bunch of the STL algorithms structures were easily made parallel in C++17. A lot of developers using the STL correctly could basically change a few lines of code and switch their program from sequential to parallel.
As I was reading some of the illustrations, I began to wonder why anyone would prefer to use Lisp instead of Haskell. Can anyone mention a few advantages? Mostly, I find Haskell attractive because of the fantastic type system which allows me to write code that will fail when I'm writing or changing code. But I worry that I'm overlooking something because I can't understand why some people prefer Lisp.
Ever since I learned Lisp I feel very annoyed by languages having so much syntax. Just yesterday I was looking at a tiny piece of Haskell code and it gave me headaches. Once you realize how much more productive you can be without the cognitive load of juggling dozens of syntactic constructs in your head, it's hard to go back. And then you have the benefits of homoiconicity on top of that.
"Modern C++ has shifted focus from an emphasis on type (objects) which accommodate algorithm to an emphasis on algorithms parametrized over types."
That may be a bug, not a feature. The Boost crowd won the battle, making extremely complex templates an essential part of the language. But they may have lost the war, as C++ loses market share.
LISP backed into typing, and it shows. Both typed variables and objects are painful in LISP. By the time LISP got both, the era of LISP was over. LISP is really dead now; there hasn't been a release of GNU Common LISP ("clisp") in 7 years.
Allegro CL 10.0 released on 2015-10-05
ABCL 1.4.0 released on 2016-10-08
CCL 1.11 released on 2015-11-06
ECL 16.1.3 released on 2016-12-19
Lispworks 7.0 released on 2015-05-05
MKCL 1.1.10 released on 2017-01-18
SBCL 1.3.15 released 2017-02-28
I wish I was as dead as Lisp. The thing is still malleable and performing in the top 10 languages despite being completely off the mainstream and big guns' radar. Meanwhile most mainstream languages (C11, JS, Python, ...) have bent towards closures as a central paradigm. Not to mention advanced Python talks that are mostly CLOS MOP, and, hmm, Perl 6, which allows you to hack as much as CL. Lisp as a product is dead, but the genetic line is still flowing, mostly because of its origin as a proto-AI recursive logic vehicle.
I recently came back to write a small experiment in C++. I programmed it professionally for a decade, ending in 2002, and was pretty good at it. After the last 16 years working in Java, Scala, Clojure, and Javascript, I have two observations on C++ 11:
a) Grateful for the extensive access to algorithms, lambdas, type inference;
b) Astounded at the complexity of template meta programming.
I imagine I'll get better at it with practice. But I'm operating at about 15% of the time efficiency of, say, Scala.
There is a simple trick to using complex template meta-programming techniques correctly: Leave it up to library authors.
It sounds tongue in cheek, but I am serious. In the std library, Boost, and other sophisticated libraries there are tons of template shenanigans, and there is little to be gained for most application developers. The single thing complex template meta-programs buy you is compile-time evaluation for things in the middle of algorithms and classes.
Which is useful, but usually for things like deciding how many objects should be allocated at once or how big a working set is to keep everything in cache. These are usually the performance optimizations you care about after all the algorithmic ones have been dealt with, and they often save only a constant number of instructions. When this seemingly insane level of optimization makes sense, the author of a library will often accept the needed data as a parameter.
With constexpr, things like lookup tables can easily be computed at compile time with much saner code.
If that were true then every javascript coder would need to "understand" every javascript runtime. And every Ruby coder would need to understand C in case their libraries break. Every Java programmer would need to know the implementation of their JVM. Every ...
In practice when the tools we rely on break, we have other alternatives. We tend to start with web searches and asking questions on SO. We can try different versions or libraries. Most problems have workarounds that may or may not be applicable.
Sometimes deep knowledge helps and I will tend to advocate for it. In the case of template metaprogramming, deep knowledge gains you very little and a common breakage will add 2 extra instructions to your runtime execution. If that matters you are already watching for it. As soon as you use COM, RPC, a web API or any scripting language you have already done something that would blow this cost away for an abstraction.
Knowing enough to fix a problem doesn't always mean a deep understanding of every library and tool even in C and C++ land. There is simply too much for any 1 person or any dedicated team to know, we must rely on division of labor. That is why we find abstractions like COM, RPC, web APIs and scripting languages worthwhile, they allow someone else to handle some part of our cognitive load at the cost of runtime performance.
Why do you really need to do template metaprogramming?
Generally that is not required even in performance-sensitive code. Perhaps a few conditionals à la enable_if or direct use of SFINAE... but most everything else, not really.
Additionally: I needed to make a program to join two tables by a common subset of their columns. It's desirable in CUDA, to enhance caching locality, to work with structs of arrays rather than arrays of structs. So a pattern in Thrust is to work with tuples. You zip up your columns and then you can use them in transformations. That's nice enough. The trouble arises when you need to write code to join (and transform) table A having NA columns, by table B having NB columns, sharing some number of shared columns NC. To manipulate the tuples you end up specializing cases and you build some awful large switch statements. That is, if you're a naive rookie, like me. Then you back off, and try to learn a little meta programming. You actually can dynamically treat tuples, sort of like lists, a little bit. But finally I decided that too is a kind of madness, and I'm on my third try, this time using a custom iterator over good old-fashioned dynamic vectors of arrays, so I can sidestep tuples altogether, and still use the structure of arrays idiom.
Sooner or later you will likely be faced with having to understand code that was written by someone who had just learned about template metaprogramming and was determined to put his new-found knowledge to use.
That's gonna be the next guy who looks at this code I'm writing...
Edit: Clarification, I am not actually having myself to do original metaprogramming, but in order to diagnose problems implementing the custom iterators, I have needed to delve into, and understand, the meaning of the Thrust library code.
Oh -- and enable_if, and SFINAE -- these are terms I've learned only in the last three weeks, but yes, I would count having to learn even those concepts, to successfully use the language, as an obstacle and not for the faint of heart, right?
Generally speaking, you don't have to know them unless you're writing generic libraries in the vein of STL and Boost. If you're just using those, then SFINAE etc is the magic under the hood that makes it all "just work".
> there hasn't been a release of GNU Common LISP ("clisp") in 7 years.
CLISP had a beta release a few days ago.
Just installed it on my Mac: this is GNU CLISP 2.49.50 from 2017-03-19.
$ clisp
i i i i i i i ooooo o ooooooo ooooo ooooo
I I I I I I I 8 8 8 8 8 o 8 8
I \ `+' / I 8 8 8 8 8 8
\ `-+-' / 8 8 8 ooooo 8oooo
`-__|__-' 8 8 8 8 8
| 8 o 8 8 o 8 8
------+------ ooooo 8oooooo ooo8ooo ooooo 8
Welcome to GNU CLISP 2.49.50 (2017-03-19) <http://clisp.org/>
Copyright (c) Bruno Haible, Michael Stoll 1992, 1993
Copyright (c) Bruno Haible, Marcus Daniels 1994-1997
Copyright (c) Bruno Haible, Pierpaolo Bernardi, Sam Steingold 1998
Copyright (c) Bruno Haible, Sam Steingold 1999-2000
Copyright (c) Sam Steingold, Bruno Haible 2001-2010
Type :h and hit Enter for context help.
[1]> (lisp-implementation-version)
"2.49.50 (2017-03-19) (built 3699208367) (memory 3699208818)"
Other 'free/open source' implementations are currently more interesting, especially ECL, Clozure CL and SBCL.
On my 32-bit ARM under Linux I have the following implementations: GCL, ECL, LispWorks, SBCL, CCL, ABCL. LispWorks, CCL and SBCL are native compilers. I'd say that's enough choice.
Two people are still making source checkins, but there hasn't been a new release since 2010. It's nice to know that someone is still working on it, but they're not making it to a release version.
Whoa, a whole entire two people! On just one project?
I've been toiling all alone since mid 2009 on a great language called TXR which includes a fantastic new Lisp dialect with many original features and ideas.
$ git log --oneline | wc
3643 45798 313606
Over 3600 commits since switching to git late October 2009, after version 18.
There has to be a law. Any time someone criticizes some FOSS project for not making a new release in seven years, that's an indicator that it just happened two or three days ago.