I don’t really understand what problem this is trying to solve and how the solution is better than std::function. (I understand the issue with crash reports and lambdas being anonymous classes, but I'm not sure how the solution improves on that, or how std::function has this problem.)
I haven’t used windows in a long time but back in the day I remember installing SumatraPDF to my Pentium 3 system running windows XP and that shit rocked
I think none of these points are actually demonstrated in the post, so I can't see how they hold.
Also, I copy-pasted the code from the post and got this:
    test.cpp:70:14: error: assigning to 'void *' from 'func0Ptr' (aka 'void (*)(void *)') converts between void pointer and function pointer
       70 |     res.fn = (func0Ptr)fn;
> test.cpp:70:14: error: assigning to 'void *' from 'func0Ptr' (aka 'void (*)(void *)') converts between void pointer and function pointer
> 70 |     res.fn = (func0Ptr)fn;
This warning is stupid. It's part of the "we reserve the right to change the size of function pointers some day so that we can haz closures, so you can't assume that function pointers and data pointers are the same size m'kay?" silliness. And it is silly: because the C and C++ committees will never be able to change the size of function pointers, not backwards-compatibly. It's not that I don't wish they could. It's that they can't.
> Note that conversion from a void * pointer to a function pointer as in:
>
>     fptr = (int (*)(int))dlsym(handle, "my_function");
>
> is not defined by the ISO C standard. This standard requires this conversion to work correctly on conforming implementations.
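A common workaround on POSIX systems is to copy the pointer bytes instead of casting directly. A minimal sketch (not from the quoted text; it assumes libm's cos as the looked-up symbol):

    #include <dlfcn.h>   // dlopen, dlsym, dlclose (POSIX)
    #include <cstring>   // std::memcpy

    int main() {
        void* handle = dlopen("libm.so.6", RTLD_LAZY);
        if (!handle) return 1;

        // Copying through std::memcpy sidesteps the ISO C/C++ objection to
        // casting void* directly to a function pointer; on POSIX both pointer
        // kinds have the same representation, so the copy is well-behaved.
        double (*cosine)(double) = nullptr;
        void* sym = dlsym(handle, "cos");
        std::memcpy(&cosine, &sym, sizeof cosine);

        if (cosine) cosine(0.0);
        dlclose(handle);
        return 0;
    }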
`res.fn` is of type `void *`, so that's what the code should be casting to. Casting to `func0Ptr` there seems to just be a mistake. Some compilers may allow the resulting function pointer to then implicitly convert to `void *`, but it's not valid in standard C++, hence the error.
Separately from that, if you enable -Wpedantic, you can get a warning for conversions between function and data pointers even if they do use an explicit cast, but that's not the default.
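To make the fix concrete, here's a minimal sketch; Func0, func0Ptr, and greet are my reconstructions of the post's types, not its actual code:

    #include <cstdio>

    using func0Ptr = void (*)(void*);

    struct Func0 {
        void* fn = nullptr;       // the function is *stored* as a data pointer
        void* userData = nullptr;

        void Call() const {
            // Cast back to the function pointer type only at the call site.
            ((func0Ptr)fn)(userData);
        }
    };

    static void greet(void* data) {
        std::printf("hello, %s\n", (const char*)data);
    }

    int main() {
        Func0 f;
        f.fn = (void*)greet;      // cast the function pointer to void* when
                                  // storing, not the void* to func0Ptr
        f.userData = (void*)"world";
        f.Call();
        return 0;
    }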
You can't just keep claiming these things without providing evidence. How much faster? How much smaller? These claims are meaningless without numbers to back it up.
Your Func thing is better than std::function the same way a hammer is better than a drill press... i.e. it's not better, because it's not the same thing at all. Yes, the hammer can do some of the same things, at a lower complexity, but it can't do all the same things.
What I'm trying to say is that being better than x means you can do all the same things as x, better. Your thing is not better, it is just different.
This is a valid point missed by many today. The mantra of "don't optimise early" is often used as an excuse to not optimise at all, and so you end up with a lot of minor choices scattered throughout the code which each suck a tiny bit of performance out of the system. Fixing any one of these is also considered worthless, as the improvement from any single change is minuscule. But added up, they become noticeable.
> Is it because I made hundreds decisions like that? Yes.
Proof needed. Perhaps your overall program is designed to be fast and avoid silly bottlenecks, and these "hundred decisions" didn't really matter at all.
But do you have actual proof for your first claim? Isn't it possible that the "constant vigilance" is optimizing that ~10% that doesn't really matter in the end?
For example, C++ can shoehorn you into a style of programming where 50% of the time is spent in allocations and deallocations, even if your code is otherwise optimal.
The only way to get that back is to not use STL containers in the "typical patterns" but to write your own containers, up to a point.
If you didn't do that, you'd see in the profiler that heap operations take 50% of the time, but with no obvious hotspot.
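A toy illustration of the diffuse cost being described (none of this is from the article):

    #include <cstdio>
    #include <vector>

    // Typical STL pattern: a fresh vector per call means heap traffic on
    // every call, smeared across the whole profile with no single hotspot.
    static std::vector<int> collectPerCall(int n) {
        std::vector<int> out;
        for (int i = 0; i < n; i++) out.push_back(i);  // may reallocate
        return out;
    }

    // Reusing a caller-owned buffer amortizes the allocations away.
    static void collectReused(int n, std::vector<int>& out) {
        out.clear();      // keeps the existing capacity
        out.reserve(n);   // allocates at most once, and only ever grows
        for (int i = 0; i < n; i++) out.push_back(i);
    }

    int main() {
        std::vector<int> buf;
        long sum = 0;
        for (int round = 0; round < 1000; round++) {
            collectReused(64, buf);            // no allocation after warm-up
            sum += buf.back();
            sum += collectPerCall(64).back();  // allocates every round
        }
        std::printf("%ld\n", sum);
        return 0;
    }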
Yours is smaller (in terms of sizeof) because std::function employs the small-buffer optimization (SBO): if the callable fits into a specific size, it's stored inline in the std::function instead of getting heap allocated. Yours needs a heap allocation for the ones that take data.
Whether yours wins or loses on memory use depends heavily on your typical closure sizes.
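For a rough sense of scale, a quick sizeof check; the Func0 layout below is my stand-in for the post's struct, and the exact std::function size varies by standard library (typically 32–64 bytes on 64-bit platforms):

    #include <cstdio>
    #include <functional>

    // Stand-in for the post's Func0: one code pointer plus one data pointer.
    struct Func0 {
        void* fn = nullptr;
        void* userData = nullptr;
    };

    int main() {
        // std::function is bigger because it reserves an inline SBO buffer
        // plus manager/invoker machinery; Func0 is always two pointers, but
        // pays with a heap allocation whenever there is captured data.
        std::printf("sizeof(std::function<void()>) = %zu\n",
                    sizeof(std::function<void()>));
        std::printf("sizeof(Func0)                 = %zu\n", sizeof(Func0));
        return 0;
    }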
It's a daily thing we all do: decide if this problem is better solved by a big chunk of code that is probably well tested but also satisfies a bunch of requirements and constraints beyond yours, or a smaller chunk of code that I can write or vendor in and has other advantages, or maybe I just prefer how it's spelled. Sometimes there's a "right" answer, e.g. you should generally link in your TLS implementation unless you're a professional TLS person, but usually it's a judgement call, and the aggregate of all those micro-decisions is a component of the intangible ideal of "good taste" (also somewhat subjective, but most agree on the concept of an ideal).
In this instance the maintainer of a useful piece of software has made a choice that's a little less common in C++ (totally standard practice in C) and it seems fine. It's on the bubble, and I'd probably default the other way, but std::function is complex and there are platforms where that kind of machine economy is a real consideration, so why not?
In a zillion-contributor project I'd be a little more skeptical of the call, but even on massive projects like the Linux kernel they make decisions about the house style that seem unorthodox to outsiders, and they have their reasons for doing so. I misplaced the link, but a kernel maintainer once raised grep-friendliness as a reason he didn't want a patch. At first I was like, nah, you're not saying the real reason, but I looked a little and indeed, the new stuff would be harder to navigate without a super well-configured LSP.
Longtime maintainers have reasons they do things a certain way, and the real test is the result. In this instance (and in most) I think the maintainer seems to know what's best for their project.
I guess the point is that the article does not prove what he did is better in any of the ways he claimed, except for the "I understand it" part.
Making changes like this while claiming they will result in faster or smaller code, without any test or comparison of before vs. after, doesn't seem like the best way to engineer something.
I think this is why the thread has seen a lot of pushback overall.
Maybe the claims are true or maybe they are not; we can't really say based on the article (though I'm guessing not really).
Yeah, it seems unlikely that the typical target machine would have either a word or cache-line size where std::function's overhead causes a spill on a realistic closure, but who knows; I wouldn't bet real money either way without a profile.
And I think it is less than ideal, as concerns the fragile and nascent revival of mainstream C++, to have this sort of gang tackle over a nitpick like this. The approach is clearly fine because it's how most every C program works.
The memes of C++ as too hard for the typical programmer and C++ programmers as pedantic know-it-all types are mostly undeserved, but threads like this I think reinforce those negative stereotypes.
The real S-Tier C++ people who are leading the charge on getting C++ back in the mindshare game (~ Herb Sutter's crew) are actively fighting both memes, and I think it behooves all of us who want the ecosystem to thrive to follow their lead.
The danger of C++ becoming unimportant in the next five or ten years is zero; C and C++ are what the world runs on in important ways.
But in 20? 30? The top people are working with an urgency I haven't seen in decades and the work speaks for itself: 23 and 26 are coming together "chef's kiss" as Opus would say.
The world is a richer place with Rust and Zig in it, but it would be a poorer place with C++ gone, and that's been the long term trend until very recently.
If the post was about Rust and it was using unsafe code and casting function pointers, then everyone would quickly jump to try and correct it all the same.