One thing that distinguishes macOS here is that the Mach kernel has the concept of “vouchers,” which help the scheduler understand logical calls across IPC boundaries. So if you have a high-priority (UserInitiated) process, and it makes an IPC call out to a daemon that usually runs at low background priority, the high-priority process passes a voucher to the daemon, which allows the daemon’s IPC-handling thread to run at high priority (and thus access P-cores) for as long as it holds the voucher.
This lets Apple architect things as small, single-responsibility processes, but make their priority dynamic, such that they’re usually low-priority unless a foreground user process is blocked on their work. I’m not sure the Linux kernel has this.
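The voucher machinery itself is mostly private and plumbed through XPC/libdispatch, but the QoS classes it carries are public API. As a minimal, hedged sketch (macOS only; handle_request is a hypothetical daemon entry point), a daemon thread gets bumped to UserInitiated while a foreground client is blocked on it, and drops back to Background afterwards:

    // macOS: adjust the calling thread's QoS class.
    // The real voucher propagation is done for you by XPC/libdispatch;
    // this only shows the priority bands the scheduler maps to P-/E-cores.
    #include <pthread/qos.h>

    static void handle_request(void) {
        // Run at UserInitiated while a foreground app is blocked on us...
        pthread_set_qos_class_self_np(QOS_CLASS_USER_INITIATED, 0);
        /* ... do the work the client is waiting for ... */

        // ...then drop back to Background once the request is done.
        pthread_set_qos_class_self_np(QOS_CLASS_BACKGROUND, 0);
    }

    int main(void) {
        handle_request();
        return 0;
    }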
That is actually quite simple and nifty. It reminds me of the four priorities RPC requests can have within the Google stack: 0 being “if this fails, the user gets a big fat error,” up to 3, “we don’t care if this fails, because we’ll run the analysis job again in a month or so.”
IIRC in macOS you do need to pass the voucher, it isn’t inherited automatically. Linux has no knowledge of it, so first it has to be introduced as a concept and then apps have to start using it.
> Now, those 600 processes and 2000 threads are blasting thousands of log entries per second, with dozens of errors happening in unrecognizable daemons doing thrice-delegated work.
This is the kind of thing that makes me want to grab Craig Federighi by the scruff and rub his nose in it. Every event that’s scrolling by here, an engineer thought was a bad enough scenario to log it at Error level. There should be zero of these on a standard customer install. How many of these are legitimate bugs? Do they even know? (Hahaha, of course they don’t.)
Something about the invisibility of background daemons makes them like flypaper for really stupid, face-palm level bugs. Because approximately zero customers look at the console errors and the crash files, they’re just sort of invisible and tolerated. Nobody seems to give a damn at Apple any more.
You don't need them to be sent to Apple. And if errors in console get sent to Apple, it's surely filtered through a heavy suppression list. You can open the Errors and Faults view in Console on any Mac and see many errors and faults every second.
They could start attacking those common errors first, so that a typical Mac system has no regular errors or faults showing up. Then, you could start looking at errors which show up on weirdly configured end user systems, when you've gotten rid of all the noise.
But as long as every system produces tens of thousands of errors and faults every day, it's clear that nobody cares about fixing any of that.
I wouldn't call UBI a "game plan" so much as a thing people can point to in order to justify their actions to themselves. It helps you pretend you're not ruining people's lives, because you can point to UBI as the escape hatch that will let them continue to have an existence. It's not surprising that so many in the tech industry are proponents of UBI: it helps them sleep at night.
Never mind that UBI has never actually existed, it probably never will exist, and it's very, very likely that it won't even work.
People need to face the possibility that, the way we're headed, we will destroy people's way of life, and not just wave their hands and pretend that UBI will solve everything.
(Edited to tone back the certainty in the language: I'm not actually sure whether AI will be a net positive or negative on most people's lives, but I just think it's dishonest to say "it's ok, UBI will save them.")
I'm only "in the tech industry" in the literal sense, not in the cultural sense. I work in academia, making programs for professors and students, and I think the stuff "the tech industry" is doing is as rotten as you appear to.
UBI has never existed because the level of production required to support it has only just started to exist. (It's possible that we're actually not quite there, but that's something we can only determine by trying it out—and if we're not, then I'm 100% confident we can get there with further refinement of existing processes.) If we have the political will to actually, genuinely do UBI—enough to support people's basic needs of food, clothing, shelter, and a little bit of buffer, without any kind of means testing or similar requirements—then it's very, very likely that it will work. All the pilot programs give very positive data.
I'm not pushing UBI because I think it's a fix to the problem of automation. I'm pushing UBI because I think it's the fulfillment of the promise of automation.
When I developed D, a major priority was string handling. I was inspired by Basic, which had very straightforward, natural strings. The goal was to be as good as Basic strings.
And it wasn't hard to achieve. The idea was to use length-delimited strings rather than 0-terminated ones. That means slices of strings are themselves strings, which is a superpower: no more constantly allocating memory for a slice and then keeping track of that memory.
Length-delimited strings also dramatically sped up string manipulation: one no longer has to scan a string to find its length. This is also a big deal for memory caching.
Static strings are length delimited too, but also have a 0 at the end, which makes it easy to pass string literals to C functions like printf. And, of course, you can append a 0 to a string anytime.
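For C programmers, the core idea looks roughly like this (a sketch with hypothetical names; in D, the pointer and length travel with the array natively):

    #include <stddef.h>
    #include <stdio.h>

    // A length-delimited string: pointer + length, no terminator needed.
    typedef struct {
        const char *ptr;
        size_t len;
    } str;

    // Slicing is just pointer arithmetic: no allocation, no copying,
    // and the slice is itself a first-class string.
    static str slice(str s, size_t start, size_t end) {
        return (str){ s.ptr + start, end - start };
    }

    int main(void) {
        str hello = { "hello, world", 12 };
        str world = slice(hello, 7, 12);
        // Length is O(1); printing uses the precision specifier since
        // there is no guaranteed 0 terminator.
        printf("%.*s\n", (int)world.len, world.ptr);  // prints "world"
        return 0;
    }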
The C++ std::string is both very complicated mechanically and underspecified, which is why Raymond Chen's article about std::string has to explain three different types (one for each of the three popular C++ stdlib implementations) and still got some details wrong resulting in a cycle of corrections.
So that wouldn't really fit C very well and I'd suggest that Rust's String, which is essentially just Vec<u8> plus a promise that this is a UTF-8 encoded string, is closer.
I agree on the former two (std::string and smart pointers) because they can't be nicely implemented without some help from the language itself.
The latter two (hash maps and vectors), though, are just compound data types that can be built on top of standard C. All it would need is to agree on a new common library, more modern than the one designed in the 70s.
I think a vec is important for the same reason a string is: being able to reliably get the length, and having standardized ways to push/pop that don’t require manual bounds checking and calls to realloc.
Hash maps are mostly only important because everyone ought to standardize on a way of hashing keys.
But I suppose they can both be “bring your own”… to me it’s more that these types are so fundamental and so “table stakes” that having one base implementation of them guaranteed by the language’s standard lib is important.
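To illustrate, a minimal sketch of such a vec in plain C (names hypothetical). Everyone ends up writing some version of this; the point of a standard one is that nobody would have to:

    #include <stdlib.h>

    // A growable vector of ints: length and capacity travel with the
    // data, and push handles realloc and bounds checking internally.
    typedef struct {
        int *data;
        size_t len, cap;
    } vec_int;

    static int vec_push(vec_int *v, int x) {
        if (v->len == v->cap) {
            size_t new_cap = v->cap ? v->cap * 2 : 8;
            int *p = realloc(v->data, new_cap * sizeof *v->data);
            if (!p) return -1;   // caller sees the failure; nothing leaks
            v->data = p;
            v->cap = new_cap;
        }
        v->data[v->len++] = x;
        return 0;
    }

    static void vec_free(vec_int *v) {
        free(v->data);
        *v = (vec_int){0};
    }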
You can surely create a std::string-like type in C, call it "newstring", and write functions that accept and return newstrings, and re-implement the whole standard library to work with newstrings, from printf() onwards. But you'll never have the comfort of newstring literals. The nice syntax with quotes is tied to zero-terminated strings. Of course you can litter your code with preprocessor macros, but it's inelegant and brittle.
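For illustration, the macro hack in question might look something like this (newstring and NS are hypothetical names):

    #include <stddef.h>

    typedef struct {
        const char *ptr;
        size_t len;
    } newstring;

    // sizeof on a string literal includes the trailing 0, so the length
    // is computed at compile time. But it only works on actual literals:
    // pass a char* and sizeof silently yields the pointer size instead,
    // which is exactly the brittleness complained about above.
    #define NS(lit) ((newstring){ (lit), sizeof(lit) - 1 })

    // usage:
    // newstring greeting = NS("hello");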
Because C wants to run on bare metal, an allocating type like C++ std::string (or Rust's String) isn't affordable for what you mean here.
I think you want the string slice reference type, what C++ calls std::string_view and Rust calls &str. This type is just two facts about some text: where it is in memory and how long it is (or, equivalently, where it ends; storing the length is in practice often slightly faster on real machines, so if you're making a new one, do that).
In C++ this is maybe non-obvious because it took until C++17 for the language to get this type (WG21 are crazy), but this is the type you actually want as a fundamental, not an allocating type like std::string.
Alternatively, if you're not yet ready to accept that all text should use UTF-8 encoding (and maybe C isn't ready for that yet), you don't want this type; you just want byte slice references: Rust's &[u8] or C++'s std::span<char>.
Automatic memory accounting: construct/copy/destruct. You can't abstract these in C. You always have to call i_copied_the_string(&string) after copying the string, and you always have to call the_string_is_out_of_scope_now(&string) just before it goes out of scope.
For many string operations, such as appending, inserting, and overwriting, the memory management can be made automatic in C as well, and I think this is the main advantage. Only the automatic free at scope end does not work (without extensions).
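A sketch of that in C (hypothetical names): the append manages its own memory internally, but the final free at scope end is still the programmer's job:

    #include <stdlib.h>
    #include <string.h>

    // An owning, growable string whose append handles allocation itself.
    typedef struct {
        char *ptr;
        size_t len, cap;
    } dstr;

    static int dstr_append(dstr *s, const char *text, size_t n) {
        if (s->len + n > s->cap) {
            size_t new_cap = (s->len + n) * 2;
            char *p = realloc(s->ptr, new_cap);
            if (!p) return -1;
            s->ptr = p;
            s->cap = new_cap;
        }
        memcpy(s->ptr + s->len, text, n);
        s->len += n;
        return 0;
    }

    // The part C cannot automate: this call must be written by hand
    // before the string goes out of scope.
    static void dstr_free(dstr *s) {
        free(s->ptr);
        *s = (dstr){0};
    }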
You can make strings (or bignums or matrices) more convenient than the C default but you can never make them as convenient as ints, while in C++ you can.
Yes, but I do not think this is a good thing. A programming language has to fulfill many requirements, and convenience for the programmer is not the most important.
Nit: please don’t push to my browser history every time I expand one of the sections… I had to press my browser’s back button a dozen or so times to get back out of your site.
You can also hold down the back button to get a menu of previous pages in order to skip multiple back button presses. (I still agree with your point and you might already know that. Maybe it helps someone.)
Playing music doesn’t require unlocking though, at least not from the Music app. If YouTube requires an unlock that’s actually a setting YouTube sets in their SiriKit configuration.
For reading messages, IIRC it depends on whether you have text notification previews enabled on the lock screen (they don’t document this anywhere that I can see.) The logic is that if you block people from seeing your texts from the lock screen without unlocking your device, Siri should be blocked from reading them too.
Edit: Nope, you’re right. I just enabled notification previews for Messages on the lock screen and Siri still requires an unlock. That’s a bug. One of many, many, many Siri bugs that just sort of pile up over time.
It’s so great when the files on the navigator pane aren’t sorted, and then if you right-click sort, it rewrites half your pbxproj file and you get merge conflicts everywhere. So then nobody sorts the files because they don’t want to deal with it. Why can’t the sorting be a view thing that’s independent of the contents of the project file? Who knows.
When I used it in a team, I had to write a build step that would fail the build if the pbxproj file wasn’t sorted. (Plus a custom target that would sort it for you.) It was the only way to make sure it never got unsorted in the first place.
Sudo’s hostname lookup is infuriating too, because if my system’s DNS is broken, I get to wait 60 seconds for sudo to work, during which I can’t even Ctrl+C to cancel!
(It has to do with sudoers entries having a host field, since the file is designed to be deployed to multiple servers, each of which may want different sudoers rules. It’s truly 90s-era software.)
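For reference, the host is the second field of a sudoers entry; with hypothetical hostnames, a fleet-wide file might look like:

    # One sudoers file deployed to every server; the second field picks
    # which host each rule applies to.
    alice   webserver01 = (ALL) ALL
    alice   dbserver01  = (root) /usr/sbin/service postgresql restart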
Interesting. How did it work getting your photos off of iCloud? Does Apple give you a good way to get an archive of all of your photos? That is, the original quality photos, without manually downloading them individually? (I currently have 446 GB of photos in iCloud…)
The Immich iOS app supports backing up photos directly from iCloud in original resolution, with all the EXIF data included. I had 230 GB of photos myself, and I left the phone on the charger overnight with the app running in the foreground and screen locking disabled. In the morning everything was imported.
Some people have instead set Photos app on a Mac to download original photos from the iCloud library and then moved the files directly into the server. I have not personally tried this method though.
> Immich iOS app supports backing up photos directly from iCloud in original resolution
wait that is just crazy!!! Dang my dad is going to flip out when I tell him about this. He's got like 1.5 TB of photos in iCloud and has been searching for a way to get them off. And we're so close to our family storage limit that he gets mad at me when I text him pictures hahaha
There is a community-supported CLI program called immich-go that directly supports reading in iCloud and Google takeout archives, as well as local directories. It works great, and has gobs of import options to set up albums and tags.
[ https://github.com/simulot/immich-go ]
That's the worst service I've ever seen. It asks you the size of each zip file, and I said 50 GB at first. And I couldn't download it, because the connection was so unstable: no way to resume, and every 20-30 minutes it failed in the middle. Chrome, Firefox, and Safari were all the same. I tried from a GCE VM as well, to see if it was my network's problem, but that didn't help.
I had to request it again with 2 GB chunks, and was finally able to download the files. But only one by one. And after downloading 3-5 files, I had to log in again, as their session expires so frequently.
I had to do that for days, and then the download expired. Oh my god. I had to request it again. And you know what? Their file list wasn't deterministic, so I had to download from the beginning again. lol
I finally made it, and I swear I will never use any cloud service from Apple.
The same issue has been going on for me with just about any big download from Apple servers. Could be iCloud. Could be Xcode. Doesn’t matter. It will randomly fail mid-transfer and require manual intervention to restart. It’s been this way for years.
iCloud Photos Downloader isn’t user-friendly or pretty, but I finally managed to rip my entire collection without having to install any Apple software.