It is a practice that leads, as a consequence, to insight. Insight is not information that might be read from a book; it is experience that uses observation to arrive at understanding and transformation. You can't just "decide to have" the experience without doing the work of transforming yourself through observation. People who have gone far in the practice do tend to say that there was never any goal to begin with, and that they ended up where they started, but that's more of a metaphor than anything else. Someone who travels around the world and ends up where they started is in a very different place than someone who never left home.
Twenty minutes of searching didn't turn up any software jobs related to the Northwest Forest Plan. I'm PNW born and raised, and love old-growth forests more than pretty much anything. If I could find a SWE job doing something connected to this plan, I'd probably work for free^H^H^H^H^H^H be very grateful.
I also felt like the essay suffered a lot from overgeneralization -- Harrison happened to talk to these three people, and extrapolated way too much from that.
Well, no, at a minimum he talked to all 25 students.[^1]
Additionally, why would 1, 3, or 25 matter?
The behavior Chris described requires attending the school and visiting it often enough to view multiple cohorts.
[^1]: "When I went into the classroom at the appointed hour, the 25 students were all there ready to interview me."
[^2]: Man, I wish I'd gone there if it means you could talk like this all the time. The older I get, the more suffocated I feel by the damp blanket of adult communication. At 20 I would have said it was immoral and boring to withhold engaging deeply; at 35 I need a damn good reason to bother engaging rather than smiling and nodding.
That book mentions alpha-beta filters as sort of a younger sibling to full-blown Kalman filters. I recently had need of something like this at work, and started doing a bunch of reading. Eventually I realized that alpha-beta filters (and the whole Kalman family) are very focused on predicting the near future, whereas what I really needed was just a way to smooth historical data.
So I started reading in that direction, came across "double exponential smoothing", which seemed perfect for my use case, and as I went into it I realized... it's just the alpha-beta filter again, but with different names for all the variables :(
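To see the renaming concretely, here's a toy sketch of my own (parameter names and data are illustrative, not from any of the books mentioned): with a step size of 1, an alpha-beta filter and double exponential smoothing produce identical outputs once you relate the trend gains by beta_ab = alpha * beta_holt.

```python
# Toy demonstration (dt = 1): the two filters are the same computation
# under the gain mapping beta_ab = alpha * beta_holt.

def alpha_beta(measurements, alpha, beta, x0=0.0, v0=0.0):
    x, v = x0, v0               # position and velocity estimates
    out = []
    for y in measurements:
        x_pred = x + v          # predict one step ahead
        r = y - x_pred          # residual (innovation)
        x = x_pred + alpha * r  # correct position
        v = v + beta * r        # correct velocity
        out.append(x)
    return out

def double_exp_smoothing(measurements, alpha, beta, s0=0.0, b0=0.0):
    s, b = s0, b0               # smoothed level and trend
    out = []
    for y in measurements:
        s_prev = s
        s = alpha * y + (1 - alpha) * (s + b)     # level update
        b = beta * (s - s_prev) + (1 - beta) * b  # trend update
        out.append(s)
    return out

data = [1.0, 2.1, 2.9, 4.2, 5.1, 5.8]
a, bh = 0.5, 0.3
ab = alpha_beta(data, alpha=a, beta=a * bh)
holt = double_exp_smoothing(data, alpha=a, beta=bh)
assert all(abs(p - q) < 1e-9 for p, q in zip(ab, holt))
```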
I can't help feeling like this entire neighborhood of math rests on a few common fundamental theories, but because different disciplines arrived at the same systems via different approaches, they end up sounding a little different and the commonality is obscured. Something about power series, Euler's number, gradient descent, filters, feedback systems, general system theory... it feels to me like there's a relatively small kernel of intuitive understanding at the heart of all that stuff, which could end up making glorious sense of a lot of mathematics if I could only grasp it.
Incidentally, this is why people miss the mark when they get mad about mathematicians using single-letter variable names. Short names let you focus on the structure of equations and relationships, which lets you more easily pattern match and say "wait, this is structurally the same as X other thing I already know, but with different names". It's not about saving paper or making it easier to write (it is not easier to write Greek letters with super/subscripts in LaTeX on an English keyboard than it would be to use words). It is about transmitting a certain type of information to the reader that is otherwise very difficult to transmit.
While it uses letters so it looks vaguely like writing, math notation is very pictorial in nature. Long words would obscure the pictures.
I disagree. Single-letter variables are meaningless. In order to get the big picture, you have to remember what all those meaningless letters stand for. Using meaningful variables would make this easier.
If you work with them long enough, reading them becomes second nature, and then they are easier to manipulate and compose. The rest of the context is the background knowledge needed to understand the pithy core equations. Papers are for explaining concepts; equations are for symbolic manipulation. Meaningful variable names would be a middle ground, good at neither, except to help someone unfamiliar with the subject understand the equation, and a lot of the symbols are so abstract that they either need to be explained in more detail elsewhere or would be arbitrarily named anyway.
If you're writing an abstract/general mathematical function, then sure: single letters. If you're doing more business-logic kind of stuff (iterating through a list of db/orm objects or processing a request body), then the names should be longer.
Often the actual meaning of the symbols is subordinate to the point you're trying to convey. For example, I can tell you that `integrate(boundary(Region), form) = integrate(Region, differentiate(form))`, which is great and all, but I might write `<∂M|w> = <M|dw>` because what I'm trying to tell you is that you should think of these things as a dual pairing of vector spaces (via integration) and that ∂ and d are somehow adjoint. They're both Stokes' theorem, but the emphasis is different, and in either case the hard part is the mountain of work it takes to define what the words even mean (limits, and integrals, and derivatives, and vectors, and covectors, and manifolds, and tangent spaces, and vector fields, and covector fields, and partitions of unity, and symmetric and alternating forms, and exterior derivatives, etc., all so you can finally write one equation, which really just says that all the swirlies inside a region cancel out, so if you want to add them all up, you can just add up the outer swirly).
The thing about math is you need to be comfortable viewing the same concept through a bunch of different lenses, and various notations are meant to help you do that by emphasizing different aspects of "the picture" you're looking at.
Ok, I can accept that. At the same time, my impression is that mathematicians always use single-letter variables.
It's like either they're not clear who their audience is or they're afraid to get off the beaten path. If they're explaining a classic algorithm, they use the common, single-letter variables instead of replacing them with meaningful names.
IMO your comment doesn't address the point made in its parent comment. To make the point again in different words:
- Using long descriptive variable names would give them meaning, and make the particular equation/expression easier to understand or apply.
- Using short single-letter variable names allows you to forget the meaning of the variables and see the underlying structure, thus making the expression easier to connect to other situations (with completely unrelated meanings) that happen to have the same underlying structure. (The letters being meaningless, or at least not carrying their meaning so strongly, is a feature, not a bug.)
(Another way of seeing the distinction is whether you consider the equation to be the final result, to be used and applied, or as a starting point, to be manipulated further.)
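To make that concrete with a toy example of my own (none of these names come from the thread): written with short names, it's obvious that all three of these updates are the same formula, s + a*(y - s), even though each field has its own vocabulary for it.

```python
# Three fields, one structure: each of these is s + a * (y - s).
def ema(prev, price, a):         # finance: exponential moving average
    return prev + a * (price - prev)

def low_pass(prev, sample, a):   # DSP: first-order low-pass filter
    return prev + a * (sample - prev)

def lerp(start, end, a):         # graphics: linear interpolation
    return start + a * (end - start)
```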
You're looking for the theory of linear (or nonlinear) dynamical systems. Unfortunately it's not one kernel of intuition backed by consistent notation; it's many kernels with no consistency at all. A good course on controls and signals/systems will beat those intuitions into you, and you'll learn the math/parlance without getting attached to any one notational convention.
The real intuition is "everything is a filter." Everything else is about analysis and synthesis of that idea.
Maybe check out Probabilistic Robotics by Sebastian Thrun, Wolfram Burgard, and Dieter Fox. It has a coherent Bayesian formulation with consistent notation across many Kalman-related topics. Also, with the rise of AI/ML, classic control-theory ideas are being merged with reinforcement learning.
Thanks for the recommendation! It would never have occurred to me to look at robotics, but I can understand why that's very relevant.
I read Feedback Control for Computer Systems not too long ago, which felt like yet another restatement of the same ideas; I guess that counts as "classic control theory".
If Q and R are constant (as is usually the case), the gain quickly converges, such that the Kalman filter is just an exponential filter with a prediction step. For many people this is a lot easier to understand, and it even matches how the filter is typically used: Q and R are manually tuned until it “looks good” and never changed again. Moreover, there is then just one gain to tune instead of the two separate quantities Q and R.
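Here's a tiny scalar sketch of that convergence (illustrative numbers, not anyone's real tuning): iterate the 1-D predict/update variance recursion with fixed Q and R and watch the gain settle.

```python
# 1-D Kalman variance recursion with constant q (process noise) and
# r (measurement noise): the gain k converges to a fixed value, after
# which the state update x += k * (y - x) is just exponential smoothing.
def gain_history(q, r, p0=1.0, steps=20):
    p, gains = p0, []
    for _ in range(steps):
        p = p + q            # predict: uncertainty grows by process noise
        k = p / (p + r)      # Kalman gain
        p = (1 - k) * p      # update: uncertainty shrinks
        gains.append(k)
    return gains

gains = gain_history(q=0.01, r=1.0)
print(round(gains[0], 4), round(gains[-1], 4))  # settles after a few steps
```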
Hey, I had very similar thoughts many years ago! The trick is: yes, many filters boil down to alpha/beta, and the Kalman filter is (edit: can be) really a way to generate those constants, given a (linear) model (a set of equations describing the dynamics, i.e. the future states) and good knowledge of the noise (variance) in the measurements. So if the measurements always have the same noise, it will just reduce the constants over time; it is only really useful when the measurement accuracy can be determined well and also changes a lot.
Interesting. Are you characterizing Kalman filters mostly as systems of control/refinement on top of alpha-beta filters?
I do feel like the core of it is essentially exponential/logarithmic growth/decay, with the option to layer multiple higher-order growth/decay series on top of one another. Maybe that's the gist...
When you start dealing with linear systems and disturbances, you end up with basically matrix math and covariance in some form or another.
The thing about the Kalman filter is that it's well known and exists in many software packages (just like PID), so it's fairly easy to implement. But because noise is often not Gaussian, and systems are often not linear, it's more of a "works well enough" solution for most applications.
I attended Deep Springs 1996/97. The school goes through semi-regular cultural oscillations between "mean" and "nice"; between what we'd now call toxic masculinity, and sort of a peace-and-love hippie friendliness. Students play a large role in admitting the incoming class, and tend to admit people like them, until the culture swings too far in one direction and they start correcting.
It sounds like this guy visited during a "mean" period, which is too bad. I attended during an upswing into a "nice" period, and it felt well balanced. My application interview was one of the most memorable experiences of my life -- I'd never had anyone pay that kind of close attention to anything I'd written, or to what I thought. It woke me all the way up (in a sense, I'd gone through most of my teenage years asleep) and was enormously bracing. When they finally let me out, I emerged into the main room, where some guy reading on a sofa looked up and asked, "How was it?" I don't remember exactly what I said, but it communicated something along the lines of "holy shit, that was a thrill!" I still suspect he passed my reaction along to the applications committee and that it played a part in my getting accepted.
So far as I know, no one during my two years visited the Cottontail Ranch :)
I am 100% in on OsmAnd and the whole ecosystem, but I still curse out loud every time I have to enter a street address into its address "parser". I know it's a hard problem, but it's horrible. None of the app's other shortcomings are meaningful to me.
Didn't OsmAnd do something strange to guess addresses instead of using proper geocoding? I seem to remember plenty of addresses that are actually in OSM, and that Nominatim has no trouble finding, which OsmAnd either cannot find or places in wildly different locations.
Addresses in OSM are in expanded format, e.g. "100 south 35th street". What you’re entering is likely "100 s 35th st". OsmAnd looks for exact string matches, so it won’t find the address.
Addresses in OSM are divided into their constituent parts, so you have separate house number, street (or place), city, suburb, country, etc.
Of course, you still need country-specific code to account for all the various abbreviations, e.g. Str. in German, or the cardinal directions and road-type abbreviations common in the US (blvd, hwy, dr, etc. -- I've recently fixed a bunch of those in OSM, and it's quite a list). And you need to check alt_name, local names, and names in other languages in OSM as well.
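As a rough illustration of what that country-specific code looks like (a hypothetical, US-only table I made up, nowhere near complete): expand the abbreviations on the query side before comparing, so the abbreviated input matches OSM's expanded form.

```python
# Hypothetical US-only abbreviation table; a real geocoder needs one of
# these per country (plus alt_name and multilingual name handling).
ABBREV = {
    "n": "north", "s": "south", "e": "east", "w": "west",
    "st": "street", "blvd": "boulevard", "hwy": "highway", "dr": "drive",
}

def normalize(address: str) -> str:
    """Lowercase, strip periods, and expand known abbreviations."""
    words = address.lower().replace(".", "").split()
    return " ".join(ABBREV.get(w, w) for w in words)

# The abbreviated query now matches the expanded form stored in OSM.
assert normalize("100 S 35th St.") == normalize("100 south 35th street")
```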
This is incredibly helpful, thank you! It's going to save me hours of research.
I have a Das Keyboard that I am very fond of, but over the years have definitely started to wonder why a) the rows are staggered, and b) it isn't split. There's just so little reason to stick to this form-factor.
As a teacher, I find types.SimpleNamespace wonderful for helping people bridge from their understanding of dicts (a fundamental) to classes/instances (a little less fundamental).
However, I almost never use SimpleNamespace in real code. Dicts offer extra capabilities (get, pop, popitem, clear, copy, keys, values, items). Classes/instances offer extra capabilities: unique and shared keys in distinct namespaces, instances knowing their own type, and an elegant transformation of method calls ``a.m(b, c)`` --> ``type(a).m(a, b, c)``. Dataclasses and named tuples have their advantages as well. In practice, it is almost never the case that SimpleNamespace beats one of the alternatives.
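For illustration, here's the bridge I mean (a toy example of my own): the same record as a dict, a SimpleNamespace, and a dataclass, moving from key lookup to attribute access to a real type.

```python
from dataclasses import dataclass
from types import SimpleNamespace

point_dict = {"x": 1, "y": 2}           # dict: key lookup, keys()/items()
point_ns = SimpleNamespace(x=1, y=2)    # namespace: attribute access only

@dataclass
class Point:                            # class: type identity and methods
    x: int
    y: int

    def norm_sq(self) -> int:
        # Point.norm_sq(p) and p.norm_sq() are the same call -- the
        # method-call transformation mentioned above.
        return self.x ** 2 + self.y ** 2

assert point_dict["x"] == point_ns.x == Point(1, 2).x
```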
All of my passwords are in the "pass" command-line utility, where they're encrypted with gpg. I added my brother's gpg key as an encryption target, and his ssh key onto the server where the git repo is stored, locked down to the git-shell command. In the event of my untimely demise, my wife tells him the URL of the git repo.
I personally wouldn’t go to the extent of using CLI tools, as my next of kin and family members aren’t at all technical. A printout of my 1Password emergency kit in a safe deposit box is probably doable, but then what: 598 passwords to projects on an old git repo on an ancient Synology NAS, or a throwaway account for some random website?
There is probably a lot to be said for curating your accounts to assist those who will sift through your estate.
The ability to pass on your information legacy is important, and complicated. The trope of a mother going through her own mother’s papers and finding a long-lost love letter - or an unfinished manuscript - is equally plausible today. What secrets lurk in your DMs, Messenger, and Signal history? Does your draft blog post actually contain some amazingly insightful observation?
Maybe your family’s memory of you could be enriched with this information? …maybe not?
At the end of (your) day(s), you might take those secrets to your grave, and it’s unlikely that your tombstone will include your GUID, or the Glacier storage URI where your online self will remain until the TOS states otherwise.
REST In Blob
EDIT: RAM-mento Moar-i (sorry, got carried away... couldn’t help myself :)
>What secrets lurk in your DMs, Messenger and Signal history? Does your draft blog post actually contain some amazingly insightful observation?
Things that need to stay secret. That's why they are secrets. If my passing means that these things are no longer accessible to anyone ever again? Perfect. Works as intended.