
So you're using web apps that run sqlite? With some logic on a central server, which then updates all the web apps?


No. I mean, I have used sqlite with dbmail (not a web app) in this way sometime in the late 2000s, but if you're just going to load the table into memory anyway, you might as well skip the SQL and just record the data.

For example, I did a "sandwich store" app that listed orders as JSON. Orders were written to a log file via a PHP script, and the replication tool would copy them to the "home base" (the store itself). When the kitchen printed an order, it recorded a log line saying the order was started, and this was replicated via a different logfile back to the two web servers.

The user might submit an order and not see it right away, but I hid this using a cookie, so you might only notice if you were using two web browsers logged in as the same user. The receiver on the edge wrote out a file for each order containing its most recently seen status, so checking the status just meant asking the servers for the contents of that file. When the order was scheduled to be delivered, another log line updated those files. And so on.

Nothing really resembling a "database" here at all -- just files and memory.
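The shape of it is roughly this (a Python sketch of the idea; the original was PHP, and the file names and function names here are mine, not the app's):

```python
import json
import os
import time

LOG_PATH = "orders.log"   # append-only log; replication ships it to home base
STATUS_DIR = "status"     # one small file per order, holding the latest status

def submit_order(order_id, items):
    """Edge web server: append the order as a JSON log line."""
    line = json.dumps({"event": "order", "id": order_id,
                       "items": items, "ts": time.time()})
    with open(LOG_PATH, "a") as f:
        f.write(line + "\n")

def apply_status(order_id, status):
    """Receiver on the edge: record the most recently seen status for an
    order, so a status check is just reading one small file."""
    os.makedirs(STATUS_DIR, exist_ok=True)
    tmp = os.path.join(STATUS_DIR, order_id + ".tmp")
    with open(tmp, "w") as f:
        f.write(status)
    os.replace(tmp, os.path.join(STATUS_DIR, order_id))  # atomic swap

def order_status(order_id):
    try:
        with open(os.path.join(STATUS_DIR, order_id)) as f:
            return f.read()
    except FileNotFoundError:
        return "pending"  # status not replicated back yet

submit_order("42", ["blt"])
apply_status("42", "started")
print(order_status("42"))  # -> started
print(order_status("99"))  # -> pending
```

No database, just an append-only log on one side and a per-order status file on the other.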

Most recently, my ad server (Erlang) has an ets table (cached in dets to speed recovery) keyed on the publisher+targeting details, with the list of matching advertisers as the value. This table gets updated when home base records a configuration change (say, because an operator adds a new site or advertiser). Configuration changes look something like this:

     {{market,[{id,<<"marketid">>}]},
      {{2021,6,28},{10,40,9}},
      <<"operatoruser1">>,'nodename',
      {patch,market,
          [{patch,demands,
               [{patch,demand,
                    [{id,<<"advertiserid">>}],
                    [{patch,customers,
                         [{patch,market_customer,
                              [{customer,[...]}],
                              [{patch,...},{...}]}]}]}]}]}},
     {{usr,[{id,<<"username">>}]},
      {{2021,6,28},{9,52,21}},
      <<"operatoruser2">>,'nodename',
      {patch,usr,[{patch,accts,[{insert,<<"siteid">>}]}]}},
     {{market,[{id,<<"marketid">>}]},
      {{2021,6,28},{9,50,42}},
      <<"operatoruser2">>,'nodename',
      {patch,market,[]}},
     {{market,[{id,<<"marketid">>}]},
      {{2021,6,28},{9,49,37}},
      <<"operatoruser2">>,'nodename',
      {patch,market,
          [{patch,demands,
               [{patch,demand,
                    [{id,<<"advertiserid"...>>}],
                    [{patch,customers,
                         [{delete,market_customer,[...]},
                          {insert,market_customer,...}]}]}]}]}},
They're actually stored as binary terms in a disk_log. A process reads the disk_log and materialises the in-memory configuration for the web services and rebuilds the ets table. Erlang has a lot of tools that (perhaps unobviously) make this very easy.
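The replay idea looks roughly like this (a Python sketch with a dict standing in for the ets table; the real system stores Erlang binary terms in a disk_log, and the flat patch operations here are my own simplification of the nested patches above):

```python
# In-memory materialised configuration, keyed by (kind, id).
config = {}

def apply_change(entry):
    """Apply one logged configuration change to the in-memory state."""
    (kind, obj_id), _ts, _user, _node, ops = entry
    current = config.setdefault((kind, obj_id), {})
    for op, field, value in ops:  # assumed flat patch operations
        if op == "set":
            current[field] = value
        elif op == "insert":
            current.setdefault(field, []).append(value)
        elif op == "delete":
            current.setdefault(field, []).remove(value)

def replay(log):
    """Recovery: fold the whole log, oldest first, to rebuild the table."""
    for entry in log:
        apply_change(entry)

log = [
    (("usr", "username"), (2021, 6, 28), "operatoruser2", "nodename",
     [("insert", "accts", "siteid")]),
    (("market", "marketid"), (2021, 6, 28), "operatoruser1", "nodename",
     [("set", "status", "active")]),
]
replay(log)
print(config[("usr", "username")])  # {'accts': ['siteid']}
```

Recovery is just replaying the log from the start; caching the result (dets, in my case) only makes that faster.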

And so on.



