

I mainly use it on fried potatoes, and I’m open to experiments, perhaps with lentil salad. I am familiar with Sarson’s and managed to find another bottle of that, but I would like to try more varieties of malt vinegar. I saw a small bottle of lambic-based vinegar in a speciality shop but didn’t buy it because the price was a bit high (€14).
I’ve only been there once or twice during off hours: once on a Sunday (normally closed all day, so open only to after-hours members) and once in the evening. It was quiet as I recall, though I haven’t used it enough to say for sure. It doesn’t seem overly busy after hours.
W.r.t. alcohol: the rules forbid eating and drinking in the library, with water as the one exception. I don’t know whether they review the video without cause, but if someone breaks the rules, their after-hours access is terminated.
In Brussels there is a library that’s “open” as late as 22:00. There’s an after-hours program: you register for after-hours access, sign an agreement, and your library card then unlocks the door. Staff are gone during off hours but cameras are on. Members are not allowed to let non-members in (no tailgating, not even your friends).


As far as we know, Google is not discarding any data; the crawler still has to store a copy of the text for the index. The only certainty we have is that Google is no longer sharing it.


Here’s the heart of the not-so-obvious problem:
Websites treat the Google crawler like a first-class citizen. Paywalled sites give Google free, junk-free access, and then Google’s search results send people to pages that treat humans worse than the crawler. So Google users are led to sites they cannot actually access. The problem is access inequality: Google effectively refers people to sites that are not publicly accessible.
I do not want to see search results I cannot access. The Google cache was the equalizer that neutralized that problem. Now the problem is back in our face.


From the article:
“was meant for helping people access pages when way back, you often couldn’t depend on a page loading. These days, things have greatly improved. So, it was decided to retire it.” (emphasis added)
Bullshit! The web gets increasingly enshittified and content is less accessible every day.
For now, you can still build your own cache links even without the button, just by going to “https://webcache.googleusercontent.com/search?q=cache:” plus a website URL, or by typing “cache:” plus a URL into Google Search.
You can also use 12ft.io.
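If you want to script this, here’s a minimal sketch (Python; the helper names are mine, and whether the webcache endpoint keeps answering is entirely up to Google) that just builds the two kinds of links described above:

```python
from urllib.parse import quote

CACHE_PREFIX = "https://webcache.googleusercontent.com/search?q=cache:"

def google_cache_link(url: str) -> str:
    # Prepend Google's webcache endpoint to the page URL, as described above.
    # quote() leaves ":" and "/" intact so the URL stays readable.
    return CACHE_PREFIX + quote(url, safe=":/")

def twelve_ft_link(url: str) -> str:
    # 12ft.io works the same way: prepend its hostname to the page URL.
    return "https://12ft.io/" + url

print(google_cache_link("https://example.com/some-article"))
print(twelve_ft_link("https://example.com/some-article"))
```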
Cached links were great if the website was down or quickly changed, but they also gave some insight over the years about how the “Google Bot” web crawler views the web. … A lot of Google Bot details are shrouded in secrecy to hide from SEO spammers, but you could learn a lot by investigating what cached pages look like.
Okay, so there’s a more plausible theory about the real reason for this move. Google may be trying to increase the secrecy of how its crawler functions.
The pages aren’t necessarily rendered the way you would expect.
More importantly, they don’t render the way authors expect. And that’s a fucking good thing! It’s how caching helps give us some escape from enshittification. From the 12ft.io FAQ:
“Prepend 12ft.io/ to the URL webpage, and we’ll try our best to remove the popups, ads, and other visual distractions.”
It also circumvents #paywalls. No doubt there’s legal pressure on Google from angry website owners who want to force their content to come bundled with the garbage.
The death of cached sites will mean the Internet Archive has a larger burden of archiving and tracking changes on the world’s webpages.
The possibly good news is that Google’s role shrinks a bit, and any Google shrinkage is a good outcome overall. But there is a concerning relationship between archive.org and Cloudflare. I depend heavily on archive.org, largely because Cloudflare has broken ~25% of the web. The day #InternetArchive becomes Cloudflared itself, we’re fucked.
We need several non-profits to archive the web in parallel redundancy with archive.org.


Bingo. When I read that part of the article, I felt insulted. People see the web getting increasingly enshittified and less accessible. The increased need for cached pages has justified the existence of 12ft.io.
~40% of my web access is now dependent on archive.org and 12ft.io.
So yes, Google is obviously bullshitting. Clearly there is a real reason for nixing cached pages and Google is concealing that reason.


#digitalExclusion
Shame this is posted on a centralized Cloudflare instance, which causes problems for people using Tor, VPNs, CGNAT, etc.:


Among the primary benefits: no commute, flexible work schedules and less time getting ready for work, according to WFH Research.
They forgot one: being able to secretly work three overlapping full-time jobs and triple your income.
If you search, you’ll find several privacy-abusing ways to do that via enshittified, exclusive walled gardens, which share the site you’re asking about with US tech giants and treat users of VPNs, Tor, and CGNAT with hostility.
I only listed two bad ones (the first two), but when you search, the first dozen results are shit. What could be shittier than being directed to CAPTCHAs and other exclusive bullshit in the course of trying to troubleshoot a problem?
Also, the community we’re in here is “nostupidquestions”.
There’s also an onion one but I lost track of it.
I was thinking about doing that. I read that mother of vinegar is not necessary, but it speeds up the process. I also read that results are only good with certain beers like brown ales… and I think IPA was given as an example of a bad result (I’m assuming due to the hoppy bitterness).