I’m pretty biased indeed. I’m lucky that I have a well-paid, full time job, and often get annoyed at “antagonistic” activists who share the same beliefs as I do, but insist more on instability in their lives and on not being able to rely on stable and sufficient income. I tend to find them aggressive and have a hard time fighting for them, even though we have the same end goal. This is definitely something I’ve been working on but need to work on a lot more!
Here are my notes on the video. Formatting may be a bit broken, so you can also find them on my website.
Advantages of a library economy are:
Based on Murray Bookchin’s The Ecology of Freedom.
Usufruct is the freedom of individuals or groups to access and use (but not destroy) common resources to supply their needs - as opposed to limiting access based on exclusive ownership.
Imagine this applied to: libraries of decor, libraries of furniture, libraries of tools. You could borrow cushions, designs, paintings, then switch things out; you could borrow a shovel for a weekend or for as long as you need it.
_A note from Alex: My hometown has an art library that belongs to the city library network. They have loads of paintings, and you can borrow 3 paintings for 3 months at a time, for free, with the only requirement being that your home is insured. (Bibliothèques de Grenoble)_
Guaranteed minimum resources to sustain life, which everyone should have access to regardless of their individual contribution to the community.
Libraries provide free access to knowledge (note from Alex: and fun!), but that’s just one component.
Libraries of consumables (food, drugs, toiletries…) might be difficult to imagine. A library economy needs dispensaries of necessities: a cooking collective, with common farming, could work to provide everyone with enough food.
An emphasis on slow fashion by diverse retailers would ensure clothing that lasts, in the style we like. For this we’d need a vast reorientation of all our priorities.
People must choose for themselves how they labour and how they spend their leisure. Nothing should be defined by, or limited to, what they contribute themselves. They should always get satisfaction and joy from what they do.
For the things that no one enjoys, find ways to rotate, gamify or transform these tasks.
Based on the 5 laws of library science, first conceived by S.R. Ranganathan in 1931.
Visualize pockets of library economies that connect with one another and end up spreading worldwide!
Here’s my answer:
At some point before or during reading, I copy the book’s table of contents into my note so that it’s easy to search. It’s often enough to build internal links from, too.
While reading, I usually take a picture of the page and insert it (then, when I process the notes, I write the actually relevant parts in my own words, so I don’t bother transcribing). I can also write comments under the photo if needed. This is my usual approach, especially for books from the library, since I won’t be able to quickly check them later if I’m unsure of something.
Sometimes I’ll also just write down the page number, which effectively forces me to go back, reread the entire page, and figure out what was so interesting there. It’s super efficient to keep things that simple, and it makes sure I don’t confuse something that merely strikes me with something that’s actually worth remembering!
In both cases though you need a processing step (personally, I do that in Obsidian). I think we should always have one, but it’s something to keep in mind :)
Thank you! I used to use Mastodon Simplified Federation which I loved, but it seems that the latest Mastodon update broke it entirely.
From the GitHub page:
The pages crawled are determined by a central server at api.crawler.mwmbl.org. They are restricted to a curated set of domains (currently determined by analysing Hacker News votes) and pages linked from those domains.
The URLs to crawl are returned in batches from the central server. The browser extension then crawls each URL in turn. We currently use a single thread as we want to make use of minimal CPU and bandwidth of our supporters.
For each URL, it first checks if downloading is allowed by robots.txt. If it is, it then downloads the URL and attempts to extract the title and the beginning of the body text. An attempt is made to exclude boilerplate, but this is not 100% effective. The results are batched up and the completed batch is then sent to the central server.
The batches are stored in long term storage (currently Backblaze) for later indexing. Currently indexing is a manual process, so you won’t necessarily see pages you’ve crawled in search results any time soon.
Ah, fair. I saw nothing on F-Droid, and out of the 6 repositories I found on GitHub, this was the only one that wasn’t in its earliest stages.