
Weaving The Memory Puzzle Into a Tapestry

Hello there. Here is Digitally Literate, issue #315.

This will be my last post for 2021. I’ll take the remainder of the month off and spend time tinkering with my website, this newsletter, and my social media feeds. Some interesting things coming soon. 🙂

If you haven’t already, please subscribe if you would like this newsletter to show up in your inbox. Reach out and say hello at hello@digitallyliterate.net.

Springboard: the secret history of the first real smartphone

A decade before Steve Jobs introduced the iPhone, a tiny team of renegades imagined and tried to build the modern smartphone. Nearly forgotten by history, a little startup called Handspring tried to make the future before it was ready. This is the story of the Treo.

Read more here.

Why books don’t work

I’m spending a lot of time iterating my information consumption habits. As part of this exploration, I’ve come across the writings of Andy Matuschak.

This post details the reasons why Matuschak believes that books, textbooks, and lectures, the traditional vessels for information in our lives, do not work.

In the post linked above, Andy asks how we might design mediums that do the job of a non-fiction book but actually work reliably.

To show proof of concept, Matuschak worked with Michael Nielsen on Quantum Country, a “book” on quantum computation. Please be advised, reading this “book” doesn’t look like reading any other book. The explanatory text is tightly woven with brief interactive review sessions, as the authors help weave the memory puzzle into a larger tapestry.

I recognize that many readers of this newsletter and my work will be equally infuriated and intrigued by the points made in this post. This is one of the main problems I’ll explore during my digital hiatus.


Congress, Far From ‘a Series of Tubes,’ Is Still Nowhere Near Reining In Tech

U.S. legislators have spent years asking the wrong questions and proposing the wrong legislation. There are relatively simple solutions to many of the issues related to antitrust, data safety, and harmful content, with no need for changes to Section 230.

For years, tech CEOs have been interrogated in televised congressional hearings that generate no new laws. The entire purpose of these spectacles seems to be generating Twitter dunks for politicians’ bases. Meanwhile, the EU and Asian countries pass laws that impact US users.

Perhaps policymaking is not the goal of these hearings.


A Body of Work That Cannot Be Ignored

It’s been a year since Timnit Gebru was fired from Google after warning the search giant that messy artificial intelligence can lead to the silencing of marginalized voices.

In this post, J. Khadijah Abdurahman writes on how technology produces “new modes of state surveillance and control” and what we might do about it.

Racial capitalism’s roadmap for innovation is predicated on profound extraction. AI is central to this process. The next flashpoint over AI is inevitable—but our failure to respond adequately is not. Will we continue to write letters appealing to the conscience of corporations or the state? Or will we build a mass movement?

This week, Gebru also started the Distributed AI Research Institute (DAIR). This is a space for independent, community-rooted AI research free from Big Tech’s pervasive influence.


Twitter’s new privacy policy could clash with journalism

This week, Twitter said it is expanding its privacy policy to include what the company calls “private media.” The current privacy policy prevents users of the service from sharing other people’s private information, such as phone numbers, addresses, and other personal details that might make someone identifiable against their will. Under this policy, users who have shared such data have had their accounts blocked or restricted in a variety of ways.

The new addition to the policy forbids “the misuse of media… that is not available elsewhere online as a tool to harass, intimidate, and reveal the identities of individuals.” Twitter said it is concerned because personal imagery can violate privacy and lead to emotional or physical harm, and this can “have a disproportionate effect on women, activists, dissidents, and members of minority communities.”

Neo-Nazis and far-right activists are making the most of the new rule, managing to get photos of themselves posted by journalists removed. Twitter is currently reviewing the policy.


In Texas, a Battle Over What Can Be Taught, and What Books Can Be Read

Texas is afire with fierce battles over education, race, and gender. What began as a debate over social studies curriculum and critical race studies — an academic theory about how systemic racism enters the pores of society — has become something broader and more profound, not least an effort to curtail and even ban books, including classics of American literature.

What are schools and teachers to make of these crosscurrents?

Windy: Wind & weather forecasts

I love weather apps. I really love uber-cool data visualizations of weather using open data.

Windy is an interactive forecasting tool, available at windy.com, on the App Store, and on Google Play.

I’m always looking, and I’m always asking questions.

Anne Rice

The Father of Web3 Wants You to Trust Less

Gavin Wood, who coined the term Web3 in 2014, believes decentralized technologies are the only hope of preserving liberal democracy.

At the most basic level, Web3 refers to a decentralized online ecosystem based on the blockchain. Platforms and apps built on Web3 won’t be owned by a central gatekeeper, but rather by users, who will earn their ownership stake by helping to develop and maintain those services.

Say hey at hello@digitallyliterate.net or on the social network of your choice.

White Nationalism’s Deep American Roots

White nationalism in the U.S. is becoming more visible and more deadly, from marchers in Charlottesville to a gunman at a Pittsburgh synagogue.

Adam Serwer in The Atlantic on the American roots of this movement. He writes that what is judged extremist today was once the consensus of a powerful cadre of the American elite.


Digital ads are starting to feel psychic

Tega Brain and Sam Lavigne, two Brooklyn-based artists whose work explores the intersections of technology and society, have been hearing a lot of stories like mine. In June, they launched a website called New Organs, which collects first-hand accounts of these seemingly paranoiac moments. The website is comprised of a submission form that asks you to choose from a selection of experiences, like “my phone is eavesdropping on me” to “I see ads for things I dream about.” You’re then invited to write a few sentences outlining your experience and why you think it happened to you.

The Mind-Expanding Ideas of Andy Clark (The New Yorker)

The tools we use to help us think—from language to smartphones—may be part of thought itself.

The first section of the article follows Clark’s development of the idea that our minds must be defined as extending beyond our bodies to include the tools in our environment without which they cannot function:

Clark started musing about the ways in which even adult thought was often scaffolded by things outside the head. There were many kinds of thinking that weren’t possible without a pen and paper, or the digital equivalent—complex mathematical calculations, for instance. Writing prose was usually a matter of looping back and forth between screen or paper and mind: writing something down, reading it over, thinking again, writing again. The process of drawing a picture was similar. The more he thought about these examples, the more it seemed to him that to call such external devices “scaffolding” was to underestimate their importance. They were, in fact, integral components of certain kinds of thought. And so, if thinking extended outside the brain, then the mind did, too.

It then describes his move into artificial intelligence and robotics, where he encountered the work of Rodney Brooks at M.I.T.:

Maybe the way to go was building an intelligence that developed gradually, as in children—seeing and walking first. Perhaps intelligence of many kinds, even the sort that solved theorems and played chess, emerged from the most basic skills—perception, motor control…While constructing a robot that he called Allen, Brooks decided that the best way to build its cognition box was to scrap it altogether. …It was controlled by three objectives—avoid obstacles, wander randomly, seek distance—layered in a hierarchy, such that the higher could override the lower…It would make no plans. It would simply encounter the world and react.

Robots like Allen… seemed to Clark to represent a fundamentally different idea of the mind. Watching them fumble about, pursuing their simple missions, he recognized that cognition was not the dictates of a high-level central planner perched in a skull cockpit, directing the activities of the body below. Central planning was too cumbersome, too slow to respond to the body’s emergencies. Cognition was a network of partly independent tricks and strategies that had evolved one by one to address various bodily needs. Movement, even in A.I., was not just a lower, practical function that could be grafted, at a later stage, onto abstract reason. The line between action and thought was more blurry than it seemed. A creature didn’t think in order to move: it just moved, and by moving it discovered the world that then formed the content of its thoughts.

Then, to how the brain makes sense of the world:

To some people, perception—the transmitting of all the sensory noise from the world—seemed the natural boundary between world and mind. Clark had already questioned this boundary with his theory of the extended mind. Then, in the early aughts, he heard about a theory of perception that seemed to him to describe how the mind, even as conventionally understood, did not stay passively distant from the world but reached out into it. It was called predictive processing.

It appeared that the brain had ideas of its own about what the world was like, and what made sense and what didn’t, and those ideas could override what the eyes (and other sensory organs) were telling it. Perception did not, then, simply work from the bottom up; it worked first from the top down. What you saw was not just a signal from the eye, say, but a combination of that signal and the brain’s own ideas about what it expected to see, and sometimes the brain’s expectations took over altogether.

One major difficulty with perception, Clark realized, was that there was far too much sensory signal continuously coming in to assimilate it all. The mind had to choose. And it was not in the business of gathering data for its own sake: the original point of perceiving the world was to help a creature survive in it. For the purpose of survival, what was needed was not a complete picture of the world but a useful one—one that guided action. A brain needed to know whether something was normal or strange, helpful or dangerous. The brain had to infer all that, and it had to do it very quickly, or its body would die—fall into a hole, walk into a fire, be eaten.

So what did the brain do? It focussed on the most urgent or worrying or puzzling facts: those which indicated something unexpected. Instead of taking in a whole scene afresh each moment, as if it had never encountered anything like it before, the brain focussed on the news: what was different, what had changed, what it didn’t expect…This process was not only fast but also cheap—it saved on neural bandwidth, because it took on only the information it needed—which made sense from the point of view of a creature trying to survive…To Clark, predictive processing described how mind, body, and world were continuously interacting, in a way that was mostly so fluid and smoothly synchronized as to remain unconscious.
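The core loop described here, carrying a prediction forward and updating it only by the “news,” the gap between prediction and signal, can be caricatured in a few lines of code. This is a toy sketch of the general idea, not Clark’s model or anyone’s actual theory of the brain; the function name and learning rate are mine:

```python
# Toy sketch of predictive processing: maintain a prediction of a signal
# and update it only by the prediction error (the "news").
def predictive_step(prediction, observation, learning_rate=0.5):
    error = observation - prediction        # what was unexpected
    return prediction + learning_rate * error

# A constant world quickly stops generating surprise: the prediction
# converges on the signal and the error shrinks toward zero.
pred = 0.0
for obs in [10, 10, 10, 10]:
    pred = predictive_step(pred, obs)
# pred moves 5.0 -> 7.5 -> 8.75 -> 9.375, ever closer to 10
```

The appeal, as the passage notes, is cheapness: once the world matches expectations, there is almost no error left to process.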

And, from the summarizing paragraphs:

He knew that the roboticist Rodney Brooks had recently begun to question a core assumption of the whole A.I. project: that minds could be built of machines. Brooks speculated that one of the reasons A.I. systems and robots appeared to hit a ceiling at a certain level of complexity was that they were built of the wrong stuff—that maybe the fact that robots were not flesh made more of a difference than he’d realized. Clark couldn’t decide what he thought about this. On the one hand, he was no longer a machine functionalist, exactly: he no longer believed that the mind was just a kind of software that could run on hardware of various sorts. On the other hand, he didn’t believe, and didn’t want to believe, that a mind could be constructed only out of soft biological tissue. He was too committed to the idea of the extended mind—to the prospect of brain-machine combinations, to the glorious cyborg future—to give it up.

In a way, though, the structure of the brain itself had some of the qualities that attracted him to the extended-mind view in the first place: it was not one indivisible thing but millions of quasi-independent things, which worked seamlessly together while each had a kind of existence of its own. “There’s something very interesting about life,” Clark says, “which is that we do seem to be built of system upon system upon system. The smallest systems are the individual cells, which have an awful lot of their own little intelligence, if you like—they take care of themselves, they have their own things to do. Maybe there’s a great flexibility in being built out of all these little bits of stuff that have their own capacities to protect and organize themselves. I’ve become more and more open to the idea that some of the fundamental features of life really are important to understanding how our mind is possible. I didn’t use to think that. I used to think that you cou