Who imagined epistemology would be cyclical?

I am eager to read Mark Pesce’s new essay, “The Last Days of Reality.”  Unfortunately it’s subscriber-only at the moment. You can listen to his talk at one of the launch events. The notion of reality ending used to seem a stretch. Now we have “fake news” and Vox writing about an epistemic crisis.

Technology has now gotten so advanced that it can obscure standards of truth, whether that’s changing the weather in a picture or the words in an audio recording.  What happens to truth when images and audio are no longer authoritative?  We actually have an idea of the answer to that question, thanks to the past.

James Burke described the opposite transformation in episode four of The Day the Universe Changed, “A Matter of Fact.” The episode begins with a description of epistemology prior to the printing press.  Burke argues that truth, in a mostly illiterate society, was grounded in relationships. Something was true because someone you trusted said it was.  Burke points out that this epistemology is preserved in things like oral personal testimony in court. He then goes on to explain how printing changed that.

Is it possible that “everything old is new again?” We see this in the way social media has changed news consumption.  More and more of the news we see comes from the things our friends bring to our attention.  Our news world comes from the people we trust.  Sound familiar?

There’s of course a huge difference between how this model works now and how it worked hundreds of years ago.  In the Middle Ages, most people communicated face to face.  You knew your source was who they said they were because you looked them in the eye, with a few notable exceptions like Martin Guerre. Today, things aren’t that simple. Given the relative lack of encryption in person-to-person communications, there are few assurances that the email, tweet, or Facebook comment comes from the person who is its alleged author.  It’s possible that improvements in usable encryption will make it easier to verify identity online. If we are going to rely on relationships for our personal reality, this sort of verification is very important. Even if verification improves, what if, as Vox alleges is already happening, there are no shared authorities?
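To make the verification idea concrete, here’s a toy sketch of message authentication using Python’s standard library. Real identity verification online uses public-key signatures; this shared-secret version (HMAC) is a simplified stand-in that I’m using only to illustrate the principle of a check that binds a message to its alleged author.

```python
import hmac
import hashlib

# Toy illustration only: shared-secret message authentication as a
# stand-in for the public-key signatures real tools would use.

def sign(secret: bytes, message: bytes) -> str:
    """Produce a hex tag binding the message to the key holder."""
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify(secret: bytes, message: bytes, tag: str) -> bool:
    """Constant-time check; True only if message and key both match."""
    return hmac.compare_digest(sign(secret, message), tag)

secret = b"key shared with the alleged author"
tag = sign(secret, b"I really did write this tweet")

print(verify(secret, b"I really did write this tweet", tag))  # True
print(verify(secret, b"I never wrote this", tag))             # False
```

The point is that a forged or altered message fails the check; what usable encryption would add is doing this without the two parties having to share a secret in advance.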

Meeting the General Education Computer Requirement with a Course on Technology and Society

For a number of years, college and university general education requirements have, at many institutions, included some sort of mandatory course on computers and technology.  Several decades ago, when I met the requirement, I did so by completing a course that was mostly about how to use AppleWorks.  These days, students are more likely to use Microsoft Office and various web applications.  If we are educating our students for citizenship, that’s not good enough anymore.

For the last couple of years, I’ve been thinking about what I want my children to know about computers, technology, and the net, and I’ve decided that it’s not MS Office.  It’s not even how to code. Instead, what I want them, and everyone, to learn is how technology is changing our society.

As a thought exercise, I’m going to set down in writing a sketch of what such a course might look like.  I haven’t done much comparison research, and I’m sure some other institutions have already created such a course.  Nevertheless, here it goes, with a topic list and some suggested readings:

A Brief History of Networked, Decentralized, and Recentralized Computing

A Declaration of the Independence of Cyberspace – John Perry Barlow
The Web We Lost – Anil Dash (video)
“Reclaiming the Internet” with Distributed Architectures: An Introduction – Francesca Musiani and Cécile Méadel
The Mission to Decentralize the Internet – Joshua Kopstein

Algorithms

Big Data: It’s Worse than You Thought – Frank Pasquale
Weapons of Math Destruction – Cathy O’Neil
We’re Building a Dystopia Just to Make People Click on Ads – Zeynep Tufekci (video)

Blockchain/DHT

The Trust Machine – The Economist

Data Collection and Surveillance

Snowden and the Future – Eben Moglen

Encryption

Don’t Panic: Making Progress on the “Going Dark” Debate – Berkman Center for Internet and Society, Harvard University
The Case Against a Golden Key – Patrick Ball

The Control of Technology and the Technology of Control

Lockdown: The Coming War on General-Purpose Computing – Cory Doctorow
Twitter and Tear Gas – Zeynep Tufekci

I recognize that many of my sources here have a leftward lean. Does anyone have suggestions for:

  1. Writers who are more politically conservative but have a good understanding of the capabilities and limitations of technologies?
  2. Topics and/or resources I should have included but didn’t?

Aggregating the Decentralized Social Web

In the wake of recent FCC plans to repeal net neutrality regulations, people are starting to talk about decentralization, both of infrastructure and of the platforms we use to communicate on the Internet.  The latter has moved more quickly than the former, since it’s arguably easier to write code than to lay fiber optic cable.  In the last few months, I’ve experimented with:

  • Mastodon
  • Beaker Browser
  • GNU Ring
  • Matrix
  • ZeroNet
  • RetroShare
  • Twister
  • Patchwork (SSB)
  • Friendica

Note that those indicated in italics are more web replacement than social network platform.

That’s quite a few apps to open regularly.  Wouldn’t it be nice to aggregate this content so you could follow everything from one app?  There have been some attempts at this (Seesmic, TweetDeck, etc.) aimed at the major commercial social networks, but, since feeding into an aggregator undermines the revenue model of the social network (remember how Twitter used to support RSS?), they were either acquired or left to wither. Since decentralized platforms don’t have a revenue model to protect, why can’t they be more aggregation friendly? Mike Caulfield suggested that smartphone OS’s were functioning as aggregators via notifications.

There are actually three problems to solve: reading, which is relatively easy; posting, which is harder; and social graph management, which is quite complex.

Reading various streams in an aggregator would be most easily accomplished if the various decentralized platforms supported stream output as password-protected RSS.  Twitter was on the right track before revenue growth got in the way.  Subscribing to my personal timeline(s) with my favorite RSS reader would bring everything together, especially if I had a reader that listed items chronologically, independent of source.  The potentially difficult part is dealing with, and indicating, private vs. public messages.
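The reading side really is the easy part. Here’s a minimal sketch of what the aggregator’s core would do: merge items from several per-platform feeds into one reverse-chronological stream, carrying a public/private flag so the reader can display that distinction. The feed contents and field names here are made up for illustration.

```python
from datetime import datetime
from itertools import chain

# Sketch: merge several platform streams into one timeline, newest
# first, preserving a public/private marker on each item.

def merge_streams(*feeds):
    """Each feed is a list of dicts with 'time', 'source', 'title', 'private'."""
    return sorted(chain(*feeds), key=lambda item: item["time"], reverse=True)

mastodon = [{"time": datetime(2017, 12, 1, 9, 0), "source": "mastodon",
             "title": "morning toot", "private": False}]
matrix = [{"time": datetime(2017, 12, 1, 9, 30), "source": "matrix",
           "title": "room message", "private": True}]

for item in merge_streams(mastodon, matrix):
    marker = "[private]" if item["private"] else "[public]"
    print(item["time"].isoformat(), item["source"], marker, item["title"])
```

Everything hard lives upstream of this function: getting each platform to emit its stream in a common, authenticated format in the first place.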

Posting is more challenging.  The client needs not only to correctly implement the APIs of the various platforms, but also to keep track of which options and constraints to present to a user depending on which platforms they are posting to.  It looks as if Withknown has made some progress in this area with syndication plugins.

Managing your social graph is sort of the next level.  One of the disadvantages of centralized social networks is that Twitter/Facebook/etc. maintain your social graph and can therefore mine it for data and monetize it.  Several years ago, VR celebrity Mark Pesce (famous for his invention of VRML) did some development on Plexus, software that he described as “plumbing for the social web.”  The premise here was that your social graph would live on your device.  This would be possible because you would create multiple accounts on each social network, one for each friend/follower relationship.  Highly compartmentalizing your social presence is good for privacy but makes discovery more challenging, as software on your end has to parse your streams and sort out connections on your social graph.

How do we decentralize the web without so decentralizing our own social presence that it becomes unmanageable?

Weapons of Math Destruction Part 4

These chapters (actually they were last week’s) cover employment.  Here’s Bryan’s prompt.

On the hiring side, I’m not sure whether algorithmic arbitrariness or human arbitrariness is worse.  I have a sense that, distinct from the expected biases (ethnicity, gender, geography/wealth), algorithms might bias for similarity.  That is, they bias against candidates who have the broader skills to do the job, but whose previous job titles or majors aren’t a close word-for-word match for a job description.  Of course, humans might be just as likely to have that bias, but a human who wanted to think “outside the box” could at least be metacognitively aware of it.

I found the next chapter, “Sweating Bullets,” more alarming. The core of the problem is that, outside of widget production for a factory worker or sales volume, the link between what an individual worker does and an institutional KPI is often tenuous.  My instinct is that bad algorithms full of second- or third-order proxies make this much worse than a human-based system with safeguards (such as something like 360-degree evaluation).

Did anyone else find the sociometric badge used in the call center (132)  seriously creepy?

As to one of Bryan’s questions, about whether boycotts can provide a meaningful check on this sort of thing, it seems to me it might work in the public sector where transparency can be enforced via FOIA, but I have little hope for the private sphere.  Boycotts sound good, but are rarely well enough organized or maintained to provoke real change.

Notes and Quotes

“…we’ve seen time and again that mathematical models can sift through data to locate people who are likely to face great challenges, whether from crime, poverty, or education. It’s up to society whether to use that intelligence to reject and punish them — or to reach out to them with the resources they need.” (118)

“The root of the trouble, as with so many other WMD’s, is the modeler’s choice of objectives. The model is optimized for efficiency and profitability, not for justice or the good of the ‘team.’  This is, of course, the nature of capitalism.” (129-130)

I was struck the other day by how similar Cory Doctorow’s whuffie system (from Down and Out in the Magic Kingdom), the rating system in the Black Mirror episode “Nosedive” and the Chinese social credit system I described in last week’s post are.

Weapons of Math Destruction Part 3

Here’s last week’s prompt for the Weapons of Math Destruction book club.  I have chosen to ignore the provided questions, however. (Sorry, Bryan.)

My big takeaway from these chapters is the importance of the decisions that are made about how to use data.  Both predatory recruiting and nuisance policing seem to start with explicitly harmful (the former) or flawed (the latter) justifications.  This makes the issue one of big data making it easier for people to do bad things.

The description of how the Chicago predictive policing initiative included social network analysis reminded me of the Social Credit system China is developing. (See this article from the Independent or this one from the Financial Times [warning: paywall].) Incidentally, the Independent article has a video, and I was shown a pre-roll Lexus ad that was in Mandarin.

Unlike the Chicago system, where one’s score is presumably known only to police, the Chinese system, which includes in the “social credit score” algorithm your activity on social media and the scores of your friends, makes those scores public, encouraging you either to lean on your friends with low scores in an effort to improve their behavior or to shun them. Both approaches would improve the social component of your score. I wonder to what extent social credit scores are used in the Western world and we just don’t know about it yet.
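None of the published accounts spell out the actual algorithm, so treat the following as a toy model only. It just illustrates the structural point: if your friends’ scores feed into yours, then either improving a low-scoring friend’s behavior or shunning them raises your own score. The base value, weight, and formula are all invented.

```python
# Toy model only: invented formula illustrating how a friend-weighted
# "social component" creates pressure to shun low-scoring friends.

def social_component(friend_scores, weight=0.3):
    """Hypothetical: blend a base score with the mean of friends' scores."""
    base = 100
    if not friend_scores:
        return base
    mean = sum(friend_scores) / len(friend_scores)
    return round(base * (1 - weight) + weight * mean)

with_low_scoring_friend = social_component([90, 95, 40])
after_shunning_them = social_component([90, 95])
print(with_low_scoring_friend, "<", after_shunning_them)
```

Even in this crude form, the incentive is visible: the network structure itself recruits you into policing, or pruning, your friends.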

NOTES AND QUOTES*

(96) Justice cannot just be something that one part of society inflicts upon the other.
(102) Part of the analysis that led police to McDaniel involved his social network.

*Yes, I’m aware that it should probably be Notes and Quotations, but I will sacrifice grammatical accuracy for the rhyme scheme.

Weapons of Math Destruction Part 2

I’m moving on to Chapters 2 and 3 of Weapons of Math Destruction.  In this week’s prompt, Bryan asks:

  • If creating or running a WMD is so profitable, how can we push back against them?

By making them less profitable.  The only way I see to do this is to require some human intervention in the decision processes these algorithms facilitate.  Making a person responsible for verifying the algorithmic outputs would at least improve accountability.  In the event of egregious harms, the person who signed off on the algorithmic output could be held to account for his or her decision.

  • Do you find other university ranking schemes to be preferable to the US News one, either personally or within this book’s argument?

I don’t know enough about them to say.

  • At one point the author suggests that gaming the US News ranking might not be bad for a university, as “most of the proxies… reflect a school’s overall quality to some degree” (58).  Do you agree?

This doesn’t matter.  Even if the proxies are good proxies, the fact that it’s a ranking system creates the arms race condition which forces colleges to game the system aggressively by doing things like rejecting qualified applicants who are unlikely to enroll.  O’Neil discusses this decline of the safety school.  The root of the problem is the role of reputation in the whole system.


Not surprisingly, the discussion of college rankings in Chapter 3 resonated strongly with me, for two reasons:

I applied to colleges just at the end of Ivy League collusion on financial aid offers.  I wonder how much the early effects of the US News rankings might have affected me as an Ivy League applicant.

As the parent of teenagers, I had only a vague sense of how the admissions process has changed since I was a college applicant. I worry for my children.

Note:

“However, when you create a model from proxies, it is far simpler for people to game it.” (55)

Weapons of Math Destruction Part 1

Bryan Alexander is facilitating an online book club reading of Cathy O’Neil’s Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.  I am about two weeks behind (typical), so I will focus on just a couple of his questions for Part 1.

A. “What would it take for an education algorithm to meet all of O’Neil’s criteria for not doing damage?”

The big problem with almost all educational measurement is the use of clumsy-at-best proxies (O’Neil uses the term) for the learning we want to measure.  Since the fundamental output metric is a test, with all the possibilities for manipulation that suggests, when we then try to measure which input changes improve that output, we are at least two levels of abstraction removed. Until we can measure educational outcomes some way other than by means of a crude, manipulable proxy, I’m not sure we can fix this.

B. “What are the best ways to address the problem of “false positives”, of exceptionally bad results, of anomalies?”

I think the best way to solve this problem is to place some limits (preferably not determined by an algorithm) on the kinds of decisions we allow algorithms to make without human input.  The potential harm of a bad book recommendation from Amazon is much lower than that of, say, an algorithmic hiring or sentencing decision.  That probably means thoughtful review of every adverse algorithmic recommendation by at least one live human being. Of course, this undermines the efficiency and scale that algorithms are designed to create.  An important step is to acknowledge that algorithms are not neutral even if they manage not to be arbitrary.  They encode the assumptions and biases of their creators, and acknowledging those assumptions and biases is a key part of the design process.

The DC schools example draws attention to the importance of checking for flawed input data.  After all, the algorithm is only as accurate as the data you feed it.

Notes: O’Neil’s three criteria for a Weapon of Math Destruction are “opacity, scale, and damage.” She uses the initialism WMD.  I wish she had come up with something else, because of the namespace confusion with chemical, biological and nuclear weapons.

Opacity makes me think of Frank Pasquale’s The Black Box Society, which I haven’t read yet.  The synopses of Pasquale’s book make me wonder how his and O’Neil’s work intersect.

Tooting Alone

This month, the early adopters are all on Mastodon. Mastodon is actually a server implementation of OStatus (which used to be StatusNet, which was originally on identi.ca). The TL;DR of OStatus is “like Twitter, but federated.” As of this morning there are almost 900 active instances. Since the software is open, different instance administrators can set their own policies, and users can find an instance whose culture agrees with them.

Mastodon also has an option to operate a single-user instance, and this is where it gets less clear. Mastodon is designed to show three different timelines: the user’s personal timeline, a public timeline for the local server, and a federated timeline. On a single-user server, the local timeline will show the “toots” (yes, that’s what they call a status post) of the instance’s one user, and the federated timeline will look very similar to the single user’s personal timeline. In managing your own presence on the network, you simultaneously isolate yourself from it. It’s possible, however, that this won’t end up mattering very much. I can’t remember the last time I looked at the Twitter public timeline. If OStatus ends up working the same way, it won’t matter how many people are on your instance, because you will interact with the network through the people you follow, even if they are on many different instances. While the local timeline won’t show much, the federated timeline, which is sort of a second-degree network (see https://cybre.space/users/nightpool/updates/13933), looks as if it may end up, on a single-user server, as a very personalized feed.

This ties in to the IndieWeb movement, with its idea of POSSE (Publish on your Own Site, Syndicate Elsewhere), and there are already connectors to publish from tools like Withknown to Mastodon. Withknown is great for publishing, but not very good for aggregating. There is always RSS, and in fact Mastodon autogenerates Atom feeds per user (site.tld/users/username.atom). This leaves you using one application to read and another one to reply. I really wish something like Mark Pesce’s Plexus (https://github.com/mpesce/Plexus) were still active. How hard would it be to build a personal dashboard that would bring together RSS reading, OStatus, blogs, etc.?
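Those per-user Atom feeds mean a personal dashboard could start as nothing more than an Atom parser pointed at a list of accounts. Here’s a minimal sketch: the feed URL pattern comes from the post above, but the sample feed document and function names are my own invention, and a real feed carries much more metadata than this.

```python
import xml.etree.ElementTree as ET

# Sketch of the dashboard's reading side: build a Mastodon per-user
# Atom URL and pull entry titles out of an Atom document.

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def feed_url(instance: str, username: str) -> str:
    """Mastodon's per-user Atom feed location: site.tld/users/username.atom."""
    return f"https://{instance}/users/{username}.atom"

def entry_titles(atom_xml: str):
    """Return the title of each <entry> in an Atom document."""
    root = ET.fromstring(atom_xml)
    return [e.findtext(f"{ATOM_NS}title") for e in root.findall(f"{ATOM_NS}entry")]

# Made-up sample in place of a live fetch of the URL above.
sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>example user</title>
  <entry><title>first toot</title></entry>
  <entry><title>second toot</title></entry>
</feed>"""

print(feed_url("mastodon.social", "someone"))
print(entry_titles(sample))  # ['first toot', 'second toot']
```

Reading is the tractable half; replying from the same dashboard is where you run into each platform’s separate API.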

Social Media and Tool Creep

Last week, Mike Caulfield lamented that social media is poorly suited to enhancing human potential. If you think about it, this shouldn’t be too surprising, since it wasn’t designed for that.  Facebook was, after all, first and foremost a social tool, a virtual version of the paper facebooks new college students resorted to in ages past to figure out who that cute guy/girl in English class was.

For the task for which they were originally designed, fostering social connections between people, Facebook, Twitter, and other social platforms work well, but then something happened.  As social platforms moved to the center of our online lives, we wanted them to be the hubs not just of our social interactions but of our information gathering.  This dovetailed nicely with the platform creators’ quest to grab, quantify, and monetize more and more of our attention but, as Mike points out, was not necessarily good for us.

D’Arcy Norman quoted an old post that touched on the same issue.  In 2008 he wrote about what he recently dubbed the real-time toll.

Every time I read an update by someone that I care about, I think about that person – if only for a second – and my sense of connection is strengthened.

But, I fear that the strengthened social connections are not worth the cost borne in superficial thinking.

This led me to a little experiment. I looked at my Facebook activity feed for the almost-completed month. I’ve interacted with only about 75 entities, and two thirds of those are people in the county I live in.  This comes with the usual caveat that it includes outbound plus inbound tags but not inbound likes and reactions.

Maybe the key to managing D’Arcy’s real-time toll is to follow only people you care about enough that whatever superficial thinking it causes is worth it.

I’m going to presuppose that social networking sites are not very good tools to expand human potential.  The ratio of signal to the noise of social interaction is just too low.  What would such a tool look like?  Is a good list of RSS feeds adequate, or is something like fedwiki, wikity, or a choral explanations platform necessary?  If you end up with something that isn’t extremely decentralized, how do you generate beneficial network effects while keeping the signal to noise ratio high enough to generate value?

Verifying Academic Credentials with Blockchain

This morning, a college classmate posted a link to this Campus Technology article on blockchain-based transcripts.  It turns out the University of Nicosia is already doing this. Campus Technology used the d-word, disrupt, to describe the potential of this approach.  On the plus side:

  • This would allow verification of credentials without contact with the issuing institution.  That would seem to save lots of time and trouble in registrars’ offices everywhere.
  • The permanence of the ledger would mean that it wouldn’t matter if a credential-issuing entity ceased to exist.
  • Not having to produce paper trails might make traditional institutions more willing to offer microcredentials.

Pitfalls include:

  • Security – I’m a blockchain novice, but my understanding is that the ledger is quite secure because so many users are verifying it.  That said, even more important than actual security is the perception thereof.  It may take a long time before credential audiences (educational institutions, employers, etc.) trust blockchain credentials.
  • Privacy – Blockchain records are permanent and public.  How do you ensure that only authorized viewers can see the details of a credential (courses and grades)?  What if you don’t want to publicize your attendance at a particular institution?
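One possible answer to the privacy question is a hash commitment: publish only a hash of the credential on the chain, and keep the transcript itself private. A verifier who is handed the document can recompute the hash and check it against the public record, with no contact with the registrar and no details exposed. I’m a blockchain novice, so treat this as a sketch of the general idea rather than a description of Nicosia’s actual scheme; the fields and names below are made up.

```python
import hashlib
import json

# Sketch of a hash commitment: only the hash is public; the transcript
# stays with the student. (Invented fields; not any real system.)

def commit(transcript: dict) -> str:
    """Hash a canonical (sorted-key) serialization of the credential."""
    canonical = json.dumps(transcript, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

transcript = {"student": "A. Learner", "degree": "BA", "year": 2017}
ledger_entry = commit(transcript)  # this hash is all that goes on-chain

# Later, a verifier recomputes the hash from the document they were shown.
print(commit(transcript) == ledger_entry)                       # True
print(commit({**transcript, "degree": "PhD"}) == ledger_entry)  # False: tampered
```

This addresses who can see the details, though not the second problem: a determined party could still confirm a guessed credential by hashing it, so real schemes would need something extra (a salt, at minimum) to hide attendance itself.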