Starting With CSS

This project was an invitation to work with the CSS box model. The assignment called for an internal style sheet (a <style> block in the document head) rather than an external one. In completing the project I had two problems: the constraint of an internal style sheet wasn’t a good fit for the assignment, and I had challenges dealing with the border properties of the box model.

Web designers will debate forever which projects can get by with style attributes attached to individual content elements, which need an internal style sheet, and which are “big enough” to justify an external style sheet. It seems obvious that multi-page sites need external style sheets, so one can apply the same styles to multiple pages. This brings us to the single web page and the choice between an internal style sheet and styles attached to each element.

The benefits of an internal style sheet emerge when styles are reused within a document. Rather than copying the same style attribute over and over again, the designer can define a rule once and change it in one place. Unfortunately, this was not that kind of document. It was short and had minimal reuse of styles. Style attributes on individual elements would have been adequate to the task. Perhaps it’s a bit much to call this a problem, since the parameters were driven by the assignment’s learning outcomes rather than the needs of the particular design task.
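To illustrate the trade-off (a minimal sketch; the element and color are my own invention, not the assignment’s):

    <!-- Inline styles: the declaration is repeated on every element -->
    <p style="color: darkgreen;">First paragraph.</p>
    <p style="color: darkgreen;">Second paragraph.</p>

    <!-- Internal style sheet: defined once in the head, applied everywhere -->
    <head>
      <style>
        p { color: darkgreen; }
      </style>
    </head>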

My second problem was all about borders. The W3Schools sandbox to which we were referred showed a nice green border. When I modeled my style sheet on that example — no border. Since we were restricted to a nearly featureless text editor, I wasted time looking for missing punctuation and unclosed tags. That was not the problem.

Like most web designers, I headed for the search engines. It turns out that there is a border-style property in the box model, but it wasn’t shown in the W3Schools sandbox. The default value for border-style is “none”. When I added border-style: solid to my div’s style declaration, voilà, a border. I was reminded that even normally trustworthy sites can have glitches.
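A minimal sketch of what went wrong (the width and color values here are mine, not the assignment’s):

    /* No visible border: border-style defaults to "none" */
    div {
      border-width: 2px;
      border-color: green;
    }

    /* Adding border-style makes the border appear */
    div {
      border-width: 2px;
      border-color: green;
      border-style: solid;
    }

    /* Or use the shorthand, which sets all three at once */
    div { border: 2px solid green; }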

Returning to HTML Coding

This week I had an assignment to hand code a minimal personal web page, including a heading, a picture, and some formatting. It reminded me a lot of the personal web pages I hand coded in the mid-1990s. While I understand the benefit of knowing what the HTML “under the hood” actually does, the experience was… frustrating.

My difficulties included being careless about closing quotes on attributes and needing lots of page reloads to get things like image borders the way I wanted them. I resolved them the same way I did in the 1990s: by saving and reloading in the web browser, frequently. This was frustrating because it’s not 1995 anymore and there are better ways. Even a basic IDE with auto-complete would make sure all the brackets and quotes were paired, and a cascading style sheet would make formatting changes easier to manage. I also had to cope with the changes between the HTML 4 of the 1990s and HTML5. I ended up having to look up more attributes than I expected. Things that used to be separate presentational markup (the <font> tag’s color, the align attribute, and so on) are now handled through CSS in the style attribute, for example.
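For instance (a hypothetical before-and-after; the text and values are mine):

    <!-- HTML 4-era presentational markup -->
    <p align="center"><font color="red">Welcome to my page!</font></p>

    <!-- HTML5: the same effect through the style attribute -->
    <p style="text-align: center; color: red;">Welcome to my page!</p>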

What did I learn? First of all, that I now appreciate the user-friendly interfaces at which I used to scoff. When you’re young, you do things the hard way because it shows off your expertise. As you get older you discover you have better things to do with your time. I’m also beginning to suspect that going tag by tag and attribute by attribute may not be a good match for even small real-world web projects. I think hand coding is better for tweaking a website than for building one. Hand coding a web page is like driving a manual transmission car. While there was a time when doing that was necessary, that time has passed, and those who still drive stick shift or hand code HTML do it for very granular control. For most people it’s not worth the trouble.

On Accessibility

Accessibility was unfortunately not a primary consideration of most early web design. This is perhaps ironic, as one of Berners-Lee’s design principles in creating HTML was that content should be tagged semantically (<em> and <strong>) rather than styled (<b> and <i>). Semantic markup allows browsers to display material in a way appropriate to the limitations of their software and hardware, or according to user preference. For example, a browser designed for a monochrome monitor would choose some formatting other than color to indicate hyperlinks. The rejection of Berners-Lee’s semantic markup perhaps reached its apex with the introduction of the infamous <blink> tag in the mid-1990s. Not surprisingly, although standards bodies such as the W3C have created guidelines, much of the improvement in web accessibility has been driven by government regulation of government websites and of those produced by entities receiving government funding.

While accessibility is an ongoing process, there are a few aspects that should be a high priority because of how frequently they come up in web design:

ALT text – Since NCSA Mosaic, the web has been a visual medium. A page filled with only text comes across as quaint and even retro. However, this means that images need text descriptions. The HTML standard addresses this with the <img> tag’s alt attribute, if designers take the time to use it. See below for a site with dos and don’ts for writing good ALT text.
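A quick sketch of the difference (the file names and descriptions are hypothetical):

    <!-- Weak: the alt text just repeats the file name -->
    <img src="campus-quad.jpg" alt="campus-quad.jpg">

    <!-- Better: describe what the image conveys in context -->
    <img src="campus-quad.jpg" alt="Students studying under oak trees on the campus quad">

    <!-- Purely decorative images get an empty alt so screen readers skip them -->
    <img src="divider.png" alt="">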

Color choices – There are many ways to combine colors on a web page. Some of them are not effective. Beyond the difficulty caused by low-contrast pairings (such as a pale pastel on white), there are five different kinds of color vision deficiency (see Resources below). Put that all together and you have several ways to make your site difficult for some of your users to read.
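For example (hypothetical colors; the WCAG guidelines call for a contrast ratio of at least 4.5:1 for normal text):

    /* Hard to read: pale pastel on white, very low contrast */
    .tagline { color: #ffe9e9; background-color: #ffffff; }

    /* Much easier: dark text on a light background */
    .tagline { color: #222222; background-color: #ffffff; }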

Transcripts / captions – If your web site includes audio or video, transcripts and captions are important for any users who have a hearing disability. The general advice to write a script is helpful, as the script should end up being very close to your transcript. Auto-captioning and auto-transcription are improving, but you still need to review and correct the output before adding it to your media.
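In HTML, captions attach to a video through the <track> element (the file names here are hypothetical):

    <video controls>
      <source src="lecture.mp4" type="video/mp4">
      <!-- WebVTT caption file, reviewed and corrected by hand -->
      <track src="lecture-captions.vtt" kind="captions" srclang="en" label="English">
    </video>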

Resources

  • Guidance on Web Accessibility and the ADA
  • WAVE
    • https://wave.webaim.org
    • This suite of tools, from WebAIM at Utah State University, helps web designers test for accessibility.

Reintroducing Myself

Since January, I’ve been enrolled in the M.Ed. program in Educational Technology at the University of Arkansas. As is not uncommon with such programs, I’m being asked to blog as a course requirement. It feels a bit odd to write the 500-word bio, since this is actually the third iteration of my personal blog. When you put them all together, they run to just over 250 posts and 19 years. For those 19+ years, I’ve been involved in educational technology in some capacity. Almost all of that involvement was at the two-year college level.

While there is some tendency to think of blogs and the things that grew from them (podcasts, email newsletters, substacks, etc.) as broadcast platforms, I’ve always leaned more toward the “outboard brain” model (Doctorow, 2002). It says something about the transience of all the ed tech we’re busily doing that I had to pull that link from the Internet Archive Wayback Machine, but I suppose we can’t all be Ea-Nasir.

It’s probably also a tell that I’ve named this blog in hypothetical West Saxon (explanation here; it does have the advantage that a search for the actual blog title will find it, even amongst all the AI) and that I labeled my various attempts at web-based bookmarking Πίνακες, after the lost catalog of the (also lost) Library of Alexandria. This has something to do with wanting to understand all of this new stuff as part of a long tradition of organizing, preserving, and disseminating knowledge.

Part of the rubric for this post expects something about purpose and topics to be discussed. In the short term, topics will derive from course blog assignments. I’ll use a category so those who need to can find them. My non-required posts tend to focus on two broad questions:

  • How can technology allow learners to shape their own learning?
  • What ends does this whole education enterprise aspire to, and how does the availability of various technologies shape those ends?

Identity After the Blue Check

This week’s implosion of Twitter has set off an unprecedented migration to alternatives, particularly the ActivityPub-based Fediverse. Since that implosion included a complete breakdown of the verification system (What do blue checks mean today? What will they mean tomorrow?), lots of people started thinking about identity and impersonation in a decentralized space.

The previous verification system at Twitter was the kind of centralized approach most of us are used to. Twitter publicly attested to the identities of about 400,000 accounts belonging to institutions, brands, and various celebrities, in the same way most people in the developed world rely on an ID card issued by a government. To think about how to build an identity system without a central authority, we have to look backwards. Before national ID cards, identity was largely managed through social connections. You were introduced to someone, in person, by a common acquaintance whom you both trusted. That introduction attested your identities to each other. This is, in essence, the web of trust that Pretty Good Privacy tried to build via key signing.

Of course, the process becomes more complicated when you aren’t in person. What if that letter of introduction is a forgery? Oddly, existing social media platforms worked on the electronic version of this problem in a roundabout way, with photo tagging. When I take a picture of someone and tag them when I post it, I’m attesting that the account I tagged belongs to the person whose image I posted. If you also recognize that person’s image, you can take that tag as evidence that a particular account belongs to a particular person.

In the Fediverse, such identity verification as there is relies on having control of some other website. Mastodon, the most popular ActivityPub implementation, allows you to place a link to your profile with a rel="me" attribute on your website. When you add the address of that page to your profile, it appears with a green check. What this actually shows is that the same entity controls both the website and the Mastodon account.
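A minimal sketch of what that looks like on your own site (the instance and username are hypothetical):

    <!-- On a page you control: a visible link back to the Mastodon profile -->
    <a rel="me" href="https://mastodon.example/@yourname">Me on Mastodon</a>

    <!-- Or, invisibly, in the page head -->
    <link rel="me" href="https://mastodon.example/@yourname">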

If you control a domain, you have another option: creating a Fediverse server within that domain. Since you control the domain, you control the Fediverse server. This method is an option for institutions as well as individuals. The Internet Archive has launched mastodon.archive.org, and only IA employees are allowed to have accounts on that instance. Effectively, IA is publicly attesting to the identity of the person attached to each of those accounts. It will be interesting to see if other institutions follow suit.

More Thoughts on the Fediverse

As the on-again, off-again acquisition of Twitter went on again, focus turned back to Mastodon, a federated alternative. As another wave of users dips their toes in the water, I wanted to share some thoughts.

Scale

The Fediverse is predicated on replacing a single site having one set of policies with a network of sites. This is both technically resilient and resistant to the whims of a single owner. There are, however, different ways to realize this concept.

The web interface of Mastodon includes a local timeline. A separate local timeline makes sense when each instance is a distinct community. Initially, standing up and administering a server was not for everyone. Mastodon was designed with an administrator role, and those administrators are empowered to set policies on their instances. The model here is a collection of “small towns,” each with its own culture.

But then something happened — Mastodon became popular, and more accessible server models became available, including pre-configured VPSs and hosted options. This opened a new use case: the single-user instance. Here, the local timeline becomes unimportant. This is more like Twitter, which doesn’t have such a thing. The official iOS Mastodon client has gone as far as not even showing the local timeline. We have an interesting scenario in which not only policies and standards but also what it means to moderate vary from instance to instance.

Mastodon and protocols 

Mastodon runs on ActivityPub, the under-the-hood protocol that allows instances to communicate with each other. It also supports publishing RSS and Atom feeds. There are other applications that support ActivityPub. For example, a WordPress site can, with the right plugin, publish an ActivityPub stream. This allows a Mastodon user to follow that site as if it were another Mastodon user. However, that WordPress plugin generates ActivityPub but doesn’t parse it, meaning that if you reply in Mastodon, nobody will see it. You end up with different apps using the same protocol to different ends. Combine this with RSS support and you need to think about your data flows.

Update: Because of incompatibilities with other plugins, I did not test this on my own site. The WordPress site I used for testing was running a custom configuration that I mistook for a default setting.

Ian Bogost, discovery, and why Dave Winer was right

Inspired by the most recent Twitter hack, Ian Bogost wrote this week in The Atlantic about the failings of decentralization. I was struck by one passage:

“Twitter isn’t just a place for memes or news, or even presidential press releases meted out in little chunks. It’s where the weather service and the bank and your kid’s school go to share moment-to-moment updates. Though seemingly inessential, it has braided itself into contemporary life in a way that also makes it vital.”

The thing is, you don’t need a central site to collect these moment-to-moment updates. RSS has been around longer than Twitter. There used to be little RSS buttons on lots of web sites. What would our online world look like if Google hadn’t killed Google Reader and RSS had hung on?
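Those buttons were usually backed by a discovery tag in the page head, which feed readers still honor (the feed URL here is hypothetical):

    <!-- Lets a feed reader auto-discover this site's updates -->
    <link rel="alternate" type="application/rss+xml"
          title="Site updates" href="https://example.com/feed.xml">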

It occurs to me that inadequate discovery tools were the hole in the decentralized internet. Google got its foot in the door of the web by solving the search problem. Once that was centralized, Google (and then Facebook, and then Twitter), bit by bit, centralized everything else. Was there a way to make finding things less centralized? Is there one today, as we look at web alternatives like Dat or IPFS?