Welcome to fasterandworse.com!

Your source for the latest in 90s web technology... and beyond!

Check out our new articles! 

Known Purpose and Trusted Potential

Published: May 16, 2024 - Updated: May 28, 2024

by Stephen Farrugia

On the 13th of May, 2024, OpenAI published their “Spring Update” video with demonstrations of their new product called GPT-4o, pronounced “For? Oh…” The presentation includes a series of smartphone demos where GPT-4o is spoken to and shown things via the camera. There are vague mentions of this model being more capable, but no detail about what that actually means. The presentation is about the new and improved interfaces between person and product: voice and vision input, and voice output.

“For the past couple of years we’ve been very focused on improving the intelligence of these models, and they’ve gotten pretty good. But this is the first time that we are really making a huge step forward when it comes to the ease of use” - Mira Murati, OpenAI CTO – OpenAI Spring Update, May 13, 2024

In one demo, an OpenAI research lead, Barret Zoph, shows GPT-4o a simple equation written on paper with a marker: 3x + 1 = 4. The product correctly identifies the equation and walks him through the steps to solve it.
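For reference, the standard solution takes two steps; this is just the algebra, not a transcript of GPT-4o’s walkthrough:

\begin{align*}
3x + 1 &= 4 \\
3x &= 3 && \text{(subtract 1 from both sides)} \\
x &= 1 && \text{(divide both sides by 3)}
\end{align*}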

One of the key improvements of this model, which they repeat often through the presentation, is how much faster it is to respond. They tell us how they’ve worked to move from a staccato style command-and-response interaction to a seamless conversation style. And this demo certainly shows how well they have succeeded.

What this demo also shows is that an immediate response—without hesitation—is an attribute of confidence. When someone responds immediately they are telling you they are sure of what they are saying, or at least, they want you to think they are sure.

Consider the real-world scenario of the algebra demo. Barret asks GPT-4o for help with a very simple equation. Simple enough for everyone in the audience to work out the answer in their heads in advance. This is by design so we’re free to marvel as the product walks him through the problem to reach the correct answer.

The implied use-case here is a child doing their homework without access to a person who can help them. If the child did have access to someone who could help them, GPT-4o would be redundant.

A separate demo video featuring Sal Khan of Khan Academy, and his son, inadvertently highlights this redundancy well. Sal supervises as GPT-4o guides his son through some trigonometry, nodding in approval at each step the product takes in solving the problem.

The voice synthesis is strong, confident, emotive, and most of all, fast in its responses. A human-like voice explaining—without hesitation. It’s impressive, but it’s important to remember that the usefulness would only exist if Sal wasn’t there to help, supervise, or correct any mistakes the bot might make.

In both of these videos the audience and the demonstrators are acting from a position where they have the ability to solve the problem themselves. The implied usefulness relies on there being trust that the model will be truthful and accurate without the knowledgeable participants present.

I can’t help but wonder what Sal Khan would do if he reviewed his son’s homework, which was done alone with the GPT-4o tutor, and found that any of it was incorrect.

The elephant-sized problem in the generative AI room is the unpredictable veracity of the responses. As a problem, it should be considered more important than ease-of-use and the creation of an interface that leans full-tilt into people’s tendency to anthropomorphise the product. It’s not just more important, it’s essential.

What OpenAI have presented with GPT-4o is a fresh paint job on a car with a dangerously corroded chassis. A false confidence machine. Improvements to the voice synthesis, response time, and the ability to “butt in” without throwing the bot off are nothing but very impressive cosmetics—user experience improvements. There is no new evidence that the product is more trustworthy, only that it is better at simulating trustworthiness.

The more useful something is, the more complexity people will endure to use it. GPT-4o is a reaction to the inverse. The less useful something is, the less complexity people will endure to use it. This is one of the fundamental motivations for corporate UX endeavours: a fear of potential, or paying, customers being caught in friction long enough to realise they don’t really need the thing.

Another fundamental motivator is a need to hide how the thing works. To know how something works is to know how it doesn’t really work as well, or as universally, as it may seem.

A need to remove complexity that obscures a purpose may be a motivator as well. But that’s only valid if the party behind the product knows what the purpose is, can clearly explain it, and can demonstrate it. If that’s not the case, we’re in the realm of potential over purpose. I’ve written about this in my article, Complicated Sticks.

It is unethical to slap an interface that convincingly simulates 100% confidence onto a product that is anything less than 100% accurate, let alone a product that its CTO, Mira Murati, calls “pretty good”.

No exceptions; no “it will get better”. If the house doesn’t have a roof, don’t paint the walls.

This does not mean that reduction or removal of complexity is inherently deceitful, but it does mean that the complexity which informs a person not how, but why, something works the way it does can be an important factor in their decision to use it.

Nothing could make this more evident than the crypto/web3 community’s obsession with “mass adoption”, which they generally reduce to a UX problem. They know that the complexity of crypto is intimidating to non-technical people (crimes and scams aside) so they relentlessly try to remove as much of the complexity as possible.

The unfortunate thing about removing complexity is that you never remove it, but rather, you move it to another place. The other place is always what crypto people like to call a “trusted third party”, the very thing that Bitcoin was created to eliminate.

“Commerce on the Internet has come to rely almost exclusively on financial institutions serving as trusted third parties to process electronic payments. While the system works well enough for most transactions, it still suffers from the inherent weaknesses of the trust based model” - Satoshi Nakamoto – Bitcoin white paper, October 31, 2008

Knowing how crypto works is key to it being useful. Trusting that crypto works has created, and will continue to create, fraud, crime, and financial hardship.

Coinbase and Binance are successful because the burden of complexity is on them. Every customer is trusting them as a third party. If cryptocurrencies were used according to the sacred word of Satoshi Nakamoto, they would be more like stashing cash in a safe or under the mattress than a high-tech, frictionless, secure system of value transfer. Every efficiency a cryptocurrency product creates in the interface between person and blockchain is a denial of the core value proposition of cryptocurrencies.

What this translates to is a lack of usefulness, or at least a lack of evidence that it is useful enough to overcome the technical barriers that make it hard to use.

Comparisons to the “early internet” fail at this very point because the early internet, and the early web, both flourished despite agonising—and expensive—connection methods and complicated software designed and created by software engineers. Even so, people were still clawing at their keyboards to get online.

Generative AI products being used for medical diagnosis, self-driving cars, or tutoring a child in mathematics suffer from the same burden of the spectrum of knowing and trusting. If the power of AI is to mitigate or completely remove human error, then either we are 100% certain of its reliability or we are led to believe that it is 100% reliable. The former is impossible, the latter is a design challenge.

That design challenge is also known as marketing. Because GPT-4o and the like are not technologies, they are products that are being marketed. Knowing what these things can’t do helps us understand the problems that will arise when these things are used anyway. The goal of the massively funded startups behind these products is to market the awareness of those problems away.



Complicated Sticks

Published: May 10, 2024 - Updated: May 16, 2024

by Stephen Farrugia

This week Apple shared an ad for the newest iPad which shows musical instruments, arts media, various creative tools, and entertainment products stacked inside a giant hydraulic press which, predictably, crushes them into the form of the new device.

Tools of human creativity being destroyed and replaced by a soulless aluminium slab, also predictably, did not go down well.

The ad is still available on YouTube at the time of publication despite the pissed off artists, musicians, and, apparently, Japan.

I’ll be damned if I’m going to add to the inane discourse around how an advertisement is received, especially an Apple one. What interests me here is the way the ad illustrates a product design trend which is becoming the unfortunate standard of the global tech industry.

This is the trend of making what I call complicated sticks. Complicated sticks are complex tech products that are useful for everything and nothing in particular. The trend is evident in these wispy ads that try to give a blanket impression of good vibes and positivity, a style more suited to fashion, fragrances, and marketing within flooded product categories than to supposedly innovative tech that helps us do things we couldn’t do before.

It’s an ad for blockchains, NFTs, and generative AI. It’s an ad for rationalism, techno-optimism, and effective altruism. It’s an ad for contextless drivel about user experience design and design systems divorced from any particular product association.

When these products work it’s on a fundamentally mechanical level, like server uptime or UI responsiveness, not in any sense of satisfying a concrete task or purpose.

Sometimes they are described as “the Swiss Army Knife of X”. The ultimate trinket, the icon of product design lore despite being a tool-of-last-resort that sometimes serves—if there isn’t a butter knife handy—as an emergency screwdriver.

But product comparisons to the Swiss Army Knife are presented and interpreted as desirable selling points instead of a testament to the product’s novelty status.

Yet products without purpose still gather a following. People swear by them. Those people can only do that if they have found their own way to use the product. Their own purpose that the thing can satisfy. Like how a chimpanzee finds a stick useful for fishing ants out of the ground. The products are nothing but complicated sticks.

These products force the rot of their usefulness, or the “enshittification”, to live only inside the heads of those who defined the terms for judging that usefulness, while the business behind the product only ever implied the terms. Finding the purpose is the fool’s errand.

Marketing that focuses on regular people’s specific purposes, or “use cases”, creates criteria of success or failure for anyone who attempts to use it in that way. The industry trend is to avoid specific purposes and sell potential experiences instead.

User experience is now on par with brand experience. It doesn’t matter what the product does as long as it is lighter, smaller, easier to use, and industrial-designed by Teenage Engineering. Potential in place of purpose is what separates an iPad from an iPod, blockchains from databases, and generative AI from text editors. The more complex the product, the more potential it has to have potential.

Some people have paid a fortune for the Tesla Cybertruck, a barely capable electric “utility” vehicle, which they defend for its potential to iteratively improve and eventually match the abilities of any other vehicle in that category.

Potential distracts from purpose. In order to move units these businesses need to prevent the product from taking a solid form in our minds where we can begin to consider how it might be useful. It needs to remain as ethereal as possible. It has to remain in the realm of what the late Edward de Bono called “porridge words”.

There are two attributes that can make a product take that form in our minds:

  • A clear explanation of what it is for
  • Being a physical object which takes up space in the world

Perhaps counter-intuitively, a clear explanation seems to be a more potent form-giver than literally giving the product form.

For the iPad it’s as if the advertised thinness and lightness are necessary features to draw as little attention as possible to its existence. It’s small and light enough to make it clear that it’s your fault for thinking something so small and light could serve any purpose better than a tailor-made tool.

Browse the web sites of Notion, Figma, or Slack—check the celebratory discourse around OpenAI, Midjourney, Ethereum, or Bitcoin—look at the promotional material for the Apple Watch, the Vision Pro, the iPad—and you won’t find any assertions more specific than abstract claims of creativity, collaboration, health, wealth, or productivity.

They are tools for everything and nothing in particular. They are complicated means to undisclosed ends and they all dance around the same flaccid pitch of “here, see what you can do with this”.



The Aura of Care

Published: June 27, 2023 - Updated: February 27, 2024

by Stephen Farrugia

In November 2022, Brian Chesky, CEO of Airbnb, began a tweet thread with “I’ve heard you loud and clear” in response to a customer backlash over the way they hid additional costs till the checkout page. “You feel like prices aren’t transparent…starting next month, you’ll be able to see the total price you’re paying up front” he said about a change that could be made urgently in a day, or carefully over a few.

When he said I’ve heard you loud and clear he was also telling his User Experience (UX) researchers and designers they were ignored, if they were heard at all. The dark pattern was no mistake; it was intentionally designed to deceive and benefit from excited holiday planners and their potential to give in to the sunk cost fallacy. Instead of addressing the ridiculous additional fees the company chose to trick customers into paying them. That’s not empathy, at best it’s apathy, at worst it’s hate.

The decision to fix it only came after the balance of business value and public relations started to tip the wrong way. Chesky presented himself as a model CEO doing right by his customers as if he wasn’t responsible for wronging them in the first place. People bought it too. He demonstrated how bright a performative aura of care can shine to hide questions about the business activity or even questions about the business’s legitimacy to exist.

In April of 2022 Twitter added the option to write short descriptions of the images you attach to a tweet. Those descriptions help vision-impaired people that rely on synthesised voice software to read out the contents of a page. The thing about image descriptions is that the World Wide Web Consortium’s (W3C) standards for HTML—the document structure language of the web (and Twitter)—have required them since 1999. When Twitter went live, that requirement was already seven years old, and it was twenty-three years old by the time they obeyed it completely.

To praise Twitter for recognition of vision-impaired people is like praising a heavy drinker for taking a hip flask to their kid’s school play instead of skipping out to the pub. They did the bare minimum, reluctantly, despite having UX researchers and designers on deck. For this they deserve no more than a collective why the fuck did it take so long?

Goodness in a product’s design tends to make more sense as a convenient side-effect of a business case. For Twitter, crowd-sourced image descriptions written for free can make a nice data set to sell for machine learning.

If we look at industry-wide examples we can see how intrinsic care, replaced with business incentives, leads to low-quality black-and-white photocopies of the original ideas. Everything becomes optimised to meet business requirements and any sense of care that survives is there by chance.

Since the beginning of the web, writing W3C-compliant HTML has been highly regarded among developers. Standards-compliant code makes the web accessible, but the design philosophy of prioritising accessibility also led to the unique quality of HTML being forgiving if the standards are ignored.

Showing something in a web browser is more accessible than showing nothing, so a web page will still look right if the code is not perfect. In the early days, this meant that the quality of the HTML wasn’t factored into timelines and budgets because it was extra work that didn’t change how the site looked. If a site was built with standards-compliant code it was because the developers wanted it that way and did it on their own time.

That all changed in the early 2000s when Search Engine Optimisation (SEO) arrived. The techniques for improving the visibility of a site in Google’s search results included rules for the structure of the HTML. These rules took some W3C standards and tied them to a tangible business case of heightened search visibility.

I remember the surreal experience of an SEO consultant presenting these rules to my web development team. We already knew everything they said because we understood web accessibility, but they were retelling these things as novel techniques for getting more sales leads from search.

Responsive Web Design (RWD)—a design philosophy for building sites that work for everyone regardless of the device they use or their connection speed—gained commercial adoption in a similar way, well after developers and designers had already seen its value as an empathetic design philosophy.

Google announced that “mobile-friendly” sites would be preferred in search results and some, not all, RWD techniques became convenient. Now responsiveness in commercial web apps focuses mostly on being visually accessible to devices used by a target demographic. Anything outside that is considered an edge case and ignored, or again, supported by developers and designers taking initiative in their own time. That’s why some sites will crash the browser on your parents’ iPad, use up your mobile data before anything renders, and fail basic accessibility tests. Browsing the web has become a reason in itself to upgrade a device.

And yet… User Experience has become part of the everyday lexicon. Normal people who don’t make tech products say they prefer a product “for the UX”. Normal people who do make tech products say their product “has great UX”. It’s generally accepted as a measure of how easy something is to use, how little it gets in the way.

Like usability before it, UX takes something that has been a core concern of commercial product design for as long as companies have sold products and treats it like some novel modern add-on. But the real innovation is making it seem like the ease of use, the user experience, is the only thing that matters, because sometimes a product doesn’t offer much else.

Notion is a popular cloud-based product that is marketed with no purpose more specific than productivity or collaboration. It takes existing products like wikis, project management tools, and document editors and mashes them together into one window. Notion is the sum of products that were already legitimised as being useful by themselves.

Despite inheriting all of its usefulness from other useful things, Notion’s success is the result of good usability design that makes it easy to use those things in one place. For Notion the UX is the product. Productivity and collaboration might seem like vague purposes to an individual but to a tech company they are compelling, concrete purposes. Businesses are sold corporate subscription plans for Notion and other products, like Slack and even Figma, which are imposed on staff as essential tools.

For employees these products are universal tools of nothing in particular. Each collaboration feature makes the anxiety of productivity ubiquitous. Little floating heads always watching over the document you’re working on, a perfect simulation of what we used to call micromanagement. They are virtual open-plan offices where everything you create becomes littered with comments and conversations you didn’t ask for.

The thing they all have in common is how strikingly easy they are to use. Part of that comes from very good usability design and part of it comes from the fact that you use them for a purpose you define yourself. When they say it’s for productivity instead of doing your taxes, they are benefiting from a criterion for failure so abstract that it doesn’t really mean anything. If you want to use it to do your taxes you can go ahead. But if it can’t help with some obscure tax calculation, you’re an edge case.

For a UX designer at Notion the concern is that it can be used easily, not how well it does a specific task for a specific expertise. And, look, I know how obvious and easy it is to dismiss this as how capitalism works. The problem is that the aura of care surrounding UX pretends capitalism can be coaxed into giving a shit. It chugs along as if UX designers and researchers are the ones who are going to cause a revolution of socialist CEOs who consider users beyond their money and their data.

But the inside secret of commercial UX is that the empathy is just a posture and the businesses benefit from the aura of care without having to entertain it. In non-profit, government, or volunteer-based open source projects, the posture can, and usually does, match the reality, but in commercial tech it’s always contingent on the strength of a business case.

The Google UX design course that says it will help you “empathise with users” is attracting the best intentioned people and setting them up for a future of despair.

That’s why UX can help legitimise products that are intrinsically bad for people who use them. Tell someone that cigarettes are easy to use and they’ll ask about the reasons for using them, but tell them about the user experience of cigarettes and they’ll ask what makes the experience good.

Search Twitter for “FTX UX” and you’ll find no shortage of “it had a great UX” tweets published well after the fraud was exposed. It doesn’t matter which fraud or how obvious the scam was beforehand, the same search will yield the same results. The UX aura of care shines brighter.

The posture is strengthened by a UX community that seems open in its contradictions. The discipline is detached from the substance of the underlying products it is applied to, so empathy for users is mixed in with discourse about psychological exploits for increasing user engagement.

There are Laws of UX that use psychology to design better products & services, and at the top of most UX book lists you’ll find Nir Eyal’s Hooked, which teaches how to build habit-forming products. Nir says he wants to see people hooked on products that promote healthy habits, but of course the ones getting rich from a product are going to believe their own bullshit when they say it’s harmless, healthy, or going to save the world.

Another seminal UX book is Steve Krug’s Don’t Make Me Think, which has popularised the relentless removal of “friction” from user interfaces for over two decades. When you’re trading crypto with your life savings you do want to think about everything you do, no matter how much the product is designed to avoid it.

Marketing is about attracting new customers and retaining existing ones and commercial UX is concerned with removing the barriers that prevent these. UX is powerful because it doesn’t seem like marketing and the practitioners don’t see themselves as being in the marketing business.

Like the sales tough guy that demonstrates his versatility by saying he can sell you this fucking pen. UX doesn’t care what you hit with the baseball bat, it just makes sure you don’t get splinters from it. Web3, NFTs, and blockchain products need this product-agnostic approach that keeps everything in the realm of experience, because blurry, uncertain, or non-existent usefulness is a form of friction itself.

Consider FTX and all the other centralised crypto exchange, trading, and lending platforms that turned out to be massive scams. Centralised crypto products come from a community-wide UX need to obscure necessary complexity rather than create usefulness that is concrete enough to justify it. Complexity justified by usefulness is obvious in products like Blender where a terrifying interface hasn’t stopped it from becoming an industry standard. The evidence that gaining the expertise to use it will pay off is overwhelming.

It is no wonder that crypto, metaverse, and now AI pushers are obsessed with UX. They talk about the user experience as a final barrier to adoption, as if people are clambering behind a reinforced wall for a prize they can see and know they need.

UX ignores questionable usefulness, and the bright aura of care distracts from real questions of ethics and harm. It hides the real intentions of the business, not just behind a posture, but behind UX professionals who have a genuine sense of care. UX researchers and designers talk about empathy because they are empathetic people. In a commercial context there is tension between that empathy and viable business activity, so the role becomes usability design by another name.

UX seniors working outside commercial constraints don’t help the situation. They push the “fight for the user” rhetoric in Medium articles, tweets, and LinkedIn posts. They goad young UX starters to push for empathetic values without acknowledging how few contexts those values are compatible with.

For most, choosing where you work is a luxury. It’s going to be the commercial UX roles that pay the best every time. Designing socially beneficial products is something to strive for, but not something that should weigh on the shoulders of a junior UX designer while their manager is asking them to draw a dark pattern in Figma.

UX needs to make clear distinctions between commercial design work and design as a social good so the aura of care is not just an aura. Until that happens we’ll continue to see the worst companies hire the best people to help them make the worst things.



Reverse Vapourware

Published: June 2, 2023 - Updated: June 29, 2023

by Stephen Farrugia

Vapourware claims to solve a real problem in a way that seems impressive for its time. Your money is gone before the truth comes out: the purpose is real but the product is vapour. But vapourware doesn’t work with the subscription model of the Software-as-a-Service market. Our perception of software as a product has changed as well. Marketing for software emphasises features, properties, and potential rather than any concrete purpose. The products are real but the purpose is vaporising.

The internet, the web, and email give us unprecedented access to other people, information, and entertainment. They serve their purpose. That’s why they’re irresistible. In contrast, algorithmic content feeds induce engagement to supplement their purpose. They are irresistible by design.

Criticism of harmful tech should always be aware of this distinction. When the distinction is ignored, the criticism is easy to dismiss as anti-progress. Paul Graham does exactly this in his 2010 essay The Acceleration of Addictiveness.

“It’s the same process that cures diseases: technological progress. And when progress concentrates something we don’t want to want—when it transforms opium into heroin—it seems bad. But it’s the same process at work.” - Paul Graham – The Acceleration of Addictiveness, July 2010

The conflation of addictive usefulness and designed addictiveness strengthens his dismissive stance. It’s also important to recognise that his arguments hinge on technology and not products. AI doomerism fuelled by proprietary AI product releases does the same thing. Capitalist-owned AI products are not getting an inch out of control if no one is paying for them. But call them technologies and the business accountability vaporises.

“You can’t put the genie back in the lamp” builds on the technology generalisation to create a sense of inevitability. It implies we are the ones who need to adjust and adapt, not the genie. It implies that these things will hurt us but only if we don’t learn how to protect ourselves from them. Debates over abstract concepts like addictive technology or existential AI risk distract us from foul play on a product level.

Gamification and manipulative engagement techniques allow products to thrive without a concrete purpose. Marketing for Notion uses broad purposes like “productivity” and “collaboration”. People love using Notion but they have to define their own purposes that the software can serve.

Marketing for reverse vapourware contains no trace of purpose at all. Web3 may be the best example of this. Web3 marketing, CEOs, and VCs rarely claim a concrete purpose. If they do it’s either dependent on some future event, described as unrealised potential, or doesn’t hold up to five minutes of critical thought.

We define our own purpose and for a product which can cause harm, finding a purpose is the fool’s errand. The businesses behind the products choose the purposes they endorse and distance themselves from the ones they oppose.

They can define the criteria for their own success and they free themselves from any criteria for failure.



Everything is Beautiful All of the Time

Published: January 25, 2023 - Updated: December 13, 2023