

Why it took so long: Developing the design mindset in the technology industries

In the 1970s and ’80s, computer interfaces seemed, even then, crude and unhelpful. “Design values” were generally resisted by the technology industries long after they had become an accepted characteristic of other everyday artifacts. Only recently has the computing industry expressed enthusiasm for the design mindset. The present author, who experienced the transition from letterpress printing through offset litho and Letraset to interaction design, suggests eight reasons why it took so long, and how differences in the cultures of design, education, engineering, and business all contributed.


In 2012 the US magazine Fast Company published an article, “Facebook Agrees: The Key to Its Future Success Is Design” (Boyd, 2012), describing the growth of Facebook’s design team from 20 people to 90 – still less than 2% of a company of nearly 5,000 people! (Rao, 2013). In the same year, the magazine reported a venture capital fund that would only fund startups with at least one designer among their founders (Allen, 2012).

This was news indeed. In the 1970s and ’80s, ever more people had had to interact with computers. They did so through the computer’s interface, typically a sequence of information exchanges between user and machine manifest in a changing on-screen graphic. But the experience of this interaction, although a step up from interacting through a programming language or typing arbitrary commands, seemed, even then, crude, ungainly and unhelpful. Despite famous exceptions, notably Steve Jobs’ insistence that Apple hardware and screens look elegant, the computer industry generally resisted for decades what might be called “designerly values”: not only economy, simplicity, clarity but also aesthetic complexity, emotional resonance, cultural meaning; designing things that work well, but also work well for people. Or, as Vitruvius wrote, architects should give buildings firmitas, utilitas, venustas: they should be robust, appropriate for their function, and give delight.

This resistance is surprising, given that since the early twentieth century a powerful ethos of design has dominated the education and professional practice of architects, for instance, and of graphic and product designers, and has long been the expected spirit of everyday products – buildings, vehicles, typography and so on. A part of this ethos derives from modernist-functionalist roots, such as the experiments of the Bauhaus, privileging logic, clarity and simplicity, and commitment to technological innovation. This approach seems particularly suited to designing the interaction between people and technological tools. As digital technology developed to encompass ever more of people’s everyday life, however, providing entertainment as well as information (environments as well as tools), simplicity and clarity were not enough: people appreciate satisfaction and delight as well as economy and efficiency.

However, it was not until the early 2010s – a century after the birth of modern design culture and three decades after computing began to be ubiquitous – that technology companies began to realise the importance of the emotional aspects of “design”. I myself lived through the transition from letterpress printing through offset litho and Letraset to interaction design, and experienced the chasm of understanding between the actors in the development of information technology – designers, educators, engineers, and business people. In this essay I suggest eight reasons why it took so long for an appreciation of the value of traditionally-trained designers to emerge.

1. Graphic designers were focused on print, consumed by the struggle to make computer technology achieve the quality they expected

Computer typesetting had been steadily developing in the 1970s and, although it had changed the work of the printer, it did not yet affect the work of graphic designers, who still had to:

– design the pages in pencil, mark up the text with fonts, sizes, weights
– send them to the typesetter and wait for the rough galleys
– check the galleys, have them corrected, and paste the text into place on paper grids with rubber cement or wax
– return the laid-out pages to the printer, who output the final high-quality text, pasted it in place on clean grids, photographed the pages, and made the plates for printing.

How primitive it sounds now! Because designers could not see the quality of the final product until it was printed, they had to work hard to imagine, drawing on their experience, what they wanted their design to look like in print and specify the font, spacing, grid, and layout that would produce the result they sought. They made sketches (“visuals”) of their designs, using pencil, ink, coloured pens or paper, which, according to the skill of the designer, gave a more or less accurate impression of the final artifact. But there was always a gap between the sketch and the final product. And they had to be right first time, because the cost of a change of mind – in money, time and reputation – was high. Rub-down lettering (Letraset) promised a more immediate impression of the final printed product but was fiendishly difficult – and slow – to apply evenly, and was only really feasible for headlines.

By the late 1970s it became possible for a computer typesetting machine to output a page as a whole, not just as a long column of text, eliminating the tedious job of pasting high-quality text in place in order to make the printing plates. But these computer-typesetting systems were large, complicated and very expensive.

Then, in 1984, Apple launched the Macintosh personal computer. Its black-on-white bitmapped screen meant that, instead of a single green-on-black mono-spaced font, it could show fonts of different sizes on screen, approximating how they would look printed (Wichary, 2005b). In the following year Apple launched the LaserWriter office printer, and Aldus the page-layout software PageMaker. The LaserWriter could be networked with up to 16 Macs, so even a small graphic design studio could have its own typesetting and proofing system and only needed a printing company for the final high-quality page output.

This radically changed the work balance between designer, typesetter and print works. Designers began not only to specify layout and fonts but also to do their own typesetting and page layout. It also changed the creative process: designers no longer had to specify everything from the mind’s eye, because the design could be adjusted on the computer screen and tested on the office printer until it looked right. The process became more reactive, responding to what the computer showed, looking and choosing rather than proactively thinking and deciding.
Because quality was initially so difficult to obtain, graphic designers focused primarily on improving the typographic results that could be achieved with this new technology. For many years they chafed under its initial limitations: tasks that seemed simple, like inserting a dropped capital or running type round an image, were torture. But once they started to feel in control of these new tools, and the quality improved, their struggle seemed finally over: they had tamed computers. Few were thinking about designing information that would in future be presented on screen, or about how graphic design could improve the dismal quality of the software tools they were using.

2. Designing within the limitations of the screen seemed a pointless task

The graphic designer must do two things: first, understand the structure of the information or message to be communicated; second, design the form that will communicate the message most effectively and appropriately to its audience. Graphic designers were used to producing the subtlest arrangements of composition on the page, delicate contrasts of tone and evenness of type, using these means to produce the right emotional tone – surprise or tranquility, seriousness or playfulness, for instance – and to lead the reader’s eye around the page, from the most to the least important information. But the first personal computers with visual displays in the late 1970s, such as the Apple II, Atari, and Commodore PET, had cathode-ray displays, with 40 x 24 green characters on a black background. Few designers addressed the design of information on screen because the graphic variables were so pitifully few: position on screen, characters reversed or flashing, and, if you were lucky, capital or lowercase characters, all on a mono-spaced pixel grid – about as limited as a typewritten page, which any competent secretary could design unaided. There seemed no point applying the skills of a graphic designer to screen design.

If typographic quality seemed unattainable, however, some people saw considerable scope for the visual structuring of information. One instance was AppleWorks (Apple II History, n.d.), an integrated combination of word processor, spreadsheet and database software for the Apple II. Despite its graphic inelegance, the logic and simplicity of its interface were exemplary, offering complete consistency of keyboard commands and feedback. You always knew where you were and what you were doing.

3. Computer companies focused on the problems of the technology not the people using it

We can think of the development of computer interfaces as a series of stages. The first stage required inputting a computer program to the computer’s memory to tell it what to do. On very early computers the program was entered using a bank of switches on the front of the machine, representing 0s and 1s. Later developments were successive levels of programming “languages”, which made this process less abstract and easier to read and understand. Though as early as 1954 IBM produced a cathode-ray display, the model 740, in the 1960s programs were still being input using punched cards or paper tape with coded holes and the results printed out on paper, often hours later – not exactly interactive. By the ’70s visual displays were more widespread and the programmer could type the program on a keyboard and immediately see the result on screen.

With the development of on-screen interfaces it became easier for non-programmers to use computers and to do things outside computing – writing text, say, playing a game, or making a 3D model of an object like a car. Here the interface was the means by which a person who had not written the program could understand what the computer could do and how to make it do it. In this stage the interface was still controlled through the alphanumeric keyboard. Using the first word processors on personal computers, for instance, one needed to type in a code to switch from typing mode to command mode, another code to make the type bold, another to switch it back to normal, and another to switch back into typing mode. It was a while before someone thought it might be a good idea to make the codes mnemonic: Command-B for bold, for instance.

In the first stage of computing the users were programmers, who knew how the program worked because they had written it, so making things easy to use was not a priority. This approach tended to persist even when non-programmers began to be the users. There was, however, a seminal exception: the Xerox Star office workstation. Begun in 1975 and based mainly in Xerox’s Palo Alto research centre (PARC), the Star project aimed to rethink the office for the digital age. Designed for office workers and executives, not programmers, it invented a completely new interaction paradigm: the direct manipulation on screen of graphic icons, in this case items familiar to office workers such as files and folders, which represented elements in the computer. One technology that made this possible was the bit-mapped screen: each pixel could be changed individually so icons could be drawn on screen and animated as they were moved with pointer and mouse. We now take for granted the WIMP interface (Windows, Icons, Menus, Pointer) but its invention was radical and remains fundamental.
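To make concrete what the bit-mapped screen changed, here is a minimal, purely illustrative sketch – hypothetical Python, with an invented icon, and nothing to do with the Star’s actual implementation – of a display in which every pixel is individually addressable, so an icon can be drawn, and redrawn, anywhere on screen rather than being confined to a grid of fixed character cells:

```python
# A tiny hypothetical 1-bit framebuffer: every pixel individually addressable.
WIDTH, HEIGHT = 72, 32
framebuffer = [[0] * WIDTH for _ in range(HEIGHT)]

# An invented 8x8 "folder" icon, one byte of bits per row (not a real Star icon).
FOLDER_ICON = [
    0b11110000,
    0b10001111,
    0b10000001,
    0b10000001,
    0b10000001,
    0b10000001,
    0b11111111,
    0b00000000,
]

def blit(icon, x, y):
    """Copy the icon's set bits into the framebuffer at pixel position (x, y)."""
    for row, bits in enumerate(icon):
        for col in range(8):
            if bits & (1 << (7 - col)):
                framebuffer[y + row][x + col] = 1

def show():
    """Print the framebuffer as text, '#' for lit pixels, '.' for dark ones."""
    for row in framebuffer:
        print("".join("#" if px else "." for px in row))

blit(FOLDER_ICON, 4, 4)    # draw the icon...
blit(FOLDER_ICON, 40, 12)  # ...and again at an arbitrary position, as if dragged
show()
```

On a character-cell display, by contrast, only fixed glyphs at fixed grid positions were possible; the pixel-level freedom sketched here is what allowed icons to be drawn, selected and dragged.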

The Star project assembled a large interdisciplinary team comprised of hard- and software engineers, who built the workstation and the underlying software, and, unusually, psychologists and designers, who worked on the graphical user interface (GUI). Before anything was built, many person-years of work went into its design: deciding the basic principles of how it should behave and how it should appear graphically so it could be easily understood, allowing people to focus on their work, not on how to use the system (Canfield Smith et al., 1982; DigiBarn Computer Museum, n.d.; Johnson et al., 1995).
The team struggled to explain the value of their work to the company, based a continent away to the east. The workstation was eventually launched in 1982 but Xerox, focused on competition in the copier market, where its patents had recently expired, did not capitalize on what the Star team had produced.

The Star workstation system was so expensive that only large companies could afford it. But in 1979 Steve Jobs visited PARC, saw the Star in action, and was immediately convinced that less specialised computer users, too, needed a graphical user interface and should not have to rely on typed commands and arbitrary codes. The principles of the Star GUI were immediately incorporated into the design of Apple’s Lisa, a personal computer for business eventually launched in January 1983 (Lineback, n.d.; Wichary, 2005a). Although the Lisa did not succeed commercially, it was a proof of concept: a GUI-based personal computer could be built. Jobs had meanwhile transferred his energies to the Macintosh, a much cheaper version of the Lisa. With its launch in 1984 we UK designers, for example, could buy a computer with a graphical user interface for £1,500 rather than £50,000.

In 1982 Byte magazine published a series of articles describing the Star’s development (Byte Magazine, 1982). For those few of us graphic designers already trying to bring design to software it was exciting to see how a black-on-white bitmapped screen could change the graphic potential of the design of interfaces, and the important role of graphic design in developing this entirely new way of operating computers.
The Star interface involved inventing a new way of representing the objects and actions of the computer, and communicating to its users how they could make it do what they wanted. It addressed traditional graphic design problems, such as legibility, structure, recognition and comprehensibility. But it also depended on a huge amount of new engineering – developing a new type of computer language, engineering the bitmapped display, new types of storage, programming applications for office work, and many more inventions – without which the interface would have been impossible. These new technologies made new kinds of design possible.

4. Technology use develops stage-by-stage

David Liddle led the Star design team and later founded Interval Research, a company focused on digital products for the consumer market. He identified three distinctive stages in the development of a new technology (Moggridge, 2007a). The first stage is that of the enthusiast, when “early adopters” are so thrilled about the technology itself, or what it can do for them, that they happily suffer all kinds of difficulty just to be able to use it. The second stage is that of the professional, when the technology has become more distributed among industries but remains a relatively rare skill. In this stage, difficulty of use may not be a disadvantage because the ability to overcome the difficulty is what the professional is selling. During this stage, too, employees must use a new technology, however user-unfriendly, because the decision to purchase it has typically been made by another department and on criteria other than ease or pleasure of use. Liddle’s third stage, finally, is that of the consumer, where people are not much interested in the technology; they just want it to do what it is meant to do with minimum fuss.

The 1960s and ’70s were the decades of the enthusiasts. Whether a huge, room-sized computer used by university researchers through the night in Cambridge, England, or the first home-brew computers assembled in Palo Alto, such systems were equally awkward to operate but offered results so excitingly unprecedented that this seemed irrelevant. The 1980s and early ’90s were the professional years: powerful workstations for typesetting, 3D architectural modeling, animation, industrial products, car design and so on. The late 1990s and the 2000s, finally, saw the triumph of the consumer, epitomised by Microsoft’s slogan, “a computer on every desk and in every home”, and Apple’s “computer for the rest of us”.

The consumer stage in computing brings many disruptive changes. Computers become everyday commodities, like washing machines, chosen and bought by their users. Software producers and products multiply, as do computing-based services. Most significantly, “good design”, defined as an integration of aesthetic attractiveness and ease of use, becomes treated less as a superficial option, more as a crucial instrument to sell computer products and services to this hugely wider market. Beyond price and function, good design gives producers a competitive edge. Designers are needed to make the technology understandable and desirable – which is not an engineering problem.

5. Examples of good interaction design were few

In computer technology’s consumer stage, computers moved from being tools to carry out work tasks to environments in which people also spent their leisure time. People began to expect from them the intellectual satisfaction and aesthetic appropriateness taken for granted in other aspects of daily life. As good examples slowly emerged, companies and their customers saw what the technology could achieve and what interfaces could be like.
As Donald Schön pointed out (Schön, 1983), design moves forward through exemplars. The more exemplars there are, the richer design culture becomes. In the early days of personal computing an industry was being founded from scratch. Emphasis was on engineering, making the technology work. There was little understanding of how graphic design could make human–computer interaction more efficient, and little bandwidth, literally or metaphorically, for cultural issues or what was seen as the “soft”, human side: usability, satisfaction, delight. So, with few exemplars, the culture of good interaction design developed slowly. The mid-century world of industrial products, however, had seen shining examples of commitment to design. At IBM in the 1950s and ’60s, for example, its head of corporate design Eliot Noyes hired some of the most talented American graphic and industrial designers and architects. “In a sense, a corporation should be like a good painting”, he wrote, “everything visible should contribute to the correct total statement; nothing visible should detract. Thus, a company’s buildings, offices, graphic design and so forth should all contribute to a total statement about the significance and direction of the company” (IBM, 2001). Similarly in Europe, the Italian entrepreneur Adriano Olivetti, from the late 1940s until his untimely death in 1960, hired a wide range of progressive artists, designers and architects to work on all aspects of the company’s production.

But this passion for excellent graphic and industrial design was slow to transfer to the design of computers. One task, however, the outer casing of computers, was clearly a task for designers. In 1979 the London industrial designer Bill Moggridge started work on the first portable clam-shell computer, the Grid Compass, launched in 1982 (Moggridge, 2007b). The industrial designers Shelley Evenson and John Rheinfrank, at Richardson Smith in Columbus, Ohio, also worked on projects, design languages and strategies for companies such as Xerox. And in 1984 the German designer Hartmut Esslinger developed the elegantly modernist “Snow White” appearance of the Apple IIc casing – a dramatic departure from the characteristic beige of the Apple II and much contemporary American office equipment (Caula, 2012).

Although most design attention focused on the outer appearance of hardware, as these examples show, a change was coming. Moggridge (2007b) later wrote that he was very pleased with the industrial design of the Compass, but when he saw the final product he realized that users would hardly look at his careful design of its exterior but instead spend hours in the virtual world of the interface on its screen. It was then he knew that his firm ID2 needed to get into what he and his colleague Bill Verplank later christened “interaction design”.

In 1982 the only notably good example of this was the Star interface. A year later, as previously mentioned, another convincing example was released, Apple’s Lisa interface. Its successor, for the Mac, however, had very different qualities. In Moggridge’s (2007c) encyclopaedic oral history of interaction design, Bill Atkinson, who designed the Lisa’s icons and worked on the Mac interface, recalls:

You need a way to show there is something in the trash. […] The very first version of the trashcan I wrote had little flies buzzing around it, but it got sanitized out. […] I think some of the work in designing the Lisa user interface was a little bit hampered by who we thought it was for; we thought we were building for an office worker, and we wanted to be cautious not to offend. When I was working on the Mac, we thought the person we were building it for was a fourteen-year-old boy, so that gave us more freedom to come more from the heart, and a little less from fear of offending. […] Those of us on the Macintosh team were really excited about what we were doing. The result was that people saw a Mac and fell in love with it. Only secondarily did they think, ‘How can I justify buying this thing?’ There was an emotional connection to the Mac that I think came from the heart and soul of the design team.

A year before the Mac’s launch the team had hired Susan Kare, who, she later said, fell into the job by happy accident (Layers Design Conference, 2015). With a background in art history and some experience with Letraset, she worked on the fonts, giving consistency to the graphic style of the interface. More importantly perhaps, she gave a distinctive expressivity to the icons, making them playful as well as efficient (Wichary, 2005b). All form, industrial as well as graphic, implicitly communicates an emotional tone, intentionally or not.

6. Design education for the new digital world was initially rare

A few interaction design programmes based on design principles, rather than on engineering or psychology, were begun in the 1970s. But the student numbers were small, and subsequent programmes emerged only slowly.
The first design-based programme was in 1975: the Visible Language Workshop (VLW) at MIT, led by Muriel Cooper, formerly art director at the MIT Press. In 1985 the VLW became part of MIT’s Media Lab. Cooper and her students developed new ways of presenting information on screen that were influential, particularly because of the VLW alumni who ultimately moved to Silicon Valley.

The Interactive Telecommunications Program (ITP) at NYU Tisch School of the Arts was founded in 1979 and directed from 1983 until her death in 2013 by Red Burns, a documentary filmmaker particularly interested in the social and community uses of film. ITP grew out of the informal programme at Tisch that she co-founded, the Alternate Media Center, where experiments in new technologies such as two-way cable TV and Teletext investigated how these might be used for services for seniors or developmentally disabled adults. ITP was broadly-focused, exploring how new technology might be exploited for practical and artistic ends.
A third programme was the one I started in London in the early ’80s. Inspired by a bumper edition of U&lc (the type magazine of the foundry ITC) that surveyed all the ways computers were being used in typography and graphic design, I bought a computer in 1981 and programmed a desktop tool for page layout: back-of-envelope sketches linked to space calculations. Building this, I became more interested in how basic graphic design knowledge and skill could make a program easier to use. So in 1982, at London’s St Martin’s School of Art (later merged with the Central School to become Central Saint Martins), I started a part-time post-graduate Diploma in Computers and Graphic Design. This aimed to teach practising designers about computation so that they would understand how software was built and, ideally, use their existing design expertise to suggest better tools for the future. A designer from Apple’s Multimedia Lab, Kristee Kreitman, happened to see a display of inkjet illustrations in the window of St Martin’s in Long Acre and went in to find out more. This triggered a long and fruitful collaboration between art and design schools in London and Silicon Valley.

In 1990 I moved to the Royal College of Art (RCA), the UK’s graduate school of art and design, where I was given responsibility for a small industrial design programme, Computer Related Design (CRD), whose students were starting to design computer interfaces. I developed this programme of teaching and research with the sponsorship of Interval Research Corporation and Apple. Because there were no interaction design jobs in England, almost all the early alumni started their careers in the USA, further strengthening London–Silicon Valley bonds.

Each of these programmes had a distinctive character shaped by the context and the background of the founders: broadcast media at ITP, graphic design at St Martin’s, information design and computer science at VLW, and graphic and industrial design at the RCA.

From the end of the 1980s, the most important supporter of design education in Silicon Valley was Joy Mountford, an English psychologist who had worked for Honeywell on aircraft cockpit design. Between 1986 and 1994 she directed Apple’s Human Interface Group (HIG), an interdisciplinary research team of around 30 engineers, designers, sound experts and psychologists, all working on interfaces for the future. To encourage universities to develop the interdisciplinary programmes which she thought vital to successful interface design, to provide a talent pool for her group, and to demonstrate to Apple what young people might come up with, she instituted the Apple University Competition. Each year she chose six universities from around the world with design programmes in some way related to interaction design, then set them a challenge.

The first challenge, in 1990, was to invent and design possible scalable interfaces for devices of different sizes (which did not yet exist). Apple gave each programme around $20,000 to buy equipment and paid for the most promising student team and their professor in each university to fly to California and spend three days at Apple, polishing their presentation and presenting to Apple people. They gained inspiration from the work they saw at Apple and from meeting students and professors from the other universities.
The grant helped these programmes gain credibility in their institutions. In some craft-based design schools, for instance, computing was seen as an anti-creative threat. Some teachers, however, hoped that exposing young creatives to computing could enrich the new technology, allowing it to take its place with earlier technologies like print and construction in enhancing everyday life and culture. Though the number of participants was small, the Apple competition began to assemble a network of like-minded students and professors and seeded the idea that design values and practice had a role in creating digital artifacts.
By the end of the 1990s a few more design-based interaction design programmes had begun – in the USA, those at Pasadena’s ArtCenter College of Design, California Institute of the Arts (CalArts), and Carnegie Mellon. (Surprisingly, San Francisco’s California College of the Arts (CCA) only started a dedicated interaction design programme in 2010.) Continental Europe responded slightly more rapidly: programmes had begun in Malmö and Utrecht, and several in Germany, including New Media Art and Design at the Berlin University of the Arts (UdK).
A significant new USA-Europe connection arrived in Italy in 2000. Roberto Colaninno, CEO of Telecom Italia, had asked an engineer and senator, Franco Debenedetti, to develop a plan for an institute of higher education in Ivrea, Olivetti’s hometown near Turin. The first idea was to develop a business school focused on the new generation of telecommunication services that Telecom was starting to provide. However, as there were already strong business schools in Italy, Debenedetti thought it more useful to develop something that didn’t already exist in Italy. In this he was encouraged by Barbara Ghella, who ran a Milan software company and had found it difficult to recruit designers. Mindful of the great Olivetti tradition of design, Debenedetti and Ghella went fact-finding to Palo Alto, which at the time was a strong centre of design for digital technology. They talked to Bill Verplank and to IDEO, a merger of Bill Moggridge’s ID2 with two other Silicon Valley product design studios. Following Moggridge’s suggestion that they should also talk to my RCA department in London, I eventually became the founding director of Interaction Design Institute Ivrea (IDII), a school and research institute which attracted students and faculty from around the world. Generously funded, IDII was able to host many international visitors and generated a wide range of projects, the most famous of which was undoubtedly Arduino, the low-cost microcontroller board which allows non-engineers to build computation-controlled physical devices, benefitting designers and makers worldwide. An equally important IDII product was a network of graduates able to marry technology and culture.

That said, the number of interaction design alumni remains severely inadequate. Beginning in 2013, IBM, seeking to transform the role of design in the company, hired 750 formally-trained designers over three years, and in 2015 committed to doubling this number (Lohr, 2015). In 2016 the designer Bob Baxley (2016) took the US Bureau of Labor’s estimate of the software engineers (developers and programmers) then working in the USA, assumed a ratio of one designer (“the bare minimum needed”) to every 10 engineers, and calculated that America needs 159,100 interaction designers – a professional population which the current graduation rate cannot possibly achieve. Worldwide, of course, the need is far greater.
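(Baxley’s Bureau of Labor figure is not quoted here, but working backwards from his result it must have been about 1.59 million engineers:

$$159{,}100 \text{ designers} = 1{,}591{,}000 \text{ engineers} \times \frac{1 \text{ designer}}{10 \text{ engineers}}$$

an inferred figure, not one stated in this essay.)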

7. Scientific and academic paradigms were too rigid

Interaction design as a discipline evolved partly from human–computer interaction (HCI), a field of study in which psychologists struggled to persuade computer engineers that basic ergonomic knowledge about user behaviour, honed in the design of World War II aircraft cockpits, could be useful in the design of computer systems.

That HCI remained dominated intellectually and professionally by the values and procedures of the “hard” sciences is evident in the 1990 manifesto of Mitchell Kapor, designer of the Lotus 1-2-3 spreadsheet program (Kapor, 1991; Winograd, 1996, pp. 1-6). The following extract defends designers from the overbearance of engineers:

When you go to design a house you talk to an architect first, not an engineer. Why is this? Because the criteria for what makes a good building fall substantially outside the domain of what engineering deals with. […] Design disciplines are concerned with making artifacts for human use. Architects work in the medium of buildings, graphic designers work in paper and other print media, industrial designers on mass-produced manufactured goods, and software designers on software. The software designer should be the person with overall responsibility for the conception and realization of the program. […] One of the main reasons most computer software is so abysmal is that it’s not designed at all, but merely engineered. Another reason is that implementors often place more emphasis on a program’s internal construction than on its external design, despite the fact that as much as 75 percent of the code in a modern program deals with the interface to the user.

The need for Kapor’s manifesto became clear to me in 1990, when I attended HCI’s main conference (confusingly called CHI). Announcing that I taught at the Royal College of Art, I was often asked, “Why are you here?” Partly to answer this, over the next ten years students of my newly formed CRD programme made guerrilla presentations of their projects at CHI, hunting down a projector and an empty room, and fly-posting the conference centre, to attract a curious and increasingly enthusiastic audience.

Our work did not sit easily with CHI’s engineering-dominated ethos, nor did it fit the traditional format of academic papers. We aimed instead to provoke, amuse, inspire – and demonstrate that there could be more to the design of computer artifacts than pure function. It was uphill work. But over the 1990s the twentieth-century design ethos pioneered at the Bauhaus seeped into Silicon Valley. RCA students took internships at Apple, the Apple/IBM startup Taligent, and IDEO. These collaborations brought understanding of the digital design world back to London, and a European design impetus to the Valley.

Other initiatives helped. In 1992, inspired by Kapor’s manifesto and supported by David Liddle, Terry Winograd, a Stanford professor of computer science, invited many different kinds of designers to a two-day workshop, “Bringing Design to Software”, to describe how they understood and practised their professions (Winograd, 1996).

Also in 1992, Microsoft’s co-founder Paul Allen, with Liddle, established Interval Research Corporation in Palo Alto to develop technologies and companies to bring digital technology to the consumer market. Interval’s sponsorship of the CRD Research Studio came from Liddle’s conviction that design was important for developing consumer products, and Interval’s need to access design insight and expertise. Interval people spent time in London, and the CRD group – including Durell Bishop, Anthony Dunne, William Gaver, and Fiona Raby – spent time at Interval, where some were later employed.

8. City and Valley had different values

Perhaps the most fundamental reason why it took so long was a cultural gap. Few of us participants understood the big differences between what might be called “artist-designers” and “engineer-designers”, and perhaps between Silicon Valley, emerging before our eyes from orange orchards and wilderness, and European culture, shaped by its ancient metropolises.
What struck me most strongly, arriving for my first residence at Apple in the early 1990s, was the Valley’s energy and optimism, the belief that so much waited to be invented and engineered, and that a couple of clever guys like Hewlett and Packard or Jobs and Wozniak could start something in their garage and make it big.

In this world engineering was king: a culture of precision, measurement and certainty. My impression was that design was generally seen as troublingly subjective and unmeasurable, providing merely a pretty exterior to what had real value, the engineered artifact. It was regarded as at best optional, at worst somewhat deceitful – a cosmetic.

At their extremes, the mindsets and methods characteristic of design and engineering are very different. Engineers use reason and logic; designers, craft and tacit knowledge, a process they cannot easily explain. Engineers converge towards a solution; designers maddeningly diverge as long as possible to generate many potential solutions. Engineers privilege the analytic; designers the synthetic. Engineers focus on the functional aspects of the artifact; designers balance its function with its role in culture and society. Engineers focus on what can be built; designers on what would also be useful, usable and satisfying.

One barometer of changing attitudes to design is the evolving viewpoint of the influential series of books by Donald Norman, a psychology professor at the University of California, San Diego. His first book about interface design, The psychology of everyday things (1988), described how many things in our world are poorly designed, and how difficult it is to see how to use them: doors you don’t know whether to pull or push, hotel showers which scald you because you can’t figure them out, cookers that don’t make clear which knob controls which burner. Norman’s conclusion in this book was that designers are just incompetent.

In the early 1990s he became vice-president of Apple Research Labs, concentrating on improving Apple’s interface design. Working with Joy Mountford’s team of designers, he saw designers in action and began to appreciate the difficulty of reaching good solutions to what were often very complex problems. Norman’s book Emotional Design (2004) therefore offers a more rounded and sympathetic critique of design: that design concerns not only the resolution of practical requirements of manufacture and use but also the emotional effect of one design solution or another – something which cannot be engineered with certainty. Emotional Design describes, in a way perhaps more convincing to the engineering mindset, the cognitive science that lies behind emotional response.
Designers educated in an older, more metropolitan tradition were used to clients who understood design’s value and did not expect the process to be fully explicable or defined by rules. They were not prepared for a world that not only did not understand their values and strategies but thought them irrelevant, arty and rather flaky. It was not until Apple became for a while the most valuable company in the world, selling products considerably more expensive than their competitors’, and economic research showed that the stocks of companies with a strong commitment to design performed better and had weathered the 2008 crash robustly (Design Council, 2013; Rae, 2013), that technology companies started to think that, to maintain a competitive edge, the design mindset might be indispensable.

9. Looking forward

I consider 1981, when I bought my first computer and started to program a page-layout tool, the start of my transition from graphic designer to interaction designer. More than ten years later, in 1993 at the ICOGRADA conference in Glasgow, I gave a talk entitled “Humanising Technology: Not much progress so far” (Crampton Smith, 1994) which described many of the bad interaction designs in common use. They were bad because of elementary gaffes of graphic communication design.

In the two decades since then, the design of interaction with digital systems has come a long way. We do not so often encounter really terrible interactions with digital devices and systems. However, the digital world has changed greatly: whereas in the 1980s and ’90s the issue was to make computer tools useful and a pleasure to use, today we are designing the virtual environments in which people hang out together, carry on business, buy things, petition politicians. This is a very different design space. Maybe we can get by in a world where our tools are not very satisfying to use, but to spend our lives in badly designed virtual environments is a real impoverishment. Where is the grace, complexity, wit and surprise we enjoy in other parts of our everyday culture – buildings, posters, books, consumer products, clothes, advertisements? I finished that 1993 talk by saying:

Designers can’t sit on the sidelines. They need to become involved in the design of interactive products – information and entertainment systems, electronic products, responsive environments. Our world is being transformed by these technologies and designers need to be there, making things beautiful as well as practical, expressive as well as functional. Whether we like it or not, culture in the next century will be conditioned by electronics and telecommunications. Artists and designers need to be players, not spectators.

The digital world provides robustness – most digital products today work well enough. It is beginning to provide products that fit what people need or want to do. But, with a few wonderful exceptions, it still does not offer much delight. More than ever artists and designers need to be players.


References

Allen, Enrique. (2012, November). Silicon Valley’s new secret weapon: Designers who found startups. In Fast Company. Retrieved from http://www.fastcodesign.com/1665795/silicon-valleys-new-secret-weapon-designers-who-found-startups.

Apple II History (n.d.). Retrieved from http://apple2history.org/history/ah19/.

Baxley, Bob. (2016). The best job in the world. Retrieved from http://conferences.oreilly.com/design/ux-interaction-iot-us/public/schedule/proceedings.

Boyd, E. B. (2012, April). Facebook agrees: The key to its future success is design. In Fast Company. Retrieved from http://www.fastcodesign.com/1669366/facebook-agrees-the-secret-to-its-future-success-is-design.

Byte Magazine. (1982). Issue 4/1982, 242. Retrieved from
https://archive.org/details/BYTE_Vol_07-04_1982-04_Human_Factors_Engineering.

Canfield Smith, David, Harslem, Eric, Irby, Charles, Kimball, Ralph, & Verplank, Bill. (1982). Designing the Star Interface. In Byte, issue 4/1982, 242-282. Retrieved from http://www.guidebookgallery.org/articles/designingthestaruserinterface.

Crampton Smith, Gillian. (1994). Humanising technology: Not much progress so far. Design Renaissance, ICOGRADA/ICSID international conference proceedings. Brighton: Open Eye Press.

Caula, Rodrigo. (2012). Hartmut Esslinger’s early Apple computer and tablet designs. Retrieved from http://www.designboom.com/technology/hartmut-esslingers-early-apple-computer-and-tablet-designs/.

Design Council. (2013). Design delivers for business. Retrieved from
http://www.designcouncil.org.uk/resources/report/design-delivers-business.

DigiBarn Computer Museum. (n.d.). Xerox Star 8010 screenshots. Retrieved from http://www.digibarn.com/collections/screenshots/xerox-star-8010/index.html.

IBM. (2001). Good design is good business. Retrieved from http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/gooddesign/transform/.

Johnson, Jeff, et al. (1995). The Xerox Star: A retrospective. In Readings in human-computer interaction: Toward the year 2000. San Francisco: Morgan Kaufmann. Retrieved from http://www.digibarn.com/friends/curbow/star/retrospect/.

Kapor, Mitchell. (1991). A Software Design Manifesto. In Dr Dobb’s Journal, 16/1, 62-67.

Layers Design Conference. (2015). John Gruber’s interview of Susan Kare. Retrieved from https://vimeo.com/151277875.

Lineback, Nathan. (n.d.). Apple Lisa screenshots. Retrieved from http://toastytech.com/guis/lisaos1LisaTour.html.

Lohr, Steve. (2015, November 14). IBM’s design-centered strategy to set free the squares. New York Times. Retrieved from http://www.nytimes.com/2015/11/15/business/ibms-design-centered-strategy-to-set-free-the-squares.html?_r=1.

Moggridge, Bill. (2007a). Interview with David Liddle. Designing interactions (pp. 239-251). Cambridge, MA: MIT Press.

Moggridge, Bill. (2007b). Introduction. Designing interactions (pp. 9-14). Cambridge, MA: MIT Press.

Moggridge, Bill. (2007c). Interview with Bill Atkinson. Designing interactions (p. 101). Cambridge, MA: MIT Press.

Norman, Donald A. (1988). The psychology of everyday things. New York: Basic Books. Reissued (2002) as The design of everyday things.

Norman, Donald A. (2004). Emotional design. New York: Basic Books.

Rae, Jeneanne. (2013). What is the real value of design? DMI Review, Winter. Retrieved from https://dmi.site-ym.com/store/ViewProduct.aspx?id=2481768.

Rao, Leena. (2013, January). Facebook will grow headcount quickly in 2013. Retrieved from http://techcrunch.com/2013/01/30/zuck-facebook-will-grow-headcount-quickly-in-2013-to-develop-future-money-making-products.

Schön, Donald. (1983). The reflective practitioner: How professionals think in action. London: Temple Smith.

Wichary, Marcin. (2005a). GUIdebook Gallery: Apple Lisa. Retrieved from http://www.guidebookgallery.org/articles/inventingthelisauserinterface.

Wichary, Marcin. (2005b). GUIdebook Gallery: Apple Mac System 1.1 screenshots. Retrieved from http://www.guidebookgallery.org/screenshots/macos11.

Winograd, Terry. (1996). Bringing design to software. New York, NY: ACM Press and Addison-Wesley. Retrieved from http://hci.stanford.edu/publications/bds/.

This article was published in AIS/Design Storia e Ricerche, issue 8, October 2016.

Gillian Crampton Smith

Gillian Crampton Smith’s passion for typography started when as a schoolgirl she learnt to set type by hand and print with an Albion press. At university she designed and did the artwork for several magazines, Letrasetting headlines and pasting down galleys with rubber cement. She went on to freelance as a graphic designer, spending several years with Times Newspapers and Sight and Sound, the film magazine. The problems posed by photosetting that publication pushed her, in 1981, to write a program to do page-layout sketches on screen – early desktop publishing. In the 1980s she taught graphic design and typography at London’s St Martin’s School of Art, where she started the post-graduate degree in Computers and Graphic Design. In the 1990s, at the Royal College of Art, she was professor and director of the MA in Computer Related Design and of its associated research group, working in collaboration with UK and US technology companies. In 2001 she moved to Italy as founding director of Interaction Design Institute Ivrea and then, with Philip Tabor, started the Interaction Design track at Iuav University of Venice, where she taught until 2015. She and Tabor are now planning a new Masters in Interaction Design at H-Campus, the new education institution near Venice. She is honorary professor at the Potsdam University of Applied Sciences and in 2014 was awarded the ACM SIGCHI Lifetime Achievement in Practice Award.
