Rock n’ roll, sports and philanthropy
By PHUONG LE
Tuesday, October 16
SEATTLE (AP) — Personal computers, conservation, pro football, rock n’ roll and rocket ships: Paul G. Allen couldn’t have asked for a better way to spend, invest and donate the billions he reaped from co-founding Microsoft with childhood friend Bill Gates.
Allen used the fortune he made from Microsoft — whose Windows operating system is found on most of the world’s desktop computers — to invest in other ambitions, from tackling climate change and advancing brain research to finding innovative solutions to solve some of the world’s biggest challenges.
“If it has the potential to do good, then we should do it,” Gates quoted his friend as saying.
Allen died Monday in Seattle from complications of non-Hodgkin’s lymphoma, according to his company Vulcan Inc. He was 65. Just two weeks ago, Allen, who owned the NFL’s Seattle Seahawks and the NBA’s Portland Trail Blazers, had announced that the same cancer he had in 2009 had returned.
Gates, who met Allen at a private school in Seattle, said he was heartbroken to have lost one of his “oldest and dearest friends.”
“Personal computing would not have existed without him,” Gates said in a statement, adding that Allen’s “second act” as a philanthropist was “focused on improving people’s lives and strengthening communities in Seattle and around the world.”
Over his lifetime, Allen gave more than $2 billion to efforts aimed at improving education, science, technology, conservation and communities.
“Those fortunate to achieve great wealth should put it to work for the good of humanity,” Allen wrote several years ago, when he announced that he was giving the bulk of his fortune to charity. He said that pledge “reminds us all that our net worth is ultimately defined not by dollars but rather by how well we serve others.”
Allen, who played guitar, built a gleaming pop culture museum in his hometown to showcase his love of rock n’ roll, and funded underwater expeditions that made important shipwreck discoveries, including a U.S. aircraft carrier lost during World War II.
Yet in a sense, Allen also lived up to the moniker once bestowed on him by Wired magazine: “The Accidental Zillionaire.” He was a programmer who coined Microsoft’s name and made important contributions to its early success, yet was overshadowed by his partner’s acerbic intellect and cutthroat business sense.
At the company’s founding, for instance, Allen let Gates talk him into taking the short end of a 60-40 ownership split. A few years later, he settled for an even smaller share, 36 percent, at Gates’ insistence. Reflecting on that moment in his memoir, Allen concluded that he might have haggled more, but realized that “my heart wasn’t in it. So I agreed.”
Allen was born in Seattle. After graduating from the city’s private Lakeside School, where he met Gates, Allen spent two years at Washington State University. The two friends both dropped out of college to pursue the future they envisioned: A world with a computer in every home.
“There would be no Microsoft as we know it without Paul Allen,” said longtime technology analyst Rob Enderle, who also consulted for Allen.
Allen and Gates founded Microsoft in Albuquerque, New Mexico, and their first product was a computer language for the Altair hobby-kit personal computer, giving hobbyists a basic way to program and operate the machine.
After Gates and Allen found some success selling their programming language, MS-Basic, the Seattle natives moved their business in 1979 to Bellevue, Washington, not far from its eventual home in Redmond.
Microsoft’s big break came in 1980, when IBM Corp. decided to move into personal computers and asked Microsoft to provide the operating system.
Gates and Allen agreed, even though they didn’t have one to offer. To meet IBM’s needs, they spent $50,000 to buy an operating system called QDOS from another startup in Seattle — without, of course, letting on that they had IBM lined up as a customer. Eventually, the product refined by Microsoft became the core of IBM PCs and their clones, catapulting Microsoft into its dominant position in the PC industry.
The first versions of two classic Microsoft products, Microsoft Word and the Windows operating system, were released in 1983. By 1991, Microsoft’s operating systems were used by 93 percent of the world’s personal computers.
Allen served as Microsoft’s executive vice president of research and new product development until 1983, when he resigned after being diagnosed with Hodgkin’s disease.
But Allen left Microsoft knowing he and Gates would be forever linked in the history of technology.
“We were extraordinary partners,” Allen wrote. “Despite our differences, few co-founders had shared such a unified vision — maybe Hewlett and Packard and Google’s Sergey Brin and Larry Page, but it was a short list.”
After leaving Microsoft, Allen remained interested in technology, especially the field of artificial intelligence, which he recalled first piqued his interest when he was still a teenager, after reading “I, Robot,” a science fiction book by Isaac Asimov.
“From my youth, I’d never stopped thinking in the future tense,” Allen wrote in his 2011 memoir, “Idea Man.”
With his sister Jody Allen in 1986, Allen founded Vulcan, which oversees his business and philanthropic efforts. He founded the Allen Institute for Brain Science and the aerospace firm Stratolaunch, which has built a colossal airplane designed to launch satellites into orbit. He also backed research into nuclear-fusion power and scores of technology startups.
Allen also funded maverick aerospace designer Burt Rutan’s SpaceShipOne, which in 2004 became the first privately developed manned spacecraft to reach space.
The SpaceShipOne technology was licensed by Sir Richard Branson for Virgin Galactic, which is testing a successor design to carry tourists on brief hops into lower regions of space.
Yet Allen never came close to replicating Microsoft’s success. What he always seemed to lack, Enderle said, was another Bill Gates to help fulfill his visions.
“He was a decent engineer who got the timing on an idea right once in his life, and it was a big one,” Enderle said.
When Allen released his memoir, he allowed “60 Minutes” inside his home on Lake Washington, across the water from Seattle, revealing collections that ranged from the guitar Jimi Hendrix played at Woodstock to vintage war planes and a 300-foot yacht with its own submarine.
“My brother was a remarkable individual on every level,” his sister Jody Allen said in a statement. “Paul’s family and friends were blessed to experience his wit, warmth, his generosity and deep concern,” she added.
Paul Allen’s influence is firmly imprinted on the cultural landscape of Seattle and the Pacific Northwest, from the bright metallic Museum of Pop Culture designed by architect Frank Gehry to the computer science center at the University of Washington that bears his name.
In 1988, at age 35, he bought the Portland Trail Blazers professional basketball team. He told The Associated Press that “for a true fan of the game, this is a dream come true.”
He also was a part owner of the Seattle Sounders FC, a Major League Soccer team, and bought the Seattle Seahawks. Allen could sometimes be seen at games or chatting in the locker room with players.
Associated Press writers Michael Liedtke in San Francisco and Lisa Baumann in Seattle contributed to this report.
When the line between machine and artist becomes blurred
October 16, 2018
Professor of Computer Vision, Rutgers University
Ahmed Elgammal does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
With AI becoming incorporated into more aspects of our daily lives, from writing to driving, it’s only natural that artists would also start to experiment with artificial intelligence.
In fact, Christie’s will be selling its first piece of AI art later this month – a blurred face titled “Portrait of Edmond Belamy.”
The piece being sold at Christie’s is part of a new wave of AI art created via machine learning. Paris-based artists Hugo Caselles-Dupré, Pierre Fautrel and Gauthier Vernier fed thousands of portraits into an algorithm, “teaching” it the aesthetics of past examples of portraiture. The algorithm then created “Portrait of Edmond Belamy.”
The painting is “not the product of a human mind,” Christie’s noted in its preview. “It was created by artificial intelligence, an algorithm defined by [an] algebraic formula.”
If artificial intelligence is used to create images, can the final product really be thought of as art? Should there be a threshold of influence over the final product that an artist needs to wield?
As the director of the Art & AI lab at Rutgers University, I’ve been wrestling with these questions – specifically, the point at which the artist should cede credit to the machine.
The machines enroll in art class
Over the last 50 years, several artists have written computer programs to generate art – what I call “algorithmic art.” It requires the artist to write detailed code with an actual visual outcome in mind.
One of the earliest practitioners of this form was Harold Cohen, who wrote the program AARON to produce drawings that followed a set of rules Cohen had created.
But the AI art that has emerged over the past couple of years incorporates machine learning technology.
Artists create algorithms not to follow a set of rules, but to “learn” a specific aesthetic by analyzing thousands of images. The algorithm then tries to generate new images in adherence to the aesthetics it has learned.
To begin, the artist chooses a collection of images to feed the algorithm, a step I call “pre-curation.”
For the purpose of this example, let’s say the artist chooses traditional portraits from the past 500 years.
Most of the AI artworks that have emerged over the past few years have used a class of algorithms called “generative adversarial networks.” First introduced by computer scientist Ian Goodfellow in 2014, these algorithms are called “adversarial” because there are two sides to them: One generates random images; the other has been taught, via the input, how to judge these images and deem which best align with the input.
So the portraits from the past 500 years are fed into a generative AI algorithm that tries to imitate these inputs. The algorithms then come back with a range of output images, and the artist must sift through them and select those he or she wishes to use, a step I call “post-curation.”
So there is an element of creativity: The artist is very involved in pre- and post-curation. The artist might also tweak the algorithm as needed to generate the desired outputs.
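As a toy illustration of that generate-then-curate loop, the sketch below stands in for the whole pipeline with plain arrays. To be clear, this is not a real GAN: there are no neural networks and no adversarial training here, the “judge” is reduced to a distance score against the average of the pre-curated inputs, and every name and number is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pre-curation: the artist's chosen training set. Each "image" is just a
# flat feature vector here; in a real GAN these would be actual portraits.
training_portraits = rng.normal(loc=1.0, scale=0.2, size=(1000, 64))
aesthetic = training_portraits.mean(axis=0)  # what the judge has "learned"

def generate(n_candidates, dim=64):
    """Generator side: propose random candidate images."""
    return rng.normal(loc=1.0, scale=1.0, size=(n_candidates, dim))

def judge(candidates):
    """Discriminator side: higher score = closer to the learned aesthetic."""
    return -np.linalg.norm(candidates - aesthetic, axis=1)

# Post-curation: the artist keeps only the best-scoring outputs.
candidates = generate(500)
keep = 10
selected = candidates[np.argsort(judge(candidates))[-keep:]]
print(selected.shape)  # (10, 64)
```

In a genuine generative adversarial network, both sides would be deep networks trained against each other; what the sketch preserves is the propose, score, select skeleton that the artist’s pre- and post-curation sit on top of.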
Serendipity or malfunction?
The generative algorithm can produce images that surprise even the artist presiding over the process.
For example, a generative adversarial network being fed portraits could end up producing a series of deformed faces.
What should we make of this?
Psychologist Daniel E. Berlyne studied the psychology of aesthetics for several decades. He found that novelty, surprise, complexity, ambiguity and eccentricity tend to be the most powerful stimuli in works of art.
The generated portraits from the generative adversarial network – with all of the deformed faces – are certainly novel, surprising and bizarre.
They also evoke British figurative painter Francis Bacon’s famous deformed portraits, such as “Three Studies for a Portrait of Henrietta Moraes.”
But there’s something missing in the deformed, machine-made faces: intent.
While it was Bacon’s intent to make his faces deformed, the deformed faces we see in the example of AI art aren’t necessarily the goal of the artist nor the machine. What we are looking at are instances in which the machine has failed to properly imitate a human face, and has instead spit out some surprising deformities.
Yet this is exactly the sort of image that Christie’s is auctioning.
A form of conceptual art
Does this outcome really indicate a lack of intent?
I would argue that the intent lies in the process, even if it doesn’t appear in the final image.
For example, to create “The Fall of the House of Usher,” artist Anna Ridler took stills from a 1929 film version of the Edgar Allan Poe short story “The Fall of the House of Usher.” She made ink drawings from the still frames and fed them into a generative model, which produced a series of new images that she then arranged into a short film.
Another example is Mario Klingemann’s “The Butcher’s Son,” a nude portrait that was generated by feeding the algorithm images of stick figures and images of pornography.
I use these two examples to show how artists can really play with these AI tools in any number of ways. While the final images might have surprised the artists, they didn’t come out of nowhere: There was a process behind them, and there was certainly an element of intent.
Nonetheless, many are skeptical of AI art. Pulitzer Prize-winning art critic Jerry Saltz has said he finds the art produced by AI artists boring and dull, including “The Butcher’s Son.”
Perhaps they’re correct in some cases. In the deformed portraits, for example, you could argue that the resulting images aren’t all that interesting: They’re really just imitations – with a twist – of pre-curated inputs.
But it’s not just about the final image. It’s about the creative process – one that involves an artist and a machine collaborating to explore new visual forms in revolutionary ways.
For this reason, I have no doubt that this is conceptual art, a form that dates back to the 1960s, in which the idea behind the work and the process is more important than the outcome.
As for “The Butcher’s Son,” one of the pieces Saltz derided as boring?
It recently won the Lumen Prize, an award dedicated to art created with technology.
As much as some critics might decry the trend, it seems that AI art is here to stay.
Comment: Chris Crawford
This entire discussion hinges completely upon one’s definition of art, and it seems that every person on this planet has their own definition of the word. But subjectivity is central to art, so I’ll plunge ahead with some opinions of my own.
I agree wholeheartedly with your point that intent is crucial to art. Those million monkeys who eventually manage to produce a Shakespeare play are not artists. And in fact there is an element of the “million monkey illusion” in the paintings you describe, for the artist in the ‘post-curation’ process is merely selecting the good output from the bad output. Can we not say the same of an artist selecting that Shakespeare play from the zillions of pages of random text produced by the monkeys?
On the other hand, every genuine artist generates a flood of work, much of which the artist will reject as flawed. Writers are taught to “kill your own children” – write lots of stuff, and throw away most. So are artists themselves merely less random monkeys?
We have seen many attempts to generate art with computers – over thirty years ago somebody published a book of poetry generated by algorithms: “The Policeman’s Beard is Half-Constructed”.
Here’s a truly wild idea: let’s apply thermodynamics to artistic evaluation! In thermodynamics, we have a notion called “entropy” that I’ll simplify to the ratio of accepted states to generated states. (This also ties in with Shannon’s definition of information content.) In the case of the million monkeys, we ask how many scripts they had to generate to get their Shakespeare play. The smaller the number, the more “artistic” the result is. If the painting algorithm generated a thousand images for every image that was found acceptable, but another painting algorithm generates only a hundred images for every image that we find equally acceptable, then we say that the latter algorithm is more artistic.
It’s all just information, isn’t it?
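Crawford’s back-of-the-envelope metric can be made concrete. The function below is one reading of his proposal (my formalization, not his words): express the winnowing as Shannon-style bits, the log of candidates generated per work accepted, so that a smaller number of bits counts as “more artistic.”

```python
import math

def selection_bits(generated, accepted):
    # Bits of information implied by keeping `accepted` works
    # out of `generated` candidates: log2 of the winnowing ratio.
    return math.log2(generated / accepted)

a = selection_bits(1000, 1)  # algorithm A: ~9.97 bits of winnowing
b = selection_bits(100, 1)   # algorithm B: ~6.64 bits of winnowing
# Under Crawford's criterion, B is the "more artistic" algorithm:
# it needed far less selection effort per acceptable image.
print(b < a)  # True
```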
Evolution is at work in computers as well as life sciences
October 16, 2018
Assistant Professor of Integrative Biology & Computer Science and Engineering, Michigan State University
Arend Hintze receives funding from NSF and Strength in Numbers Game Company.
Michigan State University provides funding as a founding partner of The Conversation US.
Artificial intelligence research has a lot to learn from nature. My work links biology with computation every day, but recently the rest of the world was reminded of the connection: The 2018 Nobel Prize in Chemistry went to Frances Arnold together with George Smith and Gregory Winter for developing major breakthroughs that are collectively called “directed evolution.” One of its uses is to improve protein functions, making them better catalysts in biofuel production. Another use is entirely outside chemistry – outside even the traditional life sciences.
That might sound surprising, but many research findings have very broad implications. It’s part of why just about every scientist wonders and hopes, not so much that they themselves will be selected for a Nobel Prize, but, far more likely, that the winner might be someone they know or have worked with. In the collaborative academic world, this isn’t terribly uncommon: In 2002, I was studying under a scholar who had studied under one of the three co-winners of that year’s Nobel Prize in Physiology or Medicine. This year, it happened again – one of the winners has written a couple of papers with a scholar I have collaborated with.
Beyond satisfying my own vanity, the award reminds me how useful biological concepts are for engineering problems. The best-known example is probably the invention of Velcro hook-and-loop fasteners, inspired by burrs that stuck to a man’s pants while he was walking outdoors. In the Nobel laureates’ work, the natural principle at work is evolution – which is also the approach I use to develop artificial intelligence. My research is based on the idea that evolution led to general intelligence in biological life forms, so that same process could also be used to develop computerized intelligent systems.
When designing AI systems that control virtual cars, for example, you might want safer cars that know how to avoid a wide range of obstacles – other cars, trees, cyclists and guardrails. My approach would be to evaluate the safety performance of several AI systems. The ones that drive most safely are allowed to reproduce – by being copied into a new generation.
Yet just as nature does not make identical copies of parents, genetic algorithms in computational evolution let mutations and recombinations create variations in the offspring. Selecting and reproducing the safest drivers in each new generation finds and propagates mutations that improve performance. Over many generations, AI systems get better through the same method nature improves upon itself – and the same way the Nobel laureates made better proteins.
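The select-and-mutate loop described above can be sketched in a few lines. Everything in this example is an invented stand-in: each “driver” is a parameter vector, and a hidden target vector plays the role of the unknown safest settings. In real work the safety score would come from a driving simulation, not from distance to a known answer.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "ideal" controller settings the population must discover.
target = rng.normal(size=8)

def safety(driver):
    # Toy fitness: higher = safer (closer to the hidden ideal settings).
    return -float(np.linalg.norm(driver - target))

population = [rng.normal(size=8) for _ in range(30)]
first_generation_best = max(safety(d) for d in population)

for generation in range(50):
    # Selection: only the safest third of the population "reproduces".
    population.sort(key=safety, reverse=True)
    parents = population[:10]
    # Reproduction: copies of the parents, perturbed by random mutation.
    population = [p + rng.normal(scale=0.1, size=8)
                  for p in parents for _ in range(3)]

final_best = max(safety(d) for d in population)
print(final_best > first_generation_best)  # True: evolution improved safety
```

Recombination (mixing the parameters of two parents) is omitted here for brevity; adding it would make the sketch a fuller genetic algorithm, but selection plus mutation is already enough to show fitness climbing across generations.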
In the effort to understand human intelligence, many researchers are working to reverse-engineer the brain, figuring out how it works at all levels. Complex gene networks control the neurons that form the layers of the neocortex, which sit atop a highway of interconnections. These interconnections support communication between the different cortical regions that underlie most of our cognitive functions. All of this is integrated into the phenomenon of consciousness.
Deep learning and neural networks are computer-based approaches that attempt to recreate how the brain works – but even they can only achieve the equivalent activity of a clump of brain cells smaller than a sugar cube. There remains an enormous amount to learn about the brain – and that’s before trying to write the intensely complicated software that can emulate all those biological interactions.
Capitalizing on evolution can make systems that seem lifelike and are inherently as open-ended and innovative as natural evolution is. It is also the key methodology used in genetic algorithms and genetic programming. The Nobel Prize committee’s recognition highlights a technology that has evolution at its core. That indirectly justifies my own research approach and the idea that evolution in action is a critical research topic with vast potential.