The Conversation — A sampling


Staff & Wire Reports




Thousands of mental health professionals agree with Woodward and the New York Times op-ed author: Trump is dangerous

September 6, 2018

Author

Bandy X. Lee

Assistant Clinical Professor, Yale School of Medicine, Yale University

Disclosure statement

Bandy X. Lee does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Bob Woodward’s new book, “Fear,” describes a “nervous breakdown of Trump’s presidency.” Earlier this year, Michael Wolff’s “Fire and Fury” offered a similar portrayal.

Now, an op-ed in The New York Times by an anonymous “senior White House official” describes how deeply the troubles in this administration run and what effort is required to protect the nation.

None of this is a surprise to those of us who, 18 months ago, put together our own public service book, “The Dangerous Case of Donald Trump: 27 Psychiatrists and Mental Health Experts Assess a President.”

As the volume’s editor, I focused on Trump’s dangerousness because my area of expertise is violence prevention. Approaching violence as a public health issue, I have consulted with governments and international organizations, in addition to spending 20 years assessing and treating violent offenders.

The book proceeded from an ethics conference I held at Yale, my home institution. At that meeting, my psychiatrist colleagues and I discussed balancing two essential duties of our profession. First is the duty to speak responsibly about public officials, especially as outlined in “the Goldwater rule,” which requires that we refrain from diagnosing without a personal examination and without authorization. Second is our responsibility to protect public health and safety, or our “duty to warn” in cases of danger, which usually supersedes other rules.

Our conclusion was overwhelmingly that our responsibility to society and its safety, as outlined in our ethical guidelines, overrode any etiquette owed to a public figure. That decision led to the collection of essays in the book, which includes some of the most prominent thinkers in the field: Robert J. Lifton, Judith Herman, Philip Zimbardo and two dozen others. The decision was controversial among some members of our field.

We already know a great deal about Trump’s mental state based on the voluminous information he has given through his tweets and his responses to real situations in real time. Now, this week’s credible reports support the concerns we articulated in the book beyond any doubt.

These reports are also consistent with the account I received from two White House staff members who called me in October 2017 because the president was behaving in a manner that “scared” them, and they believed he was “unraveling.” They were calling because of the book I edited.

Once I confirmed that they did not perceive the situation as an imminent danger, I referred them to the emergency room, in order not to be bound by confidentiality rules that would apply if I engaged with them as a treating physician. That would have compromised my role of educating the public.

The psychology behind the chaos

The author of the New York Times op-ed makes clear that the conflict in the White House is not about Trump’s ideology.

The problem, as the author sees it, is the lack of “any discernible first principles that guide his decision making … his impulsiveness [that] results in half-baked, ill-informed and occasionally reckless decisions that have to be walked back, and there being literally no telling whether he might change his mind from one minute to the next.”

These are obviously psychological symptoms reflective of emotional compulsion, impulsivity, poor concentration, narcissism and recklessness. They are identical to those that Woodward describes in numerous examples, which he writes were met with the “stealthy machinations used by those in Trump’s inner sanctum to try to control his impulses and prevent disasters.”

They are also consistent with the course we foresaw early in Trump’s presidency, which concerned us enough to outline it in our book. We tried to warn that his condition was worse than it appeared, would grow worse over time and would eventually become uncontainable.

What we observed were signs of mental instability – signs that would eventually play out not only in the White House, as these accounts report, but in domestic situations and in the geopolitical sphere.

There is a strong connection between immediate dangerousness – the likelihood of waging a war or launching nuclear weapons – and extended societal dangerousness – policies that force separation of children from families or the restructuring of global relations in a way that would destabilize the world.

Getting worse

My current concern is that we are already witnessing a further unraveling of the president’s mental state, especially as the frequency of his lying increases and the fervor of his rallies intensifies.

I am concerned that his mental challenges could cause him to take unpredictable and potentially extreme and dangerous measures to distract from his legal problems.

Mental health professionals have standard procedures for evaluating dangerousness. Violence potential is best assessed not through a personal interview but through past history and a structured checklist of a person’s characteristics.

These characteristics include a history of cruelty to animals or other people, risk taking, behavior suggesting loss of control or impulsivity, narcissistic personality and current mental instability. Also of concern are noncompliance or unwillingness to undergo tests or treatment, access to weapons, poor relationship with significant other or spouse, seeing oneself as a victim, lack of compassion or empathy, and lack of concern over consequences of harmful acts.

The Woodward book and the New York Times op-ed confirm many of these characteristics. The rest have been evident in Trump’s behavior outside the White House and prior to his tenure.

That the president has met not just some but all these criteria should be reason for alarm.

Other ways in which a president could be dangerous are through cognitive symptoms or lapses, since functions such as reasoning, memory, attention, language and learning are critical to the duties of a president. He has exhibited signs of decline here, too.

Furthermore, when someone displays a propensity for large-scale violence, such as by advocating violence against protesters or immigrant families, calling perpetrators of violence such as white supremacists “very fine people” or showing oneself vulnerable to manipulation by hostile foreign powers, these behaviors can promote a much more widespread culture of violence.

The president has already shown an alarming escalation of irrational behavior during times of distress. Others have observed him to be “unstable,” “losing a step” and “unraveling.” He is likely to enter such a state again.

Violent acts are not random events. They are end products of a long process that follows recognizable patterns. As mental health experts, we make predictions in terms of unacceptable levels of probability rather than on the basis of what is certain to happen.

Trump’s impairment follows a pattern familiar to a violence expert like me, but given its severity, one does not need to be a specialist to know that he is dangerous.

What next?

I believe Woodward’s book and the revelations in the New York Times op-ed have placed great pressure on the president. We are now entering a period when the stresses of the presidency could mount as the special counsel’s investigation advances.

The degree of Trump’s denial and resistance to the unfolding revelations, as expressed in a recent Fox interview, is telling of his fragility.

From my observations of the president over an extended time – via his public presentations, his direct thoughts expressed in tweets and the accounts of his close associates – I believe that the question is not whether he will look for distractions, but how soon and to what degree.

Several thousand mental health professionals who are members of the National Coalition of Concerned Mental Health Experts share the view that the nuclear launch codes should not be in the hands of someone who exhibits such levels of mental instability.

Just as suspicion of crime should lead to an investigation, the severity of impairment that we see should lead to an evaluation, preferably with the president’s consent.

Mental impairment should be evaluated independently from criminal investigations, using medical criteria and standardized measures. A sitting president may be immune to indictments, but he is subject to the law, which is strict about public safety and the right to treatment when an individual poses a danger to the public because of mental instability. In the case of danger, the patient does not have the right to refuse, nor does the physician have the right not to take the person as a patient.

This evaluation may have been delayed, but it is still not too late. And mental health professionals have extensive experience assessing, restraining and treating individuals much like Trump – it is almost routine.

This is an updated version of an article originally published on September 7, 2018; it reflects new information about the author’s contact with White House staff.

Low-income neighborhoods would gain the most from green roofs in cities like Chicago

September 6, 2018

Author

Ashish Sharma

Research Assistant Professor, University of Notre Dame

Disclosure statement

Ashish Sharma does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Heat waves aren’t just a source of discomfort. They’re the nation’s deadliest weather hazard, accounting for a fifth of all deaths caused by natural hazards in the U.S.

Most of the time, low-income people who live in cities face the biggest risks tied to extreme heat. That’s because urban areas, especially neighborhoods with few parks or yards, absorb high amounts of solar radiation during the day – keeping night temperatures higher than in suburbs and rural areas.

I’m an atmospheric scientist who studies urban environments in an interdisciplinary way that combines science, engineering and social sciences. I belong to a team of researchers and other professionals that’s looking into one solution we believe will help cool off homes, businesses and other structures all summer long: green roofs.

Urban ecosystems

Green infrastructure encompasses a range of methods to manage weather impacts, providing many community benefits in cost-effective ways.

For example, using permeable pavement, planting and preserving trees and other green spaces, establishing vertical gardens on a building’s exterior and making rooftops white can all help moderate urban temperatures, cut utility bills and make neighborhoods nicer places to live.

Many cities are also experimenting with green roofs – rooftops that are partially or completely covered in drought-resistant plants, with drainage and leak detection systems – to see whether they can reduce urban heat.

These roofs can serve as a source of insulation or shade, cut electricity consumption, add green space and reduce air pollution. However, bunching too many of them together in large areas could actually reduce air quality by increasing humidity and pollution.

I led a recent study that used an interdisciplinary approach to see where it would make the most sense to install green roofs to cool off homes in hot neighborhoods. As we explained in Environmental Research Letters, an academic journal, we identified Chicago’s most vulnerable, heat-stressed neighborhoods – communities that would benefit most from this amenity.

Straining utilities and burdening the poor

When temperatures spike in cities, electricity use rises sharply, straining utilities and leaving the grid susceptible to power outages. When the lights go out, critical services such as drinking water, transportation and health care can be jeopardized. And poorer people, whose neighborhoods tend to be the hottest, can be the most at risk.

Some of the poorest Americans, of course, do not even have air conditioning. In other cases, they may have it installed but face so much economic hardship that they can’t afford to use it.

Chicago is most vulnerable to outages in July, when temperatures tend to peak. Electricity usage gets nearly as high in December, due to the widespread use of Christmas lights throughout the holiday season, the electric heat consumed by 20 percent of local residents and the fact that December contains many of the year’s longest nights.

Green roofs can help avoid outages by lowering rooftop surface temperatures. In turn, residents may use less air conditioning, easing the strain on the grid when it matters most. But how green roofs should be deployed to maximize these benefits remains an open question.

Where to invest

My team identified the neighborhoods that had the most to gain from green roofs by determining which ones were most vulnerable to heat, would see the greatest reductions in rooftop temperatures from green roofs and used the most electricity for air conditioning.
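The study’s actual scoring procedure is more involved than can be shown here, but the core idea – combining several normalized factors into a single priority ranking – can be sketched in a few lines of Python. The neighborhood names, values and equal weights below are hypothetical, not the study’s data.

```python
# Rough sketch of a composite green-roof priority score, assuming each
# factor has already been normalized to a 0-1 scale. Neighborhood names,
# values and the equal weights are hypothetical, not the study's data.

neighborhoods = {
    # name: (heat_vulnerability, rooftop_cooling_potential, ac_electricity_use)
    "Neighborhood A": (0.9, 0.7, 0.8),
    "Neighborhood B": (0.4, 0.9, 0.3),
    "Neighborhood C": (0.8, 0.6, 0.9),
}

def priority(factors, weights=(1/3, 1/3, 1/3)):
    """Weighted average of the normalized factors."""
    return sum(f * w for f, w in zip(factors, weights))

# Rank neighborhoods from highest to lowest priority for green roofs.
for name, factors in sorted(neighborhoods.items(),
                            key=lambda item: priority(item[1]),
                            reverse=True):
    print(f"{name}: priority {priority(factors):.2f}")
```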

People who reside in poor, vulnerable neighborhoods consistently use relatively little air conditioning. However, businesses located in vulnerable neighborhoods do use more energy than enterprises in more affluent areas, because temperatures tend to get and stay higher in poorer neighborhoods, requiring more energy to cool down interiors.

We designed steps for urban planners and city officials to scientifically set priorities for a public effort to install green roofs, neighborhood by neighborhood.

Most of the communities we determined would get the biggest benefits from green roofs are located on Chicago’s South Side and West Side. Given that between 1986 and 2015 an average of 130 people lost their lives across the U.S. every year due to heat stress, for many of these residents it could be a matter of life and death.

What the 25th Amendment says about presidents who are ‘unable’ to serve

September 6, 2018

The 25th Amendment defines what happens if a president is ‘unable’ to discharge his duties.

Author

Brian Kalt

Professor of Law and Harold Norris Faculty Scholar, Michigan State University

Disclosure statement

Brian Kalt does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Partners

Michigan State University provides funding as a founding partner of The Conversation US.

A stunning, unsigned op-ed in The New York Times reported on Sept. 5 that members of President Donald Trump’s Cabinet discussed removing him from power by using the 25th Amendment, but decided against it to avoid causing a “constitutional crisis.”

As a law professor who studies the presidency, I have written extensively on the 25th Amendment.

Interest in this form of presidential removal may be high, but evidence suggests it could not be used successfully against Trump at this point.

What is the 25th Amendment?

The U.S. Constitution has always specified that if the president suffers an “inability to discharge” his powers, the vice president takes over. But it supplied no details on how, exactly, this might be done.

The 25th Amendment, added in 1967, defines what happens if a president becomes “unable to discharge the powers and duties of his office.”

The president may declare himself unable to do his job and empower the vice president temporarily. Both Ronald Reagan and George W. Bush used this process before being sedated for surgery.

Alternatively, the vice president and a majority of the Cabinet may deem the president “unable to discharge the powers and duties of his office” and transfer power to the vice president. The president may later declare himself able and try to retake power.

But if the vice president and Cabinet object within four days, and are backed by two-thirds majorities in both the House and Senate, the vice president stays in power.

Impeachment and the 25th Amendment

The latter provision, which constitutes Section 4 of the 25th Amendment, is the “complex process for removing the president” referred to by the anonymous New York Times op-ed writer.

Section 4 has never been used. But it was seriously considered once.

In 1987, during a changeover in staff, President Reagan’s incoming team was advised to think about using Section 4. Mired in scandal, recovering from surgery and discouraged by Republicans’ disastrous results in the 1986 congressional elections, Reagan had become so disengaged that staffers reportedly signed his name to documents he’d never even read.

Reagan soon bounced back, showing himself quite capable of discharging his powers and duties. His new staff dropped any consideration of Section 4.

My understanding is that “unable” means being incapable of wielding power – not using it destructively. When a president misuses his powers, impeachment is the Constitution’s designated remedy.

By design, successfully using Section 4 requires much more support than impeachment, which needs just majority support in the House and two-thirds in the Senate. Displacing the president using the 25th Amendment, on the other hand, requires the additional support of the vice president, the Cabinet, and more of the House.

Because President Trump currently has enough support to avoid serious impeachment efforts, Section 4 seems wholly unfeasible.

How passports evolved to help governments regulate your movement

September 7, 2018

Author

John Torpey

Presidential Professor of Sociology and History, City University of New York

Disclosure statement

John Torpey receives funding from the German Marshall Fund.

The Trump administration is denying passports to U.S. citizens who live in Texas near the U.S.-Mexico border, according to news reports.

The administration is accusing applicants of having inadequate documentation of their birth on U.S. soil, and refusing to issue them passports on that basis.

Critics argue this is part of a tide of anti-immigrant measures that includes other Trump administration efforts to restrict entry to the U.S. Those measures range from the travel ban on Muslims from certain countries entering the U.S. to White House proposals to develop a merit-based immigration system.

Meanwhile, the entry of thousands of immigrants and refugees into Europe in recent years has generated a populist backlash against outsiders.

These developments raise fundamental questions about migration from country to country: When and how did governments get the power to limit people’s movements? And how did passports come to play such a crucial role?

I explored these questions in the research I did for my book, “The Invention of the Passport.” I believe this history can help us understand how governments have assumed so much control over where people can go.

Moving around

Throughout much of European and American history, labor was forced. Both landowners and states sought to restrict the movement of slaves and serfs in order to prevent the loss of their labor forces. Before the 19th century, however, their ability to keep people from leaving was tenuous and a major source of concern for their owners. In the United States, patrols helped enforce fugitive slave laws, but their reach was limited.

Nobles, merchants and free peasants may have moved about freely, but could be shut in or out of a city in an emergency if the gates were closed.

Until fairly recently, stopping people from leaving a plantation or farm was more important to governments than keeping people from coming in, at least during peaceful times.

That changed following the French Revolution, which began in 1789. Nationalism – the idea that particular “peoples” or “nations” should govern themselves – became a powerful force in Europe and, gradually, around the world. By the middle of the 19th century, both U.S. slavery and European serfdom declined as a result of rising notions of “free labor” and the desire to make populations feel a sense of belonging to the country. The shift toward free, mobile labor meant people had more opportunity than ever to move around.

There were major exceptions: By the early 20th century, the overwhelming majority of states in the world were still authoritarian or colonial. People who lived there could not freely move about.

However, after World War II and the gradual breakup of colonial empires, moving within countries came to be widely understood as a matter of individual freedom. Such movement facilitated the ability of laborers to go where they were needed, and thus tended to be supported by governments.

People leaving a country might still have been regulated by their government in the post-war era. But this became less of a concern as democracy spread. More democratic countries were less worried about people leaving than were those that forced their populations to stay and work, such as those “behind the Iron Curtain.”

It was control over the entry of outsiders that became paramount with the mid-20th-century triumph of nation-states. Foreigners, the thinking goes, might not have the interests of “the people” at heart. A kind of permanent suspicion took hold in which foreigners were deemed ineligible for entry without evidence that they would not become troublesome. Possession of a passport helped by showing who a person was and where they could be sent if they proved undesirable.

As I argue in my book, this transformation in regulating movement created a new world that would be largely unrecognizable to those who lived before World War I. Governments everywhere now restrict during peacetime the entry of people they deem “undesirable” on criminal, ethnic, economic, medical and demographic grounds.

Meanwhile, movement within countries loosened up, although particular spaces – such as military bases, prisons and areas containing valued resources – often remain off limits to many.

Since then, crossing international borders has become the big challenge for people wishing to move. Passports became key to regulating this process.

Papers, please

Passports, seemingly modest documents, were introduced gradually in many places in the modern world. In the United States, the federal government in 1856 asserted the exclusive right to issue passports and mandated that they be issued only to U.S. citizens.

Once simple pieces of paper, passports have evolved into standardized booklets that identify persons and tell governments where they should be sent if they are deemed inadmissible – their fundamental purpose in international law.

Today, passports are perceived mainly as documents that are used to constrain entry into a country, weeding out the relatively rare individual who might be a criminal, a terrorist or someone otherwise at odds with the receiving government’s preferences.

Since the 9/11 terrorist attacks, governments have developed a greater interest in technological means of identifying border-crossers. For example, governments that belong to the standard-setting International Civil Aviation Organization have developed machine-readable passports with encrypted identification information, making them harder for anyone to use other than the actual bearer.

Those whose movements are being scrutinized so intently today in North America and Europe are from countries whose citizens are often regarded as undesirable due to poverty, culture, religion or other attributes. Entry of these outsiders has generated a wave of support for nationalist, populist parties that are upending the traditional openness to foreigners in the United States and fueling xenophobia in Europe.

By challenging the passport applications of people born near the Mexican border, the Trump administration is also reminding us that passports are a reflection of one’s citizenship. Without one, these Americans can’t leave the country and count on being able to return; their freedom to remain in the U.S. is at risk.

We live in a world in which the entry of those who are deemed “desirable” is greatly facilitated, while that of those deemed “undesirable” is greatly constrained. Freedom of movement into other countries is a reliable expectation only for those from the rich world with no blemishes on their records; for the rest, crossing borders can be very difficult, indeed.

Canada will be part of Trump’s new NAFTA – corporate lobbyists on both sides of the border will ensure it

September 7, 2018

Author

Christina Fattore

Associate Professor of Political Science, West Virginia University

Disclosure statement

Christina Fattore does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Partners

West Virginia University provides funding as a member of The Conversation US.

The announcement last month that the U.S. and Mexico had reached an agreement to replace NAFTA without Canada surprised trade experts around the globe. A deadline of Aug. 31 was set for the Canadians to join or be left out in the cold – and hit with fresh tariffs.

The news was stunning because negotiators for all three countries had been trying to hammer out a new accord for over a year, ever since President Donald Trump followed through on his campaign threat to demand the North American Free Trade Agreement be scrapped or replaced.

After the arbitrary deadline passed without any concessions from the Canadians, let alone a finalized deal, Trump once again threatened to exclude Canada from the new NAFTA via Twitter.

While his blustering included a threat to end NAFTA entirely, it is all bark and no bite. What trade scholars like me know is that Trump does not have the upper hand in these negotiations.

Interest groups on both sides of the border will ensure that Canada is in the deal – and legally, it would be cumbersome to do a deal that excludes the Canadians.

Interest groups usually win

In his tweets, Trump claimed that there was “no political necessity to keep Canada in the new NAFTA deal.” However, Canada does not seem to be feeling any sense of impending doom – and for good reason.

After Trump’s threats, Prime Minister Justin Trudeau said a compromise “will hinge on whether there is ultimately a good deal for Canada. No NAFTA deal is better than a bad NAFTA deal.”

In my own research I’ve studied how interest groups influence trade policy, specifically in World Trade Organization dispute initiation and litigation. My work illustrates how countries depend upon industry interest groups – and in some cases, companies themselves – to help shape trade policy.

This research borrows from the work of Princeton politics professor Andrew Moravcsik, who theorized that countries – especially democratic ones – primarily represent the preferences of domestic interest groups when engaged in international negotiations, and will rarely kowtow to the desires of trading partners.

In other words, governments want to stay in power and get re-elected. They need votes and campaign contributions to meet that goal, and corporate and industry interest groups can provide both.

That’s why Trudeau continues to insist that any deal with the U.S. and Mexico protect Canadian middle-class jobs by shielding domestic dairy and poultry production, and why he demands that the new NAFTA include a so-called cultural exemption safeguarding domestic television and radio from takeovers by U.S. media conglomerates.

Trudeau and his team of negotiators are not going to dance to the tune of Trump’s tweets. Rather, they’re following the standard political economist playbook: protect those industries and sectors that can help carry Trudeau to another win in the federal elections 13 months from now.

Americans first

On the other side of the table, there’s Trump.

He professes to have American interests at heart in his hard-line handling of Canada in these ongoing NAFTA negotiations. And he has framed NAFTA as a disaster and an agreement that has brought “the U.S. … decades of abuse” at the hands of Canada.

What Trump hesitates to acknowledge is the interdependence between the U.S. and Canadian economies. Both countries need each other.

Canada is the United States’ second-largest trading partner, with US$673 billion in total goods and services crossing the border in 2017. The U.S. Department of Commerce estimates that exports to Canada support over 1.5 million jobs, heavily concentrated in border states that went for Trump in the 2016 presidential election.

Take the automobile industry as an example: If Canada were left out of NAFTA, car prices could rise in the U.S. due to proposed new tariffs on Canadian autos. And Canadians are already discussing a boycott of American goods if negotiations turn sour, which could also lead to a decline in the sales of American cars.

If consumers in Canada and other countries are buying fewer American cars as a result of these trade disputes, that could lead to layoffs. The possible downward spiral that could result has both the auto industry and labor unions concerned about a NAFTA with no Canada.

And it’s not just cars. If Canada is kicked out of the new NAFTA, Americans would see a number of industries negatively affected, from oil production to retail stores to tourism, as Canadians choose to buy more domestic products to avoid American ones.

Basically, a NAFTA without Canada is a lose-lose situation for all involved. And while Trump may be willing to ignore the wishes of some interest groups since he has two years before he faces re-election, most lawmakers in Congress don’t have that luxury as the midterms fast approach.

It’s hard for me to imagine that Congress would support a NAFTA minus Canada, regardless of who controls the House in January 2019.

Three is not a crowd

The idea of throwing NAFTA out altogether is, in my view, ludicrous, as industry on both sides of the border will not stand for it and Congress will not support it.

Trump is also legally restricted. While he has been granted fast-track authority by Congress to renegotiate NAFTA, this only allows Trump to ask lawmakers to approve a deal, by an up or down vote, that includes all three countries. If the current negotiations fail and Trump presents Congress with a trade deal with Mexico alone, the process will be slow and could be held up significantly – especially if there’s a change in control of the House.

Considering manufacturing interests were pro-NAFTA in 1994 and continue to derive benefits from the treaty today, North Americans can expect whatever replaces that deal to include Canada well into the future, regardless of how long these negotiations take.

Key internet connections and locations at risk from rising seas

September 7, 2018

Author

Carol Barford

Associate Scientist; Director, Center for Sustainability and the Global Environment, University of Wisconsin-Madison

Disclosure statement

Carol Barford receives research funding from DOE, NASA, NSF, and USDA.

Despite whimsical ads about computing “in the cloud,” the internet lives on the ground. Data centers are built on land, and most of the physical elements of the internet – such as the cables that connect households to internet services and the fiber optic strands carrying data from one city to another – are buried in plastic conduit under the dirt. That system has worked quite well for many years, but there may be less than a decade to adapt it to the changing global climate.

Most of the current internet infrastructure in the U.S. was built in the 1990s and 2000s to serve major population centers on the coasts. As new connections were built, companies built them alongside roads and railroads – which often hug coastlines. Recent mapping of the physical internet by computer scientists Paul Barford and Ram Durairajan identified exactly which key network locations sit close to the shore. Building on that work, I joined them to study the risk to the internet from rising oceans.

The basic approach was simple: Take the map of internet hardware and line it up with a map of projected sea-level rise to see where network infrastructure may be underwater in the coming years.
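In computational terms, that overlay boils down to a point-in-polygon test: does a given facility’s location fall inside a projected inundation zone? Here is a minimal sketch using the shapely library; all coordinates and facility names are invented for illustration, not drawn from our dataset.

```python
# Minimal sketch of the overlay: flag network facilities whose coordinates
# fall inside a projected inundation zone. All coordinates and names are
# hypothetical; a real analysis would use full GIS layers for both datasets.

from shapely.geometry import Point, Polygon

# Hypothetical inundation polygon (lon, lat pairs) from a sea-level model.
inundation_zone = Polygon([
    (-74.05, 40.60), (-73.90, 40.60),
    (-73.90, 40.75), (-74.05, 40.75),
])

# Hypothetical network assets: (name, lon, lat).
facilities = [
    ("Landing station 1",   -74.00, 40.65),
    ("Data center 2",       -73.80, 40.70),
    ("Point of presence 3", -73.95, 40.72),
]

for name, lon, lat in facilities:
    at_risk = inundation_zone.contains(Point(lon, lat))
    print(f"{name}: {'at risk' if at_risk else 'likely safe'}")
```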

Understanding the threats

Where it’s not underground, much of the internet is actually underwater already: A physical web of undersea cables carries massive amounts of data between continents in milliseconds. Those cables are protected with tough steel housings and rubber cladding to protect them from the ocean. They connect to the land network, though, which was not designed with water in mind. If the plastic pipes carrying wires underground were to flood, the water could freeze and thaw, damaging or even breaking wires. It could also corrode electronics and interrupt fiber optic signals.

To identify what was now dry but will one day likely get wet, we had to sort through a wide range of potential scenarios, mainly varying estimates of how human-generated greenhouse gas emissions will change over time. We settled on the one created by the National Oceanic and Atmospheric Administration and recommended for analysis of situations involving expensive long-term investments, like for infrastructure projects.

Based on the assumption that global greenhouse gas emission trends will continue in their current relationship to human population and economic activity, that model expects global average sea levels to rise one foot by 2030, and a further five feet by 2100.

Although this may sound improbably high, a more recent report by NOAA also includes an even higher “extreme” scenario, which takes into account the mounting evidence of more rapid melting in Greenland and Antarctic glaciers.

The effects of rising waters

What we found was not particularly surprising, but it was alarming: The internet is very vulnerable to damage from sea-level rise between now and 2030. Thousands of miles of cables now safely on dry land will be underwater. Dozens of ocean-cable landing stations will be too, along with hundreds of data centers and network-interconnection locations called “points of presence.”

There will be further damage by 2100 – though the vast majority of the danger is between now and 2030. In some metropolitan areas, between one-fifth and a quarter of local internet links are at risk, and nearly one-third of intercity cables.

Our study also found that risks to internet infrastructure are not the same everywhere. New York City and New Jersey are especially vulnerable, in part because they are home to many ocean landing sites and data centers, as well as lots of metro and long-haul cable. In addition, the mid-Atlantic U.S. coast is sinking up to an inch per decade. The Atlantic coast is also relatively close to the Greenland ice cap, which does have regional effects on sea level.

Questions for the future

It’s important to note that these risks do not necessarily mean that U.S. internet service will get worse or be disconnected by 2030. For one thing, the companies that operate these cables and facilities may choose to relocate them to safer ground – but the costs of that may be passed on to customers.

And even if companies don’t move their equipment, the internet has many redundant pathways for data. Even a single email message is broken into small pieces that may follow separate paths to the recipient’s computer. The systems that manage this routing could potentially handle the additional traffic around wet areas – but that may affect service quality.

We’re planning to study the potential effects on the network and its users in future research. For now, though, it’s safe to say that internet service in several U.S. coastal cities will need to adapt to sea-level rise, and someone will need to pay for it.

Discovering the ancient origin of cystic fibrosis, the most common genetic disease in Caucasians

September 7, 2018

Author

Philip Farrell

Professor of Pediatrics and Population Health Sciences, University of Wisconsin-Madison

Disclosure statement

Philip Farrell receives funding from the NIH and CF Foundation.

Imagine the thrill of discovery when more than 10 years of research on the origin of a common genetic disease, cystic fibrosis (CF), results in tracing it to a group of distinct but mysterious Europeans who lived about 5,000 years ago.

CF is the most common, potentially lethal, inherited disease among Caucasians – about one in 40 carry the so-called F508del mutation. Typically only beneficial mutations, which provide a survival advantage, spread widely through a population.

CF hinders the release of digestive enzymes from the pancreas, which triggers malnutrition, causes lung disease that is eventually fatal and produces high levels of salt in sweat that can be life-threatening.

In recent years, scientists have revealed many aspects of this deadly lung disease, leading to routine early diagnosis in screened babies, better treatments and longer lives. On the other hand, the scientific community hasn’t been able to figure out when, where and why the mutation became so common. Collaborating with an extraordinary team of European scientists – including David Barton in Ireland, Milan Macek in the Czech Republic and, in particular, a group of brilliant geneticists in Brest, France, led by Emmanuelle Génin and Claude Férec – we believe that we now know where and when the original mutation arose, and in which ancient tribe of people.

We share these findings in an article in the European Journal of Human Genetics which represents the culmination of 20 years’ work involving nine countries.

What is cystic fibrosis?

My quest to determine how CF arose and why it’s so common began soon after scientists discovered the CFTR gene causing the disease in 1989. The most common disease-causing mutation of that gene was called F508del. Two copies of the mutation – one inherited from the mother and the other from the father – caused the lethal disease. But inheriting just a single copy caused no symptoms, and made the person a “carrier.”

I had been employed at the University of Wisconsin since 1977 as a physician-scientist focusing on the early diagnosis of CF through newborn screening. Before the gene discovery, we identified babies at high risk for CF using a blood test that measured levels of a protein called immunoreactive trypsinogen (IRT). High levels of IRT suggested the baby had CF. When I learned of the gene discovery, I was convinced that it would be a game-changer for both screening test development and epidemiological research.

That’s because with the gene we could offer parents a more informative test. We could tell them not just whether their child had CF, but also whether they carried two copies of a CFTR mutation, which caused disease, or just one copy, which made them a carrier.

Parents carrying one good copy of the CF gene (R) and one bad copy of the mutated CF gene (r) are called carriers. When both parents transmit a bad copy of the CF gene to their offspring, the child will suffer from cystic fibrosis. Children who inherit just one bad copy will be carriers like their parents and can transmit the gene to their children.
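Those inheritance odds can be combined with the one-in-40 carrier rate mentioned earlier to get a rough sense of how common the disease should be. The following is a textbook Hardy-Weinberg calculation, assuming random mating – an illustration, not a figure from our study.

```python
# Back-of-the-envelope incidence estimate for F508del, assuming random
# mating and a 1-in-40 carrier rate (textbook Hardy-Weinberg arithmetic).

carrier_rate = 1 / 40               # chance a random parent carries one copy
both_carriers = carrier_rate ** 2   # chance both parents are carriers
affected_if_carriers = 1 / 4        # chance a carrier couple's child inherits both copies

incidence = both_carriers * affected_if_carriers
print(f"Expected incidence: about 1 in {1 / incidence:,.0f} births")
# -> about 1 in 6,400 births from this mutation alone
```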

One might ask what the connection is between studying CF newborn screening and learning about the disease’s origin. The answer lies in how our research team in Wisconsin transformed a biochemical screening test using the IRT marker into a two-tiered method called IRT/DNA.

Because about 90 percent of CF patients in the U.S. and Europe have at least one F508del mutation, we began analyzing newborn blood for its presence whenever the IRT level was high. This two-step IRT/DNA screening not only diagnoses patients with the disease; it also identifies tenfold more infants who are genetic carriers.
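The decision logic of a two-tier screen is simple enough to sketch in code. The IRT cutoff below is hypothetical, and real programs test panels of many CFTR mutations rather than F508del alone.

```python
# Schematic of two-tier IRT/DNA newborn screening. The IRT cutoff and the
# single-mutation panel are simplifications for illustration only.

IRT_CUTOFF_NG_ML = 60.0  # hypothetical cutoff; real programs calibrate this

def screen(irt_level, f508del_copies):
    """Return a screening result for one newborn blood sample."""
    if irt_level < IRT_CUTOFF_NG_ML:
        return "screen negative"  # second (DNA) tier not performed
    if f508del_copies == 2:
        return "presumptive CF - refer for diagnostic testing"
    if f508del_copies == 1:
        return "carrier (or CF with a second, untested mutation)"
    return "IRT high, no F508del - follow local protocol"

print(screen(75.0, 1))   # high IRT, one copy -> carrier result
print(screen(40.0, 0))   # normal IRT -> DNA tier never runs
```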

As preconception, prenatal and neonatal screening for CF have proliferated during the past two decades, the many thousands of individuals who discovered they were F508del carriers, and their concerned parents, often raised questions about the origin and significance of carrying this mutation themselves or in their children. Would they suffer with one copy? Was there a health benefit? It has been frustrating for a pediatrician specializing in CF to have no answer for them.

The challenge of finding origin of the CF mutation

I wanted to zero in on when this genetic mutation first started appearing. Pinpointing this period would allow us to understand how it could have evolved to provide a benefit – at least initially – to those people in Europe who had it. To expand my research, I decided to take a sabbatical and train in epidemiology while taking courses in 1993 at the London School of Hygiene and Tropical Medicine.

The timing was perfect because the field of ancient DNA research was starting to blossom. New breakthrough techniques like the Polymerase Chain Reaction made it possible to study the DNA of mummies and other human archaeological specimens from prehistoric burials. For example, early studies were performed on the DNA from the 5,000-year-old Tyrolean Iceman, which later became known as Ötzi.

I decided that we might be able to discover the origin of CF by analyzing the DNA in the teeth of Iron Age people buried between 700 and 100 B.C. in cemeteries throughout Europe.

Using this strategy, I teamed up with archaeologists and anthropologists such as Maria Teschler-Nicola at the Natural History Museum in Vienna, who provided access to 32 skeletons buried around 350 B.C. near Vienna. Geneticists in France collected DNA from the ancient molars and analyzed the DNA. To our surprise, we discovered the presence of the F508del mutation in DNA from three of 32 skeletons.

This discovery of F508del in Central European Iron Age burials radiocarbon-dated to 350 B.C. suggested to us that the original CF mutation may have arisen earlier. But obtaining Bronze Age and Neolithic specimens for such direct studies proved difficult because fewer burials are available, skeletons are not as well-preserved and each cemetery merely represents a tribe or village. So rather than depend on ancient DNA, we shifted our strategy to examine the genes of modern humans to figure out when this mutation first arose.

Why would a harmful mutation spread?

To find the origin of CF in modern patients, we knew we needed to learn more about the signature mutation – F508del – in people who are carriers or have the disease.

This tiny mutation causes the loss of one amino acid from the 1,480-amino-acid chain and changes the shape of a protein on the cell surface that moves chloride in and out of the cell. When this protein is mutated, people carrying two copies of it – one from the mother and one from the father – are plagued with thick sticky mucus in their lungs, pancreas and other organs. The mucus in their lungs allows bacteria to thrive, destroying the tissue and eventually causing the lungs to fail. In the pancreas, the thick secretions prevent the gland from delivering the enzymes the body needs to digest food.

So why would such a harmful mutation continue to be transmitted from generation to generation?

A mutation as harmful as F508del would never have survived among people with two copies of the mutated CFTR gene because they likely died soon after birth. On the other hand, those with one mutation may have a survival advantage, as predicted in Darwin’s “survival of the fittest” theory.

Perhaps the best example of a mutation favoring survival under stressful environmental conditions can be found in Africa, where fatal malaria has been endemic for centuries. The parasite that causes malaria infects the red blood cells, whose major constituent is the oxygen-carrying protein hemoglobin. Individuals who carry the normal hemoglobin gene are vulnerable to this mosquito-borne disease. But those who carry just one copy of the mutated “hemoglobin S” gene are protected from severe malaria. However, two copies of the hemoglobin S gene cause sickle cell disease, which can be fatal.

Here there is a clear advantage to carrying one mutant gene – in fact, about one in 10 Africans carries a single copy. Thus, for many centuries an environmental factor has favored the survival of individuals carrying a single copy of the sickle hemoglobin mutation.

Similarly we wondered whether there was a health benefit to carrying a single copy of this specific CF mutation during exposures to environmentally stressful conditions. Perhaps, we reasoned, that’s why the F508del mutation was common among Caucasian Europeans and Europe-derived populations.

Clues from modern DNA

To figure out the advantage of transmitting a single mutated F508del gene from generation to generation, we first had to determine when and where the mutation arose so that we could uncover the benefit this mutation conferred.

We obtained DNA samples from 190 CF patients bearing F508del and their parents residing in geographically distinct European populations from Ireland to Greece plus a Germany-derived population in the U.S. We then identified a collection of genetic markers – essentially sequences of DNA – within the CF gene and flanking locations on the chromosome. By identifying when these mutations emerged in the populations we studied, we were able to estimate the age of the most recent common ancestor.

Next, by rigorous computer analyses, we estimated the age of the CF mutation in each population residing in the various countries.
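The full computation is more rigorous than can be shown here, but the intuition behind dating a mutation from modern DNA can be illustrated with a classic haplotype-decay clock: each generation, recombination strips the ancestral marker alleles off a fraction of the mutation-bearing chromosomes, so the share still carrying them tells you roughly how many generations have passed. The inputs below are invented for illustration, not our study’s measurements.

```python
import math

# Classic haplotype-decay clock (a simplification of what dedicated dating
# software does): if a fraction p of present-day mutation-bearing
# chromosomes still carry the ancestral allele at a marker a recombination
# fraction c away, then p ~= (1 - c)**g after g generations.

def generations_since_mutation(p_ancestral, recomb_fraction):
    return math.log(p_ancestral) / math.log(1.0 - recomb_fraction)

# Hypothetical inputs: 40% of chromosomes retain the ancestral allele at a
# marker about 0.5 centimorgans away (recombination fraction ~0.005).
g = generations_since_mutation(0.40, 0.005)
years = g * 25  # assuming roughly 25 years per generation
print(f"~{g:.0f} generations, roughly {years:,.0f} years")
# -> ~183 generations, roughly 4,570 years (for these invented inputs)
```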

We then determined that the most recent common ancestor lived between 4,600 and 4,725 years ago, and that the mutation arose in southwestern Europe, probably in settlements along the Atlantic Ocean and perhaps in the region of France or Portugal. We believe that the mutation spread quickly from there to Britain and Ireland, and then later to central and southeastern European populations such as Greece, where F508del was introduced only about 1,000 years ago.

Who spread the CF mutation throughout Europe?

Thus, our newly published data suggest that the F508del mutation arose in the early Bronze Age and spread from west to southeast Europe during ancient migrations.

Moreover, taking the archaeological record into account, our results allow us to introduce a novel concept by suggesting that a population known as the Bell Beaker folk were the probable migrating population responsible for the early dissemination of F508del in prehistoric Europe. They appeared somewhere in Western Europe at the transition from the Late Neolithic period, around 4000 B.C., to the Early Bronze Age in the third millennium B.C. They were distinguished by their ceramic beakers, their pioneering of copper and bronze metallurgy north of the Alps and their great mobility. All studies, in fact, show that they migrated heavily, traveling all over Western Europe.

Over approximately 1,000 years, a network of small families and/or elite tribes spread their culture from west to east into regions that correspond closely to the present-day European Union, where the highest incidence of CF is found. Their migrations are linked to the advent of Western and Central European metallurgy, as they manufactured and traded metal goods, especially weapons, while traveling over long distances. It is also speculated that their travels were motivated by establishing marriage networks. Most relevant to our study is evidence that they migrated in a direction and over a time period that fit well with our results. Recent genomic data suggest that both migration and cultural transmission played a major role in diffusion of the “Beaker Complex” and led to a “profound demographic transformation” of Britain and elsewhere after 2400 B.C.

Determining when F508del was first introduced in Europe and discovering where it arose should provide new insights about the high prevalence of carriers – and whether the mutation confers an evolutionary advantage. For instance, Bronze Age Europeans, while migrating extensively, were apparently spared exposure to endemic infectious diseases or epidemics; thus protection from an infectious disease through this genetic mutation, as with the sickle cell mutation, seems unlikely.

As more information on Bronze Age people and their practices during migrations becomes available through archaeological and genomics research, more clues about environmental factors that favored people who had this gene variant should emerge. Then we may be able to answer questions from patients and parents about why they have a CFTR mutation in their family and what advantage it endows.

Fossil fuel divestment debates on campus spotlight the societal role of colleges and universities

September 7, 2018

Author

Jennie C. Stephens

Dean’s Professor of Sustainability Science & Policy and Director, School of Public Policy & Urban Affairs, Northeastern University

Disclosure statement

Jennie C. Stephens receives funding from the National Science Foundation. She is affiliated with New England Women in Energy and the Environment, the Union of Concerned Scientists and Mothers out Front.

As a new academic year begins after a summer of deadly heat waves, wildfires, droughts and floods, many college students and faculty are debating whether and how to get involved in climate politics.

Climate advocacy has become well established on U.S. campuses over the past decade, in diverse forms. More than 600 colleges and universities have signed the American College and University Presidents’ Climate Commitment. Schools are expanding interdisciplinary teaching and research in environmental studies, sustainability science and climate resilience, and investing in “greening” their campuses. And many activists on campuses around the country are participating in global campaigns like “Rise for Climate, Jobs and Justice” and “Keep it in the Ground.”

One of the most controversial strategies is campaigning for schools to divest their holdings in fossil fuel companies. Campus divestment is widely viewed as mainly a student cause. But when I analyzed the movement with Peter Frumhoff of the Union of Concerned Scientists and Yale (now Stanford) graduate student Leehi Yona, we found widespread faculty support for divestment. For example, in a survey at Harvard in spring 2018, 67 percent of faculty respondents supported divestment, while only 9 percent were opposed and 24 percent were neutral.

So far, however, only about 150 campuses worldwide have committed to fossil fuel divestment – and less than a third of those are in the United States. Why so few? I see two reasons. First, divestment is controversial because it acknowledges the need for radical change. Second, there is a disconnect in institutional priorities between administrators on one side and faculty and students on the other side.

Instead of divesting its fossil fuel holdings, Harvard University has committed to becoming fossil fuel-neutral by 2026 and fossil fuel-free by 2050.

A growing global movement

Fossil fuel divestment is intended to stigmatize the industry and hold companies accountable for opposing action to slow climate change and for their strategic misinformation campaign designed to confuse the public about climate science and the risks of climate change.

To date, over 800 institutions with assets valued at over US$6 trillion have committed to some form of fossil fuel divestment. They include the Rockefeller Brothers Fund, the Guardian Media Group and the World Council of Churches.

New York City has set a goal of divesting its pension funds from fossil fuel companies by 2023. And in July 2018, the Irish parliament passed a bill making Ireland the first country in the world to divest from fossil fuels.

Student and faculty support

Our analysis of campus support for divestment focused on 30 colleges and universities in the United States and Canada. We reviewed the number and type of faculty at these schools who had signed publicly available letters endorsing fossil fuel divestment. Over 4,550 faculty had taken such positions, representing all major disciplines and fields. They included 30 members of the National Academies of Sciences, Engineering and Medicine and two Nobel laureates. These findings suggest that faculty engagement in the divestment movement is broader than generally realized.

Faculty support reflects concern about fossil fuel companies’ negative influence in our political system, in our increasingly unequal economy and in public understanding of science. Faculty are also concerned about the industry’s direct influence over research and teaching within higher education.

A wide array of U.S. schools, ranging from large state universities to prestigious elite institutions such as Harvard and MIT, have received financial support from individuals or foundations whose wealth comes from fossil fuels. Many professors and students are concerned about how these relationships constrain campus research, inquiry and conversation about responses to climate change and the need for radical change in energy systems.

Resisting calls to divest

So why have leaders at institutions like Harvard, Swarthmore and Middlebury resisted faculty and student calls to divest? Many administrators cite their fiduciary responsibility to maximize returns on endowment investments. However, a recent study that compared the financial performance of investment portfolios with and without fossil fuel companies from 1927 to 2016 found that fossil fuel divestment did not reduce investment portfolio performance.

Administrators also often contend that their school’s investments should not be politicized. They say the endowment is not an appropriate lever for social change. But there is no such thing as an apolitical investment. Every investment does, in fact, influence change in one way or another. Many schools are now implicitly acknowledging this by developing guidelines for socially or environmentally responsible investing.

Senior administrators may also fear alienating important university constituents who are connected to the fossil fuel industry. They may feel a need to protect direct or indirect funding from fossil fuel companies for academic programs, or to maintain a non-threatening environment for board members with fossil fuel interests.

Higher education administrators also resist calls to divest because they recognize the potential for campus activists to call for divesting from other ethically challenged businesses, including tobacco and firearms. As social impact investing grows, it is not clear whether or how fossil fuel energy companies will be integrated.

Colleges and universities as citizens

The core missions of our institutions of higher education are to generate knowledge and educate citizens and leaders. Many schools also embrace a third role: addressing pressing social issues, whether through research and teaching or other strategies – for example, protecting undocumented students from the Trump administration’s aggressive enforcement of immigration law.

Education scholars have argued that all universities transmit powerful educational messages far beyond their specific teaching and research activities. Concepts of “universities as citizens” or “universities as change agents” capture the potential for universities to be active, contributing, influential and responsive members of society. Higher education thought leader Richard Freeland and many others have argued that colleges and universities have a responsibility to cultivate civic responsibility and citizenship via a scholarship of public engagement.

As disruptions linked to climate change become more intense, many faculty and students are asking why their schools are not explicitly incorporating their strategic societal priorities into financial decisions and investment portfolios. Under the Trump administration, standing up against misinformation about climate change takes on greater urgency.

That’s why I believe fossil fuel divestment raises important questions about the changing role and responsibilities of higher education in society. At this moment in human history, education must engage with how to bridge the gap between knowledge and action. Divestment debates are forcing colleges and universities to reconsider how to contribute to a more resilient and sustainable future.

Designing greener streets starts with finding room for bicycles and trees

September 6, 2018

Author

Anne Lusk

Research Scientist, Harvard University

Disclosure statement

Anne Lusk received funding from Helen and William Mazer Foundation to conduct this research.

City streets and sidewalks in the United States have been engineered for decades to keep vehicle occupants and pedestrians safe. If streets include trees at all, they might be planted in small sidewalk pits, where, constrained and starved of water, they live only three to 10 years on average. Until recently, U.S. streets have also lacked cycle tracks – paths exclusively for bicycles between the road and the sidewalk, protected from cars by some type of barrier.

Today there is growing support for bicycling in many U.S. cities for both commuting and recreation. Research is also showing that urban trees provide many benefits, from absorbing air pollutants to cooling neighborhoods. As an academic who has focused on the bicycle for 37 years, I am interested in helping planners integrate cycle tracks and trees into busy streets.

Street design in the United States has been guided for decades by the American Association of State Highway and Transportation Officials (AASHTO), whose guidelines for developing bicycle facilities long excluded cycle tracks. Now the National Association of City Transportation Officials, the Federal Highway Administration and AASHTO have all produced guidelines that support cycle tracks. But even these updated references do not specify how and where to plant trees in relation to cycle tracks and sidewalks.

In a study newly published in the journal Cities and spotlighted in a podcast from the Harvard T. H. Chan School of Public Health, I worked with colleagues from the University of São Paulo to learn whether pedestrians and bicyclists on five cycle tracks in the Boston area liked having trees, where they preferred the trees to be placed and whether they thought the trees provided any benefits. We found that they liked having trees, preferably between the cycle track and the street. Such additions could greatly improve street environments for all users.

Separating pedestrians and cyclists from cars

To assess views about cycle tracks and trees, we showed 836 pedestrians and bicyclists on five existing cycle tracks photomontages of the area they were using and asked them to rate how much they liked each image. The images included configurations such as a row of trees separating the cycle track from the street or trees in planters extending into the street between parked cars. We also asked how effectively they thought the trees a) blocked perceptions of traffic; b) lessened perceptions of pollution exposure; and c) made pedestrians and bicyclists feel cooler.
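As a schematic of how such preference data can be summarised (a sketch with invented responses and labels, not the study's actual data or analysis), one might tally each respondent's top-rated configuration:

    # Tally respondents' top-rated street configurations.
    # The responses and labels below are invented for illustration.
    from collections import Counter

    top_choices = [
        "trees and bushes between track and street",
        "trees between track and street",
        "trees and bushes between track and street",
        "planters between parked cars",
    ]

    counts = Counter(top_choices)
    for option, n in counts.most_common():
        print(f"{option}: {n / len(top_choices):.0%}")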

Respondents strongly preferred photomontages that included trees. The most popular options were to have trees and bushes, or just trees, between the cycle track and the street. This is different from current U.S. cycle tracks, which typically are separated from moving cars by white plastic delineator posts, low concrete islands or a row of parallel parked cars.

Though perceptions do not always match measured conditions, respondents also said that having trees and bushes between the cycle track and the street was the option that best blocked their view of traffic, lessened their sense of exposure to pollution and made them feel cooler.

Factoring in climate change

Many city leaders are looking for ways to combat climate change, such as reducing the number of cars on the road. These goals should be factored into cycle track design. For example, highway engineers should ensure that cycle tracks are wide enough for bicyclists to pass one another, including riders of wide cargo bikes, bikes carrying children or the newer three-wheeled electric bikes used by seniors.

Climate change is increasing stress on street trees, but better street design can help trees flourish. Planting trees in continuous earth strips, instead of isolated wells in the sidewalk, would enable their roots to share nutrients, improving the trees' chances of reaching maturity and their ability to cool the street.

Drought weakens trees and makes them more likely to lose limbs or be uprooted. Street drainage systems could be redesigned to direct water to trees’ root systems. Hollow sidewalk benches could store water routed down from rooftops. If these benches had removable caps, public works departments could add antibacterial or anti-mosquito agents to the water. Gray water could also be piped to underground holding tanks to replenish water supplies for trees.

Thinking more broadly about street design

The central argument against adding cycle tracks with trees to urban streets asserts that cities need this space for parallel-parked cars. But cars do not have to be stored on the side of the road. They also can be stored vertically – for example, in garages, or stacked in mechanical racks on urban lots.

Parking garages could increase occupancy by selling deeded parking spaces to residents who live nearby. Those spaces could provide car owners with a benefit the street lacks: outlets for charging electric vehicles, which rarely are available to people who rent apartments.

Bus rapid transit proponents might suggest that the best use of street width is dedicated bus lanes, not cycle tracks or street trees. But all of these options can coexist. For example, a design could feature a sidewalk, then a cycle track, then street trees planted between the cycle track and the bus lane and in island bus stops. The trees would reduce heat island effects from the expansive hardscape of the bus lane, and bus riders would have a better view.

More urban trees could lead to more tree limbs knocking down power lines during storms. The ultimate solution to this problem could be burying power lines to protect them from high winds and ice storms. Burying lines costs money, and past projects have typically included only the conduit for the cables themselves. When trenches are dug to bury power lines, a parallel trench could be dug for pipes that would supply water and nutrients to street trees. The trees would then grow to maturity, cooling the city and reducing the need for air conditioning.

Climate street guidelines for US cities

To steer U.S. cities toward this kind of greener streetscape, urban scholars and planning experts need to develop what I call climate street guidelines. Such standards would offer design guidance that focuses on providing physiological and psychological benefits to all street users.

Developers in the United States have been coaxed into green thinking through tax credits, expedited review and permitting, design/height bonuses, fee reductions and waivers, revolving loan funds and the U.S. Green Building Council’s Leadership in Energy and Environmental Design rating system. It is time to put equal effort into designing green streets for bicyclists, pedestrians, bus riders and residents who live on transit routes, as well as for drivers.

Why Bobi Wine represents such a big threat to Museveni

August 31, 2018

Authors

Richard Vokes

Associate professor, University of Western Australia

Sam Wilkins

PhD Student in Politics, University of Oxford

Disclosure statement

Richard Vokes is at the University of Western Australia. He has received funding from the Economic and Social Research Council (UK), the Wenner-Gren Foundation (USA), the Royal Society of New Zealand, the British Institute in Eastern Africa, the British Library and the Australian Research Council. He is President of the Australian Anthropological Society and Editor of the Journal of Eastern African Studies.

Sam Wilkins has received funding from the University of Oxford and the British Institute in Eastern Africa. He is affiliated with the University of Melbourne.

Partners

University of Western Australia provides funding as a founding partner of The Conversation AU.

University of Oxford provides funding as a member of The Conversation UK.

Over the past fortnight, Uganda has been convulsed by the fallout from the arrest of opposition MP Robert Kyagulanyi – better known as Afro-beat pop superstar Bobi Wine. His arrest, along with those of other government opponents, led to violent street protests in the capital, Kampala, and other urban centres.

The current upheavals began in mid-August when President Yoweri Museveni, Bobi Wine, and other opposition MPs descended on the north-western town of Arua to campaign in a by-election.

After several hours of raucous campaigning on all sides, the president’s motorcade was attacked with stones as it left the town, allegedly by Bobi Wine’s supporters. Museveni reached his helicopter unharmed. But his security detail returned to Arua and unleashed a wave of violence against the crowds still gathered there.

In the ensuing melee Bobi Wine, five other opposition MPs, two journalists and at least 28 other people were arrested. Bobi Wine’s driver – Yasiin Kawuma – was shot dead. Over the following days, other opposition figures were also arrested.

Almost immediately after news broke of the arrests and Kawuma’s death, street protests erupted in Kampala. These initially centred on the poor neighbourhood of Kamwokya (where Bobi Wine’s studio is located) and Kyadondo East (his constituency), but quickly spread. The unrest worsened as news emerged that Bobi Wine and the other arrested MPs had been badly mistreated in custody. When he finally appeared in court 10 days later he could barely walk.

The growing protests drew a sharp response from the security services. The violence left dozens of people hospitalised, and at least two dead. Journalists writing about the affair have been threatened.

The arrest and intimidation of opposition figures isn't new in Museveni's Uganda. Even so, the speed and severity of the security forces' response was shocking. Their initial reaction was bad enough. But the subsequent escalation and the treason case against Bobi Wine suggest there's more to the story than trigger-happy soldiers.

And there is. Bobi Wine has been released on bail. This may draw a line under recent events — for now. But Museveni’s problems have only just begun, and run deep. He’s facing an increasingly agitated younger voter base, an erosion of the National Resistance Movement’s political model, and the growing prominence of social media in Uganda’s political life. All these factors will only grow over time.

Changing voter profile

In its first two decades of rule, the National Resistance Movement effectively operated as a single party under the “movement system”: all candidates were forced to stand as individuals rather than members of national political parties.

This legacy endures. The “individual” culture of local politics has continued since the National Resistance Movement became a political party in 2005. Its key constituents are rural voters who engage in politics mainly on local issues. They are also old enough to remember the horrific civil war that preceded Museveni’s tenure.

To these voters, removing the president from power is a perilous, even traumatic idea. Ethnographic research we carried out in southern Uganda during the 2016 presidential election campaigns confirms this. It shows that most of Museveni's voters aren't simply coerced or bought off – they don't want him replaced.

There is little reason to think that the old system is collapsing. Rather, the problem for Museveni is that the number of those whose interests and identities it does not cater for is increasing.

This group includes younger voters. They have no memory of the war, have a relatively good education that has led them to want more than the agricultural livelihood of their parents, and stubbornly engage with politics on a national rather than local scale.

They’re not interested in replacing a local MP. They want a new president.

These voters have never been a key constituency for Museveni. Previously their political threat could be dismissed – there weren’t many of them, they were organisationally weak and concentrated in a few urban centres.

But the ground is shifting under the National Resistance Movement’s feet.

Young voters are now scattered across the country, including in the towns of Museveni’s rural southern heartland. The advent of social media makes it easier for them to network and communicate with each other. They can also get around more easily.

Most significantly, their numbers are rising fast. Uganda has one of the youngest populations in the world. Just over 48% of its population is aged 14 or younger, while about one in five (21.16%) is aged between 15 and 24. Only 2% of the population is 65 or older.

So the 36-year-old Bobi Wine is not a threat because he is saying something that no opposition leader has said before. It’s because he has, with considerable skill, positioned himself as a champion of this growing demographic.

Building a movement

Museveni likes to portray his opponents as either divisive tribalists or young hooligans – and worse. Bobi Wine is none of these things, as the erudite public letters he exchanged with Museveni after his election to parliament in 2017 made clear. He has built a wide platform defined by youth more than ethnicity, class, region or religion.

And, critically, a string of recent by-elections across the country (including Arua) have shown that this brand transcends his local constituency.

It’s no coincidence that Bobi Wine’s previous run-in with the law came just five weeks earlier, during a protest in Kampala against Uganda’s controversial new “social media tax”, when the authorities accused him of inciting a riot.

In the period leading up to the Arua by-election, Facebook, Instagram, Twitter, YouTube and WhatsApp all saw a marked uptick in posts about Bobi Wine and his emerging constituency.

Social media has also played a central role after Arua. Images of Bobi Wine and the other opposition MPs’ alleged mistreatment in custody were circulated widely, exacerbating the popular unrest.

News of the general tumult also spread via social media to the Ugandan diaspora, resulting in rallies being held in Berlin, London, Washington DC, and elsewhere.

It was once possible to discuss opposition to Museveni in regional and ethnic terms. But, increasingly, opposition is a generational story. Whether the enduring face of this new politics is Bobi Wine or someone else, Ugandan politics is clearly changing.

Mystery of the cargo ships that sink when their cargo suddenly liquefies

August 29, 2018

Author

Susan Gourvenec

Professor of Offshore Geotechnical Engineering, University of Southampton

Disclosure statement

Susan Gourvenec is affiliated with the Southampton Marine and Maritime Institute at the University of Southampton that receives funding from a range of industry and government organisations related to the marine and maritime sectors.

Partners

University of Southampton provides funding as a member of The Conversation UK.

Think of a dangerous cargo and toxic waste or explosives might come to mind. But granular cargoes such as crushed ore and mineral sands are responsible for the loss of numerous ships every year. On average, ten “solid bulk cargo” carriers have been lost at sea each year for the last decade.

Solid bulk cargoes – defined as granular materials loaded directly into a ship’s hold – can suddenly turn from a solid state into a liquid state, a process known as liquefaction. And this can be disastrous for any ship carrying them – and their crew.

In 2015, the 56,000-tonne bulk carrier Bulk Jupiter rapidly sank around 300km south-west of Vietnam, with only one of its 19 crew surviving. This prompted warnings from the International Maritime Organisation about the possible liquefaction of the relatively new solid bulk cargo bauxite (an aluminium ore).

A lot is known about the physics of the liquefaction of granular materials from geotechnical and earthquake engineering. The vigorous shaking of the earth causes pressure in the ground water to increase to such a level that the soil “liquefies”. Yet despite our understanding of this phenomenon, and the guidelines in place to prevent it occurring, it is still causing ships to sink and taking their crew with them.

Solid bulk cargoes

Solid bulk cargoes are typically “two-phase” materials as they contain water between the solid particles. When the particles can touch, the friction between them makes the material act like a solid (even though there is liquid present). But when the water pressure rises, these inter-particle forces reduce and the strength of the material decreases. When the friction is reduced to zero, the material acts like a liquid (even though the solid particles are still present).
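This behaviour can be summarised with Terzaghi's principle of effective stress, a standard result in soil mechanics (the formula is not given in the article itself). In LaTeX notation:

    \sigma' = \sigma - u         % effective stress = total stress - pore water pressure
    \tau = \sigma' \tan\phi      % frictional shear strength of the granular material

As the pore water pressure u approaches the total stress \sigma, the effective stress \sigma' falls towards zero, the shear strength \tau vanishes, and the cargo flows like a liquid.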

A solid bulk cargo that is apparently stable on the quayside can liquefy because pressures in the water between the particles build up as it is loaded onto the ship. This is especially likely if, as is common practice, the cargo is loaded with a conveyor belt from the quayside into the hold, which can involve a fall of significant height. The vibration and motion of the ship from the engine and the sea during the voyage can also increase the water pressure and lead to liquefaction of the cargo.

When a solid bulk cargo liquefies, it can shift or slosh inside a ship’s hold, making the vessel less stable. A liquefied cargo can shift completely to one side of the hold. If it regains its strength and reverts to a solid state, the cargo will remain in the shifted position, causing the ship to permanently tilt or “list” in the water. The cargo can then liquefy again and shift further, increasing the angle of list.

At some point, the angle of list becomes so great that water enters the hull through the hatch covers, or the vessel is no longer stable enough to recover from the rolling motion caused by the waves. Water can also move from within the cargo to its surface as a result of liquefaction, and the subsequent sloshing of this free water can further undermine the vessel’s stability. Unless the sloshing can be stopped, the ship is in danger of sinking.

The International Maritime Organisation has codes governing how much moisture is allowed in solid bulk cargo in order to prevent liquefaction. So why does it still happen?

The technical answer is that the existing guidance on stowing and shipping solid bulk cargoes is too simplistic. Liquefaction potential depends not just on how much moisture is in a bulk cargo but also on other material characteristics, such as the particle size distribution, the ratio of the volume of solid particles to water and the relative density of the cargo, as well as on the method of loading and the motions of the vessel during the voyage.
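The moisture rule itself reduces to a single comparison. Under the IMO's code, cargoes are assigned a transportable moisture limit (TML), conventionally set at 90% of the moisture content at which the material begins to flow. A minimal sketch of that check, with hypothetical values, makes the point about its simplicity:

    # The moisture-only check implied by current guidance.
    # Values below are hypothetical; the TML is conventionally 90% of
    # the flow moisture point (FMP), the moisture content at which the
    # material starts to behave like a fluid.
    def transportable_moisture_limit(fmp_pct):
        return 0.9 * fmp_pct

    def passes_moisture_check(measured_pct, fmp_pct):
        """True if measured moisture is at or below the TML."""
        return measured_pct <= transportable_moisture_limit(fmp_pct)

    # Hypothetical bauxite cargo: FMP of 12%, measured moisture of 10.5%.
    print(passes_moisture_check(10.5, 12.0))  # True: 10.5% is below the 10.8% TML

A cargo can pass this single-number test and still liquefy, because the test says nothing about particle sizes, loading method or the vessel's motions.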

The production and transport of new materials, such as bauxite, and increased processing of traditional ores before they are transported, means more cargo is being carried whose material behaviour is not well understood. This increases the risk of cargo liquefaction.

Commercial agendas also play a role. For example, pressure to load vessels quickly encourages faster, rougher loading, even though it risks raising the water pressure in the cargoes. And pressure to deliver the same tonnage of cargo as was loaded may discourage the crew from draining water from cargoes during the voyage.

To tackle these problems, the shipping industry needs to better understand the material behaviour of solid bulk cargoes now being transported and prescribe appropriate testing. New technology could help. Sensors in a ship’s hold could monitor the water pressure of the bulk cargo. Or the surface of the cargo could be monitored, for example using lasers, to identify any changes in its position.

The challenge is developing a technology that is cheap enough, quick to install and robust enough to survive loading and unloading of the cargo. If these challenges can be overcome, combining data on the water pressure and movement of the cargo with information on the weather and the ship’s movements could produce a real-time warning of whether the cargo was about to liquefy.
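A sketch of what the core of such a warning might look like, assuming hypothetical sensors, thresholds and readings (this is an illustration, not a description of any existing system):

    # Threshold-based liquefaction warning. All names, limits and
    # readings here are assumptions for the sake of illustration.
    def liquefaction_alert(pore_pressure_kpa, pressure_limit_kpa,
                           surface_shift_cm, shift_limit_cm):
        """True if pore pressure or cargo surface movement exceeds
        its assumed safe limit."""
        return (pore_pressure_kpa > pressure_limit_kpa
                or surface_shift_cm > shift_limit_cm)

    # Hypothetical readings from hold pressure sensors and a laser scan.
    if liquefaction_alert(pore_pressure_kpa=85.0, pressure_limit_kpa=80.0,
                          surface_shift_cm=2.0, shift_limit_cm=5.0):
        print("Warning: cargo may be approaching liquefaction.")

A real system would fuse these readings with weather forecasts and the ship's motion data, as described above.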

The crew could then act to prevent the water pressure in the cargo rising too much, for example, by draining water from the cargo holds (to reduce water pressure) or changing the vessel’s course to avoid particularly bad weather (to reduce ship motions). Or, if that were not possible, they could evacuate the vessel. In this way, the phenomenon of solid bulk cargo liquefaction could be overcome, and fewer ships and crews would be lost at sea.

50 shades whiter: what you should know about teeth whitening

September 5, 2018

Author

Alexander Holden

Lecturer in Dental Ethics, Law and Professionalism, University of Sydney

Disclosure statement

Alexander Holden works as a dentist in general practice in addition to his listed academic appointment.

Partners

University of Sydney provides funding as a member of The Conversation AU.

The effect of teeth whitening was discovered quite by accident. In the past, dentists tried to treat gum disease with mouth rinses containing hydrogen peroxide. They noticed teeth became whiter over time following use of these mouthwashes.

In modern-day Australia, teeth whitening is offered by dentists, other dental practitioners and by cosmetic businesses on the high street. Many teeth-whitening products are also available over the counter for home application, including gels and strips. So which option is best and safest?

How do they work?

Teeth whitening is also commonly called teeth bleaching, mainly because the active ingredient in most products is hydrogen peroxide, or a compound that releases hydrogen peroxide when mixed with water or air.

Teeth whitening is somewhat controversial; different countries have different rules regarding the permitted concentrations of hydrogen peroxide released by products and who may provide these.

In Australia, only a dental practitioner may provide products that release more than 6% hydrogen peroxide. In New Zealand, non-dentists may apply up to 12% hydrogen peroxide to whiten teeth. In the UK, it’s illegal for anyone other than dentists to use concentrations higher than 0.1%.

Despite lay practitioners in New Zealand using far higher concentrations of hydrogen peroxide, we don’t really have any evidence of harm to the public from this difference in policy.

Dentists in Australia are able to use high concentrations of hydrogen peroxide. Some in-chair whitening systems use 35% hydrogen peroxide. At this concentration, hydrogen peroxide can effectively permeate deep into the enamel structure. Weaker concentrations act only at the surface of the tooth enamel.

While hydrogen peroxide is the active ingredient in most whitening products, some teeth-whitening gels contain carbamide peroxide or sodium perborate. Both of these agents break down to release hydrogen peroxide.
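As a rough guide to the strength of such products (a commonly cited conversion, not stated in the article): carbamide peroxide is about one-third hydrogen peroxide by weight, since the hydrogen peroxide portion contributes roughly 34 of the compound's 94 grams per mole. So a 10% carbamide peroxide gel releases approximately 10% × 34/94 ≈ 3.6% hydrogen peroxide.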

What’s the difference?

So what’s the difference between teeth whitening by a dentist, whitening in a cosmetic setting and do-it-yourself home kits?

Recently, the lines between these categories have blurred somewhat. Many dentists now offer teeth-whitening treatments that are then taken home and used by consumers. Non-dentists are also offering “in-chair” whitening treatments, often with products that require light activation. Both of these methods work by releasing hydrogen peroxide, but in-chair systems tend to use products that release higher levels of hydrogen peroxide, especially those used by dentists.

Lab-based research suggests in-chair whitening by dentists increases the strength of enamel, making it more resistant to erosion from acid. In contrast, home whitening was shown to increase the loss of mineral content within enamel, which over time may lead to weakness.

The researchers suggest home systems should be used under the supervision of a dentist. Whitening products bought over the counter, when used excessively, could lead to damage to teeth over time.

The main difference is that dentists will take a mould of your teeth and use it to make a whitening tray. This ensures the treatment touches your teeth only and not your gums. It’s important that hydrogen peroxide isn’t left in contact with the gums for long, as it can cause burns.

Many outlets offering teeth whitening claim to use “peroxide-free” products. Consumers should ask what these actually contain. Products might be free of peroxide before use, but then release hydrogen peroxide when activated.

Products that genuinely don’t contain or release hydrogen peroxide are unlikely to be very effective in whitening teeth.

Enamel that is bleached by DIY whitening products may be vulnerable to damage from abrasive toothpastes. Prolonged use of home whitening products may weaken the surface of the enamel, making it more vulnerable to acid damage or wear.

Once teeth have been whitened, you don’t have to keep on whitening them, but the effects will gradually fade. They usually last six to 12 months, depending on brushing habits and diet.

It doesn’t take too much searching to find a huge range of home remedies for teeth whitening. From rubbing banana peel on your teeth, to brushing with a mixture of lemon juice and bicarbonate of soda, there are lots of quick-fix teeth-whitening solutions. While many of these home remedies simply don’t work, many contain acids, sugars and powerful abrasives, which may lead to tooth damage and poorer dental health if used routinely.

Toothpastes that contain charcoal have increased in popularity in recent times. Some promote these products as beneficial for oral health and teeth whitening. However, a recent review in the Journal of the American Dental Association found insufficient evidence to support these claims.

Before you go

Before you undergo any course of teeth whitening, it would be a good idea to have a check-up to make sure your mouth is healthy. It’s quite common for teeth-whitening products to cause sensitivity, though this is usually temporary. Identifying any dental health issues beforehand will reduce the risk of unpleasant surprises.

One limitation of any type of whitening treatment is that dental restorations, such as tooth-coloured fillings, veneers and crowns (caps), won’t change colour, as the whitening only takes effect on natural teeth.

This can result in a mismatch between the whitened natural teeth and any such restorations. It’s a factor to consider when having teeth whitened by someone without formal dental training, as they might not be able to reliably identify which teeth will not whiten.

Treatment by dentists typically costs more, but comes with more assurances for patients. Dentists can use stronger products, are more likely to understand what is achievable with each type of whitening (office-based or home) and can also help more effectively if anything goes wrong.

