Medtronic co-founder who created wearable pacemaker dies
Monday, October 22
MINNEAPOLIS (AP) — Earl Bakken, an electronics repairman who created the first wearable external pacemaker and co-founded one of the world’s largest medical device companies, Medtronic, has died. He was 94.
Bakken, who also commercialized the first implantable pacemaker in 1960, died Sunday at his home in Hawaii, Medtronic said in a statement. It didn’t give a cause of death.
Bakken and his brother-in-law, Palmer Hermundslie, formed Medtronic in 1949 and turned it from a struggling company they ran out of the Hermundslie family’s Minneapolis garage into a multinational medical technology powerhouse.
“The contributions Earl made to the field of medical technology simply cannot be overstated,” said Medtronic’s chairman and CEO, Omar Ishrak. “His spirit will live on with us as we work to fulfill the mission he wrote nearly 60 years ago — to alleviate pain, restore health, and extend life.”
Bakken, who led the company for 40 years, was fitted with his own pacemaker in 2001 and a replacement in 2009.
One of the men who followed Bakken as chief executive, Harvard management professor Bill George, said Bakken made sure that Medtronic’s future leaders followed the company’s original values, which are laid out in its mission statement.
“He was a remarkable human being, a visionary 25 years ahead of his time,” George told the Star Tribune. “He was a graduate of the University of Minnesota, the pioneer of one of our strongest industries, and really stood for all the values that Minnesota stands for.”
Bakken and Hermundslie, who was married to the sister of Bakken’s wife at the time, formed Medtronic to repair and modify hospital equipment. The company mixed fixing TVs and selling other companies’ medical devices with its most important work: custom-made medical devices.
In 1958, University of Minnesota heart surgeon Dr. C. Walton Lillehei asked Bakken to make a battery-powered pacemaker that could keep babies with irregular heartbeats alive. Until then, patients with irregular heartbeats had to plug their cumbersome external devices into wall outlets, limiting their movement and leaving them susceptible to power outages, according to the company.
Bakken delivered his device to the university’s animal lab for testing and was stunned to see it attached to one of Lillehei’s pediatric patients the next day.
Medtronic has 86,000 employees worldwide. Its operational headquarters are still in the Minneapolis area, but it moved its corporate headquarters to Dublin a few years ago to benefit from Ireland’s lower corporate tax rate.
Bakken is survived by his wife, Doris J. Bakken, his sister, several children, grandchildren and great-grandchildren.
We tested women and men for breast cancer genes – only 18 percent knew they had it
October 17, 2018
Author: Michael Murray, Professor of Genetics and Director for Clinical Operations in the Center for Genomic Health, Yale University
Disclosure statement: Michael Murray received funding from Regeneron, InVitae, and Merck in the past.
There are diseases and health conditions that are essentially invisible to us until it is too late.
When those problems are life-threatening, such as cancer, and if there is a period when something could be done, then those are instances where an effective screening strategy could prevent illness and save lives.
Once an individual tests positive for one of the “breast cancer” genes, called BRCA1 and BRCA2, then screening – mammograms and MRI – and prevention – surgery and medicines – can be used to reduce risk of disease and improve outcomes. In this case the risk is not just for breast cancer; these genes also raise the risk for ovarian, prostate and pancreatic cancer.
Perhaps just as important as encouraging screening and prevention in the person who gets the news is the opportunity to alert family members. Parents, siblings and adult children each have a 50-50 chance of carrying the same gene change and might benefit from knowing of the positive screening result. That’s because when one family member tests positive, all close relatives should also consider testing. With BRCA1 and BRCA2, women face the highest personal cancer risk, but carriers of both sexes have increased cancer risks.
I helped lead a recent study of 50,000 people in Pennsylvania whose DNA had been collected and tested for disease-causing versions of the breast cancer genes BRCA1/2. We discovered that some 267 people carried such variants. But what surprised us was that only 18 percent of these people were already aware of this.
This study highlights the fact that our current approaches to finding individuals with the cancer risk associated with BRCA1 and BRCA2 miss the majority of individuals who carry those genes, and further research into the use of DNA-based screening is needed if we are to address the missed opportunities to intervene.
The value of genetic testing
There are lots of ways to screen. As part of standard primary care, physicians check blood pressure and recommend colonoscopies for nearly everyone over 50 years old. High blood pressure is invisible, but if not addressed it can lead to more strokes and heart attacks. Polyps in the colon are not visible without a colonoscopy or similar study, and if not removed early some will progress to colon cancer.
In 2013 I left Harvard Medical School to join Geisinger Health System in Pennsylvania, where we saw a unique opportunity to explore ways to use genetic testing in standard primary care to improve the health of the average person. In early 2014, Geisinger announced a research collaboration with Regeneron Pharmaceuticals to work together on discovering disease-causing genes and new targets for drug therapies. This project relied on tens of thousands of patient volunteers choosing to link their electronic health records to their DNA code for research. This allowed researchers to carefully study the combined DNA code and health care records to identify harmful genetic changes that raised the risk of particular disorders, as well as protective genetic changes that lowered them.
Among the Geisinger leadership team, a group of us recognized the opportunity to give something back to these volunteers by using DNA code as the basis for a screening program. The screening was designed to identify genetic variations in the volunteer’s DNA that might increase their risk of disease and that health care providers could do something about, and then deliver a report to the patient and their physician. In May of 2015 we delivered our first result, and there is now a well-structured system set up to deliver care based on this kind of result to thousands of Geisinger patients who volunteered for the project. Now at Yale, I co-authored the recent paper that reported results from one of the most important projects of my time at Geisinger, and that serves as a template for some of the work we will do here in Connecticut.
Over the last 20 years, screening strategies to identify risk related to disease-associated changes in the breast cancer genes BRCA1 and BRCA2 have been developed. Those strategies have focused on the predictive value of family history of cancer.
A lot of careful research has proven that a strong family history of cancer greatly increases the likelihood of a related person having disease-causing variations in one of their genes, and the brave public stories of affected people have helped us all to appreciate that connection.
We also know that family histories don’t always get discussed in enough detail to alert someone that they need to get tested. Then there are those who carry DNA changes that increase their risk of disease but are unaware of it because they happen not to have a telltale family history.
New findings with BRCA1/BRCA2 screening
Our study showed that of 50,000 people in Pennsylvania whose DNA had been tested for the BRCA1/2 genes as part of the MyCode project, 267 people carried disease-causing variants, but only 18 percent were already aware of this. The other 82 percent were learning it for the first time through the DNA-based screening strategy that we applied.
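A brief back-of-the-envelope sketch in Python makes the scale of that gap concrete; the variable names are ours for illustration, and the figures are simply the study’s reported numbers, not its actual analysis:

```python
# Reported figures: 267 of 50,000 screened participants carried a
# disease-causing BRCA1/2 variant; only 18 percent already knew.
total_screened = 50_000
carriers = 267
fraction_already_aware = 0.18

# Roughly 1 carrier in every 187 people screened.
one_in_n = round(total_screened / carriers)

# About 48 carriers already knew; about 219 learned it for the first time.
already_aware = round(carriers * fraction_already_aware)
newly_informed = carriers - already_aware

print(f"Carrier frequency: about 1 in {one_in_n}")
print(f"Already aware: {already_aware}; newly informed: {newly_informed}")
```

In other words, family-history-based testing had reached fewer than 50 of the 267 people at risk, while population-level DNA screening reached the remaining roughly 219.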
Breast cancer survivor Alicia Cook holds a letter from the University of Chicago informing her that test results showed she had the BRCA genetic defect linked to breast cancer. AP Photo/M. Spencer Green
It’s important to note that not all of these 267 individuals will develop cancer. However, among the volunteers who had died prior to the completion of the study, almost half had developed a BRCA-associated cancer.
In contrast, far fewer of those alive at the end of the study had developed a BRCA1- or BRCA2-associated cancer, suggesting that we have opportunities to intervene with those we have identified.
Early warning provides the chance to apply proven cancer prevention and early detection strategies for those who are now aware of their risk. While long-term follow-up is needed to document how successful this is, we have already detected a number of early cancers that we hope will result in better outcomes for those individuals.
In the decades ahead, medical researchers will work out the details of using DNA-based screening to understand risk, not only for cancer but for all kinds of health conditions. That information can be used to improve people’s lives by preventing cancer, heart disease and other diseases. Although we are still in the early days, several prominent organizations, including the U.S. National Academy of Medicine, are gathering expert groups to figure out what it will take to apply what we know now to DNA-based screening programs for large groups of people, perhaps even the entire U.S. population some day.
Large demonstration projects are needed to get a clearer picture of what works best. We need to determine what is the ideal age to do the screening, which set of genes to screen, what are the costs of such large programs and what are the long-term risks and benefits.
Ultimately, however, there should be little doubt that preventive health care management in the decades ahead will routinely use DNA-based screening to detect otherwise invisible risks that can be addressed to improve individual and population health.
Congress takes first steps toward regulating artificial intelligence
October 19, 2018
Author: Ana Santos Rutschman, Assistant Professor of Law, Saint Louis University
Disclosure statement: Ana Santos Rutschman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Some of the best known examples of artificial intelligence are Siri and Alexa, which listen to human speech, recognize words, perform searches and translate the text results back into speech. But these and other AI technologies raise important issues like personal privacy rights and whether machines can ever make fair decisions. As Congress considers whether to make laws governing how AI systems function in society, a congressional committee has highlighted concerns around the types of AI algorithms that perform specific – if complex – tasks.
Often called “narrow AI,” these devices’ capabilities are distinct from the still-hypothetical general AI machines, whose behavior would be virtually indistinguishable from human activity – more like the “Star Wars” robots R2-D2, BB-8 and C-3PO. Other examples of narrow AI include AlphaGo, a computer program that recently beat a human at the game of Go, and a medical device called OsteoDetect, which uses AI to help doctors identify wrist fractures.
As a teacher and adviser of students researching the regulation of emerging technologies, I view the congressional report as a positive sign of how U.S. policymakers are approaching the unique challenges posed by AI technologies. Before attempting to craft regulations, officials and the public alike need to better understand AI’s effects on individuals and society in general.
Concerns raised by AI technology
Based on information gathered in a series of hearings on AI held throughout 2018, the report highlights the fact that the U.S. is not a world leader in AI development. This has happened as part of a broader trend. Funding for scientific research has decreased since the early 2000s. In contrast, countries like China and Russia have boosted their spending on developing AI technologies.
As illustrated by the recent concerns surrounding Russia’s interference in U.S. and European elections, the development of ever more complex technologies raises concerns about the security and privacy of U.S. citizens. AI systems can now be used to access personal information, make surveillance systems more efficient and fly drones. Overall, this gives companies and governments new and more comprehensive tools to monitor and potentially spy on users.
Even though AI development is in its early stages, algorithms can already be easily used to mislead readers, social media users or even the public in general. For instance, algorithms have been programmed to target specific messages to receptive audiences or generate deepfakes, videos that can appear to present a person, even a politician, saying or doing something they never actually did.
Of course, like many other technologies, the same AI program can be used for both beneficial and malicious purposes. For instance, LipNet, an AI lip-reading program created at the University of Oxford, has a 93.4 percent accuracy rate. That’s far beyond the best human lip-readers, who have an accuracy rate between 20 and 60 percent. This is great news for people with hearing and speech impairments. At the same time, the program could also be used for broad surveillance purposes, or even to monitor specific individuals.
AI technology can be biased, just like humans
Some uses for AI may be less obvious, even to the people using the technology. Lately, people have become aware of biases in the data that powers AI programs. This has the potential to clash with generalized perceptions that a computer will impartially use data to make objective decisions. In reality, human-built algorithms will use imperfect data to make decisions that reflect human bias. Most crucially, the computer decision may be presented as, or even believed to be, fairer than a decision made by a human – when in fact the opposite may be true.
For instance, some courts use a program called COMPAS to decide whether to release criminal defendants on bail. However, there is evidence that the program is discriminating against black defendants, incorrectly rating them as more likely to commit future crimes than white defendants. Predictive technologies like this are becoming increasingly widespread. Banks use them to determine who gets a loan. Computer analysis of police data purports to predict where criminal activity will occur. In many cases, these programs only reinforce existing bias instead of eliminating it.
As policymakers begin to address the significant potential – for good and ill – of artificial intelligence, they’ll have to be careful to avoid stifling innovation. In my view, the congressional report is taking the right steps in this regard. It calls for more investment in AI and for funding to be available to more agencies, from NASA to the National Institutes of Health. It also cautions legislators against stepping in too soon, creating too many regulatory hurdles for technologies that are still developing.
More importantly, though, I believe people should begin looking beyond the metrics suggesting that AI programs are functional, time-saving and powerful. The public should start broader conversations about how to eliminate or lessen data bias as the technology moves on. If nothing else, adopters of algorithmic technology need to be made aware of the pitfalls of AI. Technologists may be unable to develop algorithms that are fair in measurable ways, but people can become savvier about how they work, what they’re good at – and what they’re not.
Even As Voter Registration Soars, Voter Suppression Lives On
Is your state working to increase accessibility to the ballot box, or to disenfranchise its citizens?
By Robert P. Alvarez | October 23, 2018
Highly charged midterm elections are just around the corner, and experts are predicting record-high midterm voter turnout. But millions of U.S. citizens are being systematically inhibited — either blatantly or covertly — from casting votes this November.
Voter suppression is real, and it’s very likely happening in your state. Your fellow Americans — and maybe you — are being denied the most fundamental right citizens of a democratic republic have: the right to elect those who govern. If that doesn’t have you up in arms, it should.
One state with a particularly extensive history of voter suppression is Florida, where one out of five African-American adults can’t vote due to disenfranchisement.
This November, Floridians will vote on whether to restore the right to vote to 1.5 million people affected by permanent felony disenfranchisement. Doing so would send a powerful message to the rest of the country, as Florida accounts for nearly half of the U.S.’s permanently disenfranchised population.
Meanwhile, a different mechanism of voter suppression threatens the legitimacy of the governor’s race in Georgia, where candidate for governor — and current secretary of state — Brian Kemp is reportedly behind the stalling of 53,000 voter registration applications. Among those, 70 percent belong to black voters.
Kemp is being sued by civil rights lawyers for allegedly violating voter protection laws with his “exact match” voter verification method, an excessively strict voter ID requirement that seems to disproportionately disqualify nonwhite voters. And while Kemp claims to be “protecting the integrity of elections,” he’s heard in leaked audio from one of his recent campaign events — obtained by Rolling Stone — fretting that Georgians “exercising their right to vote” could hurt his campaign.
Other forms of suppression are even more obvious.
For example, North Dakota’s state legislature passed a law blatantly targeting Native Americans. It required voter IDs containing a residential address. Native American reservations in North Dakota issue IDs with P.O. boxes rather than residential addresses, and legislators knew it.
Despite the law’s discriminatory nature, attempts to challenge it have failed. The Supreme Court upheld it, making voting as a Native American in North Dakota distinctly more difficult than voting as a non-Native. And while the progressive website Daily Kos was able to raise $100,000 to help cover the costs of new IDs, it shouldn’t have to come to that.
There are plenty of other examples of voter suppression as well, most of them disproportionately affecting people of color and low-income communities. It’s high time we do away with policies and practices designed to disempower certain populations politically.
And look, it isn’t all doom and gloom.
There are innovative policies being implemented around the country that make registering to vote easier, bypassing some of the more common forms of voter suppression.
One such policy is automatic voter registration — enacted by 13 states and the District of Columbia — which automatically registers voters upon renewal of their driver’s license. In Vermont’s case, this has led to an absolutely staggering 92.5 percent voter registration rate.
Additionally, over a dozen states and D.C. authorized pre-registration for youth under 18; 36 states and D.C. authorized online voter registration; and 15 states and D.C. authorized same day registration.
Policies like these simplify the voting process and increase voter turnout. Plainly, we need more of them. In the words of the late, great Rev. Dr. Martin Luther King Jr., “give us the ballot.”
Robert P. Alvarez is a communications assistant at the Institute for Policy Studies. Distributed by OtherWords.org.