What happens when you Google


Staff & Wire Reports



A cursor moves over Google's search engine page on Tuesday, Aug. 28, 2018, in Portland, Ore. Political leanings don’t factor into Google’s search algorithm. But the authoritativeness of page links the algorithm spits out and the perception of thousands of human raters do. (AP Photo/Don Ryan)


How Google search results work

By RYAN NAKASHIMA

AP Technology Writer

Tuesday, August 28

Political leanings don’t factor into Google’s search algorithm. But the authoritativeness of page links that the algorithm spits out and the perception of thousands of human raters do.

In a tweet early Tuesday, President Donald Trump called Google’s search results “rigged,” claiming that searches of “Trump news” only showed reporting by what he called the “Fake News Media.”

Google responded, saying in a statement, “We don’t bias our results toward any political ideology.”

Here’s a look at how Google returns results when you search for things, news, and even news about Trump.

WHAT GOOGLE’S BOTS DO

At its core, Google indexes the entire web — some hundreds of billions of pages — using programs called web crawlers. These bots collect descriptions of pages and their incoming links and save this information in Google’s data centers. When you search on Google, it scans this index — which is more than 100 million gigabytes in size — to quickly provide what it thinks are the most relevant results.
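To make the idea concrete, here is a minimal sketch of an inverted index, the basic data structure behind this kind of lookup. It is an illustration only — Google’s production index is vastly more sophisticated — and the page URLs and text are invented.

```python
from collections import defaultdict

# A toy inverted index: map each word to the set of pages containing it.
# This illustrates the concept, not Google's actual implementation.
def build_index(pages):
    """Build a word -> set-of-URLs mapping from {url: text} pages."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

def search(index, query):
    """Return the pages that contain every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = index.get(words[0], set()).copy()
    for word in words[1:]:
        results &= index.get(word, set())
    return results

# Invented example pages, purely for demonstration.
pages = {
    "example.com/a": "Google indexes the web with crawlers",
    "example.com/b": "crawlers collect links and page descriptions",
}
index = build_index(pages)
print(search(index, "crawlers links"))  # {'example.com/b'}
```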

Google knows the most popular search terms and, if you’re typing, offers to complete the words as you go.
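As a sketch of how that completion might work, the toy function below suggests the most popular stored queries that begin with whatever the user has typed. The query list and counts are invented; Google’s real system draws on far more signals.

```python
# Toy query autocompletion: rank stored queries that start with the
# typed prefix by popularity. Queries and counts are invented.
popular_queries = {
    "weather today": 9500,
    "weather tomorrow": 7200,
    "web crawler": 3100,
    "trump news": 8800,
}

def complete(prefix, k=3):
    """Return up to k popular queries beginning with the prefix."""
    prefix = prefix.lower()
    matches = [(count, q) for q, count in popular_queries.items()
               if q.startswith(prefix)]
    return [q for _, q in sorted(matches, reverse=True)[:k]]

print(complete("we"))  # ['weather today', 'weather tomorrow', 'web crawler']
```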

WHAT HUMANS DO

Search results are created by an algorithm that has been fine-tuned to incorporate the reviews of more than 10,000 people commonly known as search quality raters.

These individuals follow a set of guidelines to judge the quality of search results, particularly when Google engineers are considering changes to the search algorithm.

Last year, Google engineers tweaked the search algorithm 2,400 times based on the results of more than 270,000 experiments, rater reviews and live user tests.

When it comes to judging the quality of the top news stories that Google displays, three major factors come into play, according to Google: freshness, relevance and authoritativeness. Google’s crawlers scan pages more frequently if they change regularly.

In the case of news sites, new stories can be added to the index within seconds of publication. Fresher stories will get bumped up in search results.

Results that are more relevant to a search tend to appear higher on the results page.
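Taken together, those three signals — freshness, relevance and authoritativeness — can be imagined as inputs to a single scoring function. The sketch below is purely illustrative: the weights, the decay constant and the field names are invented, and Google’s actual ranking formula is far more complex and not public.

```python
import math
import time

# Illustrative ranking: combine the three signals Google names into one
# score. All weights and constants here are invented assumptions.
def score(result, query_terms, now=None):
    now = now or time.time()
    # Freshness: decays exponentially with age (half-life ~ a day).
    age_hours = (now - result["published"]) / 3600
    freshness = math.exp(-age_hours / 24)
    # Relevance: fraction of the page's words matching the query.
    text = result["text"].lower().split()
    relevance = sum(text.count(t) for t in query_terms) / max(len(text), 1)
    # Authority: assumed to be a precomputed 0..1 rating of the source.
    return 0.3 * freshness + 0.5 * relevance + 0.2 * result["authority"]

results = [
    {"text": "trump news update", "published": time.time() - 3600,
     "authority": 0.9},
    {"text": "old trump story", "published": time.time() - 86400 * 7,
     "authority": 0.4},
]
ranked = sorted(results, key=lambda r: score(r, ["trump", "news"]),
                reverse=True)
print([r["text"] for r in ranked])
```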

WHAT IS AUTHORITY?

Raters measure the authoritativeness, expertise and trustworthiness of the sources that appear in search results. Google suggests that raters consider recommendations from professional societies and experts to determine a page’s authority.

Examples of high-quality news sources include ones that have won Pulitzer Prizes, that clearly label advertising as such, and that garner positive reviews from users. Pages that spread hate, cause harm or misinform or deceive users are given low ratings, Google says.

The guidelines tell raters to give a low ranking to pages “deliberately created to deceive users.” They provide an example of a source that “looks like a news source” but “in fact has articles to manipulate users in order to benefit a person, business, government or other organization politically, monetarily, or otherwise.”

Results look much the same for most people, but they are heavily influenced by location, especially for searches tied to a physical place such as a store. A user’s search history, particularly frequently conducted searches, can also slightly influence results.

Opinion: Time to Dial Back Regulations on Informational Communications

By Mark Neeb

InsideSources.com

Federal Communications Commission Chairman Ajit Pai appeared recently before the Senate Committee on Commerce, Science and Transportation to discuss activities at the commission. As expected, public safety initiatives were on the agenda, as well as 5G innovation and increasing accessibility for all. However, one of the most important items discussed was reform to the Telephone Consumer Protection Act (TCPA), which affects the ability of thousands of businesses to communicate necessary information to consumers on their cell phones.

Congress created the TCPA in 1991 as a response to consumer outrage over increased, often scam telemarketing calls that had a way of always landing at dinnertime. But today, many of the law’s provisions are vague and outdated and fail to reflect developments in modern communication. FCC interpretations of the TCPA have also gone far beyond Congress’ intent of stymying abusive telemarketing and illegal scam calls and, as the D.C. Circuit Court recently recognized, have had an “eye-popping sweep” over all kinds of legitimate communication.

This confusion, coupled with the draconian liability associated with the TCPA, has unfairly punished legitimate businesses seeking to engage in critical communications with customers and employees, while bad actors continue to skirt laws and regulations and to harass or scam consumers.

Over the years members of Congress, the FCC, courts and other regulatory agencies have acknowledged the need to provide clarity for TCPA compliance, but all have fallen short in their attempts to do so. Now, Pai has relaunched the FCC’s effort to address bad actors and create the best possible environment for consumers. In particular, the FCC is turning its attention to illegal and fraudulent robocalls.

This summer, it levied the largest fine ever for this type of illegal behavior against a Florida resident who made 90 million illegally spoofed robocalls. This activity is encouraging, but more can — and should — be done to address issues with the TCPA and to draw greater distinctions between fraudulent actors and much-needed, legitimate business communications.

As Senate Commerce Committee members noted in their recent letter to Pai, updates to TCPA are desperately needed so that “legitimate businesses (can) stay in communication with consumers in a timely and effective manner.”

Similarly, the Small Business Administration Office of Advocacy noted, “In an environment where 50 (percent) to 70 (percent) of a business’ customers might only be reachable by mobile phone, it is important that the FCC move quickly to establish clear guidance on small-business compliance without depriving customers of required or desired communications.”

Even the Bureau of Consumer Financial Protection recently recognized in a letter to the FCC that “consumers benefit from communications with consumer financial products providers in many contexts.”

Currently, because of onerous compliance standards, consumers are being robbed of all kinds of vital information, such as calls from their medical providers, an alert that a package is at their front door, or even notice that their credit card has been compromised or stolen.

While we are grateful that Pai discussed this issue in the recent hearing, including outlining the FCC’s “two-track approach” to regulation and enforcement in the space, the FCC needs to keep focusing on clarifying how these legitimate businesses can communicate with their own customers and other consumers. This clarification should provide concise rules for the use of innovative technologies to connect with consumers on their cell phones and via text message, overwhelmingly their preferred method of receiving information.

The FCC also should provide an appropriately tailored interpretation of what is considered an Automatic Telephone Dialing System (ATDS), clarifying that not all predictive dialers are ATDSs. It should also make clear that under TCPA rules “capacity” means present ability, and that when human intervention is required to place a call, that call is not made using an ATDS.

Furthermore, the FCC must delineate clear boundaries between harmful, illegal robocalling and the legitimate, responsible businesses that use new technologies to improve Americans’ lives by keeping them informed about events that affect their everyday lives. Schools, doctors’ offices, pharmacies and financial institutions are just a few of the industries that rely on the ability to communicate with consumers on their cell phones, but they are limited by vague and outdated interpretations of TCPA statutes.

TCPA reform and modernization have long deserved serious attention in the halls of Congress and throughout Washington, and we are encouraged by the discussion of this critical topic. Keeping the TCPA stagnant will harm American families who need to be kept informed efficiently and conveniently throughout their busy lives, while doing nothing to eliminate the bad actors who continue to harm American consumers through illegal practices that are abusive, opaque and damaging to our nation’s overall wellbeing and productivity.

We urge the committee to take up this issue in future hearings as well, and we urge Chairman Pai and the FCC to continue with swift action on reform.

ABOUT THE WRITER

Mark Neeb is the CEO of ACA International. He wrote this for InsideSources.com.

Opinion: The Vietnamization of the War on Terror

By Bill O’Keefe

InsideSources.com

President Lyndon B. Johnson and Defense Secretary Robert McNamara knew by 1964 that the war in Vietnam could not be won, at least not in the way we usually think about winning wars. There is also historical evidence that President John F. Kennedy would have started removing troops had he won re-election in 1964.

The problem is that neither acted, because they didn’t want to be labeled soft on communism or to be the first president to lose a war. Johnson kept doubling down, and finally Richard Nixon and Henry Kissinger concocted a way to bring the sad tragedy of Vietnam to an end, at the cost of 58,220 U.S. military lives.

Now we are in the 17th year of the war on terror in Iraq and Afghanistan, and none of our wartime presidents has clearly defined when our troops will be brought home. As in Vietnam, we seem to just keep on keeping on.

Although we have not lost anywhere near 58,000 troops, the nearly 14,000 troops and contractors who have been killed are too many. In addition to the deaths, there have been more than 52,000 wounded and 900,000 disability claims filed with the Department of Veterans Affairs.

In addition to casualties, the Congressional Research Service put the cost of these two wars at $1.6 trillion as of 2014. More comprehensive cost analyses — by Brown University and CSIS, for example — estimate the total cost at between $4 trillion and $6 trillion. Whichever number is closest to being right, it is large, representing 20 percent or more of our national debt. Not only does that add significantly to our fiscal problems, but, as in Vietnam, the burden of paying that cost has been shifted to future generations.

We went into Iraq to find weapons of mass destruction and to depose Saddam Hussein, and into Afghanistan to retaliate for the 9/11 attacks. No weapons of mass destruction were found, and we mishandled the aftermath of deposing Hussein. Osama bin Laden was killed seven years ago, and al-Qaida and the Taliban have been either destroyed or seriously damaged. So why do we still have 5,000 troops in Iraq and 15,000 in Afghanistan? What is the end game for the complete withdrawal of troops?

The official narrative is that our forces remain to help bring political stability to both countries. The real reason is that our most recent presidents are victims of the Vietnam syndrome. President Barack Obama came close to withdrawing troops from Afghanistan but got wobbly for fear of being accused of losing the war on terror or appeasing terrorists.

In the case of both Iraq and Afghanistan, history demonstrates that the goal of political stability is an illusion. The Sunni and Shia populations have been in a state of conflict since about A.D. 700, and Afghanistan is a loose collection of warring tribes. The notion that we know how to promote a stable political environment is a triumph of hope over experience.

In her book “The March of Folly,” which chronicled the pursuit of policy contrary to self-interest from Troy to Vietnam, Barbara Tuchman had this to say about Vietnam: “No one in the Executive Branch advocated withdrawal, partly in fear of encouragement to Communism and damage to American prestige, partly in fear of domestic reprisals.”

And: “A last folly was the absence of reflective thought about the nature of what we were doing, about effectiveness in relation to the object sought. … Absence of intelligent thinking in rulership is another of the universals, and raises the question whether in modern states there is something about political and bureaucratic life that subdues the functioning of intellect.”

Writing 34 years ago, Tuchman prophetically described the situation that confronts us today.

ABOUT THE WRITER

Bill O’Keefe is the founder of Solutions Consulting in Midlothian, Va. He wrote this for InsideSources.com.

Opinion: The ‘I’ Factor

By Richard Williams

InsideSources.com

A provocative new study projects more car wrecks and worse food safety in the future because of climate change. According to the authors, higher temperatures are more hospitable to food-borne pathogens (like salmonella) and lead to poor driving. In addition, “exposure to hotter temperatures reduces the activity of two groups of regulators — police officers and food-safety inspectors — at times that the risks they are tasked with overseeing are highest.”

These are good things to consider. We should be thinking about the unexpected when it comes to environmental (and indeed any major) changes to the world we live in. But when you hear an alarming projection about the future, don’t forget about the “I” factor: human ingenuity, invention and innovation.

There is a key phrase in the study that many non-scientists overlook: “ceteris paribus.” That’s Latin for “all other things equal.” To predict risks 30 to 80 years in the future if nothing else changes is like saying, “You are driving toward a cliff and, ceteris paribus, you’ll die in a flaming car crash.” Perhaps you have time to turn or hit the brakes.

By necessity, researchers must work with the data they have — but that doesn’t change the fact that we can’t predict the world as it will be decades from now. Today’s traffic and food-safety challenges won’t necessarily be tomorrow’s.

Let’s start with traffic safety. Loup Ventures’ Gene Munster predicts we will begin to notice driverless cars by 2020, and his company estimates that 95 percent of new vehicles sold by 2040 will be fully autonomous. Because autonomous vehicles hold enormous promise to reduce accidents, this could turn the auto insurance, body shop and auto parts industries — which partly depend on more than 6 million crashes per year — on their heads, to say nothing of hospital emergency rooms, orthopedic surgeons and police.

In fact, many vehicles, including long-haul and delivery trucks, may not even have human occupants. So even if there is a crash, there may be no one inside to be hurt. Given that these cars and trucks will be programmed to obey the laws — which also might be quite different in 30 years — we may be far less reliant on police to manage our roads.

Now consider food safety. The Centers for Disease Control and Prevention estimates around 48 million cases of food-borne disease each year, resulting in 3,000 deaths — a figure that inadvertently highlights decades of failed FDA and USDA regulations. But technology might finally make some long-awaited breakthroughs.

Think about what already exists. Consumers can now receive alerts from smart refrigerators, meaning fewer people eating spoiled food and smaller outbreaks. Whole genome sequencing will help manufacturers identify and track pathogens. Quick Response bar codes (those black squares that look like a puzzle map) and blockchain technology are also poised to help trace problematic foods to their sources, which will also limit the size of outbreaks.

There will also likely be fewer food-borne disease outbreaks to begin with. Food companies that traditionally rely on pasteurization — essentially high temperature cooking — are experimenting with electromagnetic waves, electric currents and infrared heating. For fruits and vegetables eaten raw and other minimally processed foods, companies are experimenting with high pressure, electricity, ultraviolet light and irradiation.

Intelligent food packaging using nanotechnology is being designed to alert us to food-safety problems inside the package and even use antimicrobial sprays to deactivate pathogens. Even that tech may become obsolete: 3D food printers are already on the market — no, this isn’t science fiction — preparing what we need, when we need it, and eliminating the spoilage between creation and consumption.

Will all of these things come to pass? Probably not. But this is only a sampling of what we know about now. Had this op-ed been written in 1989, not quite 30 years ago, there would have been no World Wide Web with which to locate and categorize these technologies. What will tomorrow hold?

Dire predictions about problems decades out serve a useful purpose by alerting us to the serious challenges we will inevitably face. But ingenuity, invention and innovation will be on our side. Problems are instantly turned into market demands for which entrepreneurs naturally want to supply a solution. Human ingenuity nearly always finds a way. So, don’t panic.

ABOUT THE WRITER

Richard Williams is a senior affiliated scholar with the Mercatus Center at George Mason University and a former director for social sciences at the Center for Food Safety and Applied Nutrition in the Food and Drug Administration. He wrote this for InsideSources.com.

The Conversation

Detecting ‘deepfake’ videos in the blink of an eye

August 29, 2018

It’s actually very hard to find photos of people with their eyes closed.

Author

Siwei Lyu

Associate Professor of Computer Science; Director, Computer Vision and Machine Learning Lab, University at Albany, State University of New York

Disclosure statement

Siwei Lyu receives funding from NSF and DARPA.

Partners

University at Albany, State University of New York provides funding as a founding partner of The Conversation US.

A new form of misinformation is poised to spread through online communities as the 2018 midterm election campaigns heat up. Called “deepfakes” after the pseudonymous online account that popularized the technique – which may have chosen its name because the process uses a technical method called “deep learning” – these fake videos look very realistic.

So far, people have used deepfake videos in pornography and satire to make it appear that famous people are doing things they wouldn’t normally. But it’s almost certain deepfakes will appear during the campaign season, purporting to depict candidates saying things or going places the real candidate wouldn’t.

It’s Barack Obama – or is it?

Because these techniques are so new, people are having trouble telling the difference between real videos and the deepfake videos. My work, with my colleague Ming-Ching Chang and our Ph.D. student Yuezun Li, has found a way to reliably tell real videos from deepfake videos. It’s not a permanent solution, because technology will improve. But it’s a start, and offers hope that computers will be able to help people tell truth from fiction.

What’s a ‘deepfake,’ anyway?

Making a deepfake video is a lot like translating between languages. Services like Google Translate use machine learning – computer analysis of tens of thousands of texts in multiple languages – to detect word-use patterns that they use to create the translation.

Deepfake algorithms work the same way: They use a type of machine learning system called a deep neural network to examine the facial movements of one person. Then they synthesize images of another person’s face making analogous movements. Doing so effectively creates a video of the target person appearing to do or say the things the source person did.

How deepfake videos are made.

Before they can work properly, deep neural networks need a lot of source information, such as photos of the people who are the source or target of the impersonation. The more images used to train a deepfake algorithm, the more realistic the digital impersonation will be.
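A common open-source design — and an assumption here, since the article doesn’t name one — pairs a single shared encoder with one decoder per person; swapping decoders at generation time maps one person’s expressions onto the other’s face. The PyTorch sketch below shows only the shape of that idea, with arbitrary layer sizes.

```python
import torch.nn as nn

# Schematic of the classic deepfake architecture (an assumption, not the
# researchers' code): one shared encoder learns facial structure from
# both people; each person gets a dedicated decoder. Feeding person A's
# face through person B's decoder produces the face swap.
class FaceSwapper(nn.Module):
    def __init__(self, latent=256):
        super().__init__()
        # Toy fully connected layers; real systems use convolutions.
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 64 * 3, latent), nn.ReLU())
        self.decoder_a = nn.Sequential(
            nn.Linear(latent, 64 * 64 * 3), nn.Sigmoid())
        self.decoder_b = nn.Sequential(
            nn.Linear(latent, 64 * 64 * 3), nn.Sigmoid())

    def forward(self, face, target="b"):
        """Encode a face, then decode it as the chosen identity."""
        code = self.encoder(face)
        decoder = self.decoder_b if target == "b" else self.decoder_a
        return decoder(code).view(-1, 3, 64, 64)
```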

Detecting blinking

There are still flaws in this new type of algorithm. One of them has to do with how the simulated faces blink – or don’t. Healthy adult humans blink somewhere between every 2 and 10 seconds, and a single blink takes between one-tenth and four-tenths of a second. That’s what would be normal to see in a video of a person talking. But it’s not what happens in many deepfake videos.

  • A real person blinks while talking.

  • A simulated face doesn’t blink the way a real person does.

When a deepfake algorithm is trained on face images of a person, it depends on the photos available on the internet to serve as training data. Even for people who are photographed often, few images are available online showing their eyes closed. Not only are photos like that rare – because people’s eyes are open most of the time – but photographers don’t usually publish images where the main subjects’ eyes are shut.

Without training images of people blinking, deepfake algorithms are less likely to create faces that blink normally. When we calculated the overall rate of blinking and compared it with the natural range, we found that characters in deepfake videos blink much less frequently than real people do. Our research uses machine learning to examine eye opening and closing in videos.
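As a back-of-the-envelope version of that comparison, the sketch below counts blinks in a per-frame sequence of eye states and flags a video whose blink rate falls below the natural range cited above (roughly one blink every 2 to 10 seconds). The threshold and frame rate are illustrative assumptions, not the researchers’ actual parameters.

```python
# Toy blink-rate check: eye_open is a per-frame list of booleans
# (True = eye open). An open -> closed transition counts as one blink.
def count_blinks(eye_open):
    return sum(1 for prev, cur in zip(eye_open, eye_open[1:])
               if prev and not cur)

def looks_like_deepfake(eye_open, fps=30.0, min_blinks_per_min=6.0):
    """Flag videos blinking less than ~once per 10 seconds (assumed cutoff)."""
    minutes = len(eye_open) / fps / 60
    if minutes == 0:
        return False
    return count_blinks(eye_open) / minutes < min_blinks_per_min

# 60 seconds of 30 fps video containing a single blink: suspiciously few.
frames = [True] * 900 + [False] * 6 + [True] * 894
print(looks_like_deepfake(frames))  # True
```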

That observation inspired a way to detect deepfake videos: we developed a method that detects when the person in a video blinks. Specifically, it scans each frame of the video in question, detects the faces in it and then locates the eyes automatically. It then uses another deep neural network to determine whether each detected eye is open or closed, based on the eye’s appearance, geometric features and movement.
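A rough sketch of that pipeline appears below, using OpenCV’s stock Haar cascades for the face and eye detection steps. The open/closed classifier is the piece the researchers built with a deep neural network; here it is left as a placeholder, so this is an assumption about the structure, not their actual code.

```python
import cv2  # OpenCV

# Stock Haar cascades shipped with OpenCV for face and eye detection.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def eye_is_open(eye_image):
    """Placeholder for a trained open/closed-eye classifier (assumed)."""
    raise NotImplementedError

def eye_states(video_path):
    """Yield True/False per detected eye, frame by frame."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            face = gray[y:y + h, x:x + w]
            for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face):
                yield eye_is_open(face[ey:ey + eh, ex:ex + ew])
    cap.release()
```

The per-eye states this generator yields could feed directly into the blink-rate check sketched earlier.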

We know that our work is taking advantage of a flaw in the sort of data available to train deepfake algorithms. To avoid falling prey to a similar flaw, we have trained our system on a large library of images of both open and closed eyes. This method seems to work well, and as a result we’ve achieved a detection rate of more than 95 percent.

This isn’t the final word on detecting deepfakes, of course. The technology is improving rapidly, and the competition between generating and detecting fake videos is analogous to a chess game. In particular, blinking can be added to deepfake videos by including face images with closed eyes or using video sequences for training. People who want to confuse the public will get better at making false videos – and we and others in the technology community will need to continue to find ways to detect them.

