Astronomical moon discovery


SCIENCE

Staff & Wire Reports



This illustration provided by Dan Durda shows the exoplanet Kepler-1625b with a hypothesized moon. On Thursday, Oct. 4, 2018, two Columbia University researchers reported their results that the potential exomoon would be the size of Neptune or Uranus. The exoplanet, about 8,000 light-years away, is as big as Jupiter. (Dan Durda via AP)



Have astronomers found 1st moon outside our solar system?

By MARCIA DUNN

AP Aerospace Writer

Wednesday, October 3

CAPE CANAVERAL, Fla. (AP) — Astronomers may have found the first moon outside our solar system, a gas behemoth the size of Neptune.

Plenty of planets exist beyond our solar system, but a moon around one of those worlds has yet to be confirmed. Two Columbia University researchers presented their tantalizing evidence for a moon Wednesday.

The potential moon would be considerably larger than Earth — about the size of Neptune or Uranus. The planet it orbits is as big as mammoth Jupiter. This apparent super-size pairing of a gaseous moon and planet is 8,000 light-years away.

Researchers Alex Teachey and David Kipping evaluated 284 planets outside our solar system that had already been discovered by NASA’s Kepler Space Telescope. Only one planet held promise for hosting a moon, one around the star known as Kepler-1625, which is about the size of our sun but older.

So last October, the pair directed the Hubble Space Telescope at the star in an attempt to verify — or rule out — the possibility of a moon orbiting the planet Kepler-1625b. They were on the lookout for a second temporary dimming of starlight. The main dip in stellar brightness would be the planet itself crossing in front of its star. Another dip could well be a moon — known as an exomoon outside our solar system.

The more powerful and precise Hubble telescope detected a second and smaller decrease in starlight 3 ½ hours after the planet passed in front of the star — “like a dog following its owner on a leash,” as Kipping put it. The observation period, however, ended before the moon could complete its transit. That’s why the astronomers need another look with Hubble, hopefully next spring.
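For readers who want to see the idea in miniature, here is a toy sketch – not the researchers' actual pipeline – of how a smaller, delayed dip might stand out in a light curve. Every number in it (dip depths, durations, noise level) is an illustrative assumption, not a published measurement.

```python
# Toy light-curve sketch: a planet transit followed by a shallower "moon" dip.
# All depths, durations and noise levels are made-up illustrative values.
import numpy as np

hours = np.arange(0.0, 40.0, 0.1)      # time axis in hours
flux = np.ones_like(hours)             # normalized starlight

def add_transit(flux, hours, mid, width, depth):
    """Carve a box-shaped dip; depth is the fractional drop in starlight."""
    dipped = flux.copy()
    dipped[np.abs(hours - mid) < width / 2] -= depth
    return dipped

# Hypothetical planet transit, then a shallower dip 3.5 hours after it ends.
flux = add_transit(flux, hours, mid=15.0, width=6.0, depth=0.010)
flux = add_transit(flux, hours, mid=15.0 + 3.0 + 3.5, width=3.0, depth=0.002)
flux = flux + np.random.default_rng(0).normal(0.0, 1e-4, hours.size)  # photometric noise

# Flag each dip as a contiguous run of points well below the baseline.
below = flux < 1.0 - 1e-3
edges = np.flatnonzero(np.diff(below.astype(int)))
for start, end in zip(edges[::2] + 1, edges[1::2] + 1):
    depth_pct = (1.0 - flux[start:end].min()) * 100
    print(f"dip from {hours[start]:.1f}h to {hours[end - 1]:.1f}h, depth ~ {depth_pct:.2f}%")
```

With these toy numbers, the script reports one deep dip followed by a much shallower one a few hours later – the kind of signature the astronomers describe.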

Despite the evidence, Teachey stressed “we are urging caution here.”

“The first exomoon is obviously an extraordinary claim and it requires extraordinary evidence,” Teachey said. “Furthermore, the size we’ve calculated for this moon, about the size of Neptune, has hardly been anticipated and so that, too, is reason to be careful here.”

He added: “We’re not cracking open Champagne bottles just yet on this one.”

If indeed a moon, it would be about 2 million miles (3 million kilometers) from its planet and appear about twice as big in its planet’s sky as our moon does in Earth’s. The astronomers are uncertain how this potential moon might have formed, given its size.
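As a quick plausibility check on that comparison, the small-angle arithmetic below uses standard radii for our moon and Neptune plus the separation quoted in the article; it is a back-of-envelope sketch, not a figure from the study.

```python
# Back-of-envelope check of the "twice as big in its sky" comparison.
# Radii are standard reference values; the 3-million-km separation is from the article.
import math

moon_radius_km, earth_moon_dist_km = 1737.0, 384_400.0
neptune_radius_km, exomoon_dist_km = 24_622.0, 3_000_000.0

def angular_diameter_deg(radius_km, distance_km):
    # Small-angle approximation: angular diameter ~ 2R / d (radians).
    return math.degrees(2 * radius_km / distance_km)

ours = angular_diameter_deg(moon_radius_km, earth_moon_dist_km)
theirs = angular_diameter_deg(neptune_radius_km, exomoon_dist_km)
print(f"our moon: {ours:.2f} deg; Neptune-size exomoon: {theirs:.2f} deg "
      f"({theirs / ours:.1f}x larger)")
```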

“If confirmed, this finding could completely shake up our understanding of how moons are formed and what they can be made of,” NASA’s science mission chief Thomas Zurbuchen said in a statement.

According to the researchers, another compelling piece of evidence in favor of a moon is that the planet passed in front of its star more than an hour earlier than predicted. A moon’s gravitational tug could cause that kind of wobble in the planet’s path, they noted.

Kipping said that’s how the Earth and moon would appear from far away. This particular planet — or exoplanet — is about the same distance from its star as Earth is to the sun.
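The back-of-envelope sketch below shows why a Neptune-mass companion could plausibly shift a Jupiter-mass planet’s transit by about an hour: the planet actually orbits the pair’s common center of mass, and that offset, covered at the planet’s orbital speed, becomes a timing shift. The masses, separation and orbital speed here are assumed round values, not numbers from the paper.

```python
# Rough consistency check (my own back-of-envelope, not the paper's model):
# a moon displaces its planet from the pair's barycenter, so the planet
# arrives at mid-transit early or late by roughly (offset / orbital speed).
M_jupiter_kg = 1.898e27        # planet assumed Jupiter-mass ("as big as Jupiter")
M_neptune_kg = 1.024e26        # moon assumed Neptune-mass
separation_km = 3_000_000      # planet-moon distance quoted in the article
orbital_speed_km_s = 30.0      # ~Earth-like speed for a planet ~1 AU from a sun-like star

barycenter_offset_km = separation_km * M_neptune_kg / (M_jupiter_kg + M_neptune_kg)
timing_shift_min = barycenter_offset_km / orbital_speed_km_s / 60
print(f"offset ~ {barycenter_offset_km:,.0f} km, "
      f"transit shifts by ~ {timing_shift_min:.0f} minutes")
```

With these assumed values the shift comes out to roughly 85 minutes – consistent with the "more than an hour" the researchers reported.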

Another planet could cause the same gravitational nudge, the researchers noted, although Kepler observations have come up empty in that regard. Kepler-1625b is the only planet found so far around this star.

For Teachey and Kipping, the best and simplest explanation is that Kepler-1625b has a moon.

“We’ve tried our best to rule out other possibilities,” Kipping told reporters. “But we were unable to find any other single hypothesis which can explain all of the data we have.”

Their findings were published in the journal Science Advances. The journal’s deputy editor, Kip Hodges, praised the researchers for their cautious tone, given the difficult and complicated process of identifying an exomoon.

“If this finding stands up to further observational scrutiny, it represents a major milestone in the field of astronomy,” Hodges said.

The Columbia astronomers said they may be able to clinch this as early as next year, with more Hubble viewing. In the meantime, they’re encouraging other scientists to join in and embracing the scrutiny that’s sure to come.

Whether confirmed or not, the subject offers insight into how rare — or how common — our own solar system might be.

Moons are abundant in our own solar system, with close to 200. Of the eight planets in our solar system, only Mercury and Venus have none.

Given that both the planet and its potential moon are gas giants, no one is suggesting conditions that might support life.

“But going forward, I think we’re opening the doors to finding worlds like that,” Teachey said.

The Associated Press Health & Science Department receives support from the Howard Hughes Medical Institute’s Department of Science Education. The AP is solely responsible for all content.

The Conversation: Academic rigor, journalistic flair

50 years old, ‘2001: A Space Odyssey’ still offers insight about the future

October 3, 2018

Author

Daniel N. Rockmore

Professor, Department of Mathematics, Computational Science, and Computer Science, Dartmouth College

Disclosure statement

Daniel N. Rockmore is Associate Dean for the Sciences, Director of the Neukom Institute for Computational Science, and Professor of Mathematics and Computer Science at Dartmouth College. He is also on the Science Steering Committee of The Santa Fe Institute and a member of its External Faculty.

Watching a 50th anniversary screening of “2001: A Space Odyssey,” I found myself, a mathematician and computer scientist whose research includes work related to artificial intelligence, comparing the story’s vision of the future with the world today.

The movie was made through a collaboration between science fiction writer Arthur C. Clarke and film director Stanley Kubrick, inspired by Clarke’s novel “Childhood’s End” and his lesser-known short story “The Sentinel.” A striking work of speculative fiction, it depicts – in terms sometimes hopeful and other times cautionary – a future of alien contact, interplanetary travel, conscious machines and even the next great evolutionary leap of humankind.

The most obvious way in which 2018 has fallen short of the vision of “2001” is in space travel. People are not yet routinely visiting space stations, making unremarkable visits to one of several moon bases, nor traveling to other planets. But Kubrick and Clarke hit the bull’s-eye when imagining the possibilities, problems and challenges of the future of artificial intelligence.

What can computers do?

A chief drama of the movie can in many ways be viewed as a battle to the death between human and computer. The artificial intelligence of “2001” is embodied in HAL, the omniscient computational presence, the brain of the Discovery One spaceship – and perhaps the film’s most famous character. HAL marks the pinnacle of computational achievement: a self-aware, seemingly infallible device and a ubiquitous presence in the ship, always listening, always watching.

HAL is not just a technological assistant to the crew, but rather – in the words of the mission commander Dave Bowman – the sixth crew member. The humans interact with HAL by speaking to him, and he replies in a measured male voice, somewhere between stern-yet-indulging parent and well-meaning nurse. HAL is Alexa and Siri – but much better. HAL has complete control of the ship and also, as it turns out, is the only crew member who knows the true goal of the mission.

Ethics in the machine

The tension of the film’s third act revolves around Bowman and his crewmate Frank Poole becoming increasingly aware that HAL is malfunctioning, and HAL’s discovery of these suspicions. Dave and Frank want to pull the plug on a failing computer, while self-aware HAL wants to live. All want to complete the mission.

The life-or-death chess match between the humans and HAL offers precursors of some of today’s questions about the prevalence and deployment of artificial intelligence in people’s daily lives.

First and foremost is the question of how much control people should cede to artificially intelligent machines, regardless of how “smart” the systems might be. HAL’s control of Discovery is like a deep-space version of the networked home of the future or the driverless car. Citizens, policymakers, experts and researchers are all still exploring the degree to which automation could – or should – take humans out of the loop. Some of the considerations involve relatively simple questions about the reliability of machines, but other issues are more subtle.

The actions of a computational machine are dictated by decisions encoded by humans in algorithms that control the devices. An algorithm generally has some quantifiable goal toward which each of its actions should make progress – like winning a game of checkers, chess or Go. Just as an AI system would analyze positions of game pieces on a board, it can also measure the efficiency of a warehouse or the energy use of a data center.

But what happens when a moral or ethical dilemma arises en route to the goal? For the self-aware HAL, completing the mission – and staying alive – wins out when measured against the lives of the crew. What about a driverless car? Is the mission of a self-driving car, for instance, to get a passenger from one place to another as quickly as possible – or to avoid killing pedestrians? When someone steps in front of an autonomous vehicle, those goals conflict. That might feel like an obvious “choice” to program away, but what if the car needs to “choose” between two different scenarios, each of which would cause a human death?
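One way to see how that judgment gets buried inside software is a deliberately toy cost function, sketched below; the names, numbers and weights are hypothetical and have nothing to do with any real vehicle’s code.

```python
# A deliberately toy illustration (unrelated to any real vehicle software):
# when an objective mixes "get there fast" with "avoid harm," the chosen weight
# decides the outcome -- and picking that weight is a human, moral decision.
def route_cost(travel_minutes, expected_harm, harm_weight=1_000_000.0):
    # A huge harm_weight encodes "never trade safety for speed."
    return travel_minutes + harm_weight * expected_harm

options = {"swerve_and_stop": (12.0, 0.0), "maintain_speed": (9.0, 0.01)}
best = min(options, key=lambda name: route_cost(*options[name]))
print(best)   # -> "swerve_and_stop" with these weights
```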

Under surveillance

In one classic scene, Dave and Frank retreat to a pod aboard the ship, where they think HAL can’t hear them, to discuss their doubts about HAL’s functioning and his ability to control the ship and guide the mission. They broach the idea of shutting him down. Little do they know that HAL’s cameras can see them: The computer is reading their lips through the pod window and learns of their plans.

In the modern world, a version of that scene happens all day, every day. Most of us are effectively under continuous monitoring, through our almost-always-on phones or corporate and government surveillance of real-world and online activities. The boundary between private and public has grown increasingly fuzzy and keeps blurring.

The characters’ relationships in the movie made me think a lot about how people and machines might coexist, or even evolve together. Through much of the movie, even the humans talk to each other blandly, without much tone or emotion – as they might talk to a machine, or as a machine might talk to them. HAL’s famous death scene – in which Dave methodically disconnects its logic links – made me wonder whether intelligent machines will ever be afforded something equivalent to human rights.

Clarke believed it quite possible that humans’ time on Earth was but a “brief resting place” and that the maturation and evolution of the species would necessarily take people well beyond this planet. “2001” ends optimistically, vaulting a human through the “Stargate” to mark the rebirth of the race. To do this in reality will require people to figure out how to make the best use of the machines and devices that they are building, and to make sure we don’t let those machines control us.

The Conversation

Brewing a great cup of coffee depends on chemistry and physics

September 27, 2017

Author

Christopher H. Hendon

Assistant Professor of Computational Materials and Chemistry, University of Oregon

Disclosure statement

Christopher H. Hendon does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Partners

University of Oregon provides funding as a member of The Conversation US.

Coffee is unique among artisanal beverages in that the brewer plays a significant role in its quality at the point of consumption. In contrast, drinkers buy draft beer and wine as finished products; the only consumer-controlled variable is the temperature at which they are served.

Why does coffee made by a barista at a cafe always taste different from a cup brewed at home from the same beans?

It may be down to their years of training, but more likely it’s their ability to harness the principles of chemistry and physics. I am a materials chemist by day, and many of the physical considerations I apply to other solids apply here. The variables of temperature, water chemistry, particle size distribution, ratio of water to coffee, time and, perhaps most importantly, the quality of the green coffee all play crucial roles in producing a tasty cup. It’s how we control these variables that allows for that cup to be reproducible.

How strong a cup of joe?

Besides the psychological and environmental contributions to why a barista-prepared cup of coffee tastes so good in the cafe, we need to consider the brew method itself.

We humans seem to like drinks that contain coffee constituents (organic acids, Maillard products, esters and heterocycles, to name a few) at 1.2 to 1.5 percent by mass (as in filter coffee), and also favor drinks containing 8 to 10 percent by mass (as in espresso). Concentrations outside of these ranges are challenging to execute. There are a limited number of technologies that achieve 8 to 10 percent concentrations, the espresso machine being the most familiar.
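A rough sketch of where those percentages come from: strength is simply the mass of dissolved coffee divided by the mass of the finished drink. The doses, extraction yield and beverage weights below are typical rules of thumb, not measurements from this article.

```python
# Strength (percent by mass) = dissolved coffee solids / mass of the drink.
# Doses, yields and beverage masses below are common rules of thumb.
def brew_strength_percent(coffee_g, beverage_g, extraction_yield=0.20):
    dissolved_g = coffee_g * extraction_yield   # coffee solids that end up in the cup
    return 100 * dissolved_g / beverage_g

# Filter brew: ~30 g coffee yielding ~440 g of drink; espresso: 18 g yielding ~36 g.
print(f"filter-style:  {brew_strength_percent(30, 440):.1f}% by mass")   # ~1.4%
print(f"espresso-like: {brew_strength_percent(18, 36):.1f}% by mass")    # ~10%
```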

There are many ways, though, to achieve a drink containing 1.2 to 1.5 percent coffee. A pour-over, Turkish, Arabic, Aeropress, French press, siphon or batch brew (that is, regular drip) apparatus – each produces coffee that tastes good around these concentrations. These brew methods also boast an advantage over their espresso counterpart: They are cheap. An espresso machine can produce a beverage of this concentration: the Americano, which is just an espresso shot diluted with water to the concentration of filter coffee.

All of these methods result in roughly the same amount of coffee in the cup. So why can they taste so different?

When coffee meets water

There are two families of brewing devices within the low-concentration methods – those that fully immerse the coffee in the brew water and those that flow the water through the coffee bed.

From a physical perspective, the major difference is that the temperature of the coffee particulates is higher in the full immersion system. The slowest part of coffee extraction is not the rate at which compounds dissolve from the particulate surface. Rather, it’s the speed at which coffee flavor moves through the solid particle to the water-coffee interface, and this speed increases with temperature.

A higher particulate temperature means that more of the tasty compounds trapped within the coffee particulates will be extracted. But higher temperature also lets more of the unwanted compounds dissolve in the water, too. The Specialty Coffee Association presents a flavor wheel to help us talk about these flavors – from green/vegetative or papery/musty through to brown sugar or dried fruit.

Pour-overs and other flow-through systems are more complex. Unlike full immersion methods where time is controlled, flow-through brew times depend on the grind size since the grounds control the flow rate.

The water-to-coffee ratio also affects the brew time. Simply grinding finer to increase extraction invariably changes the brew time, as the water seeps more slowly through finer grounds. One can increase the water-to-coffee ratio by using less coffee, but as the mass of coffee is reduced, the brew time also decreases. Optimizing filter coffee brewing is hence multidimensional and trickier than full immersion methods.

Other variables to try to control

Even if you can optimize your brew method and apparatus to precisely mimic your favorite barista, there is still a near-certain chance that your home brew will taste different from the cafe’s. There are three subtleties that have tremendous impact on the coffee quality: water chemistry, particle size distribution produced by the grinder and coffee freshness.

First, water chemistry: Given coffee is an acidic beverage, the acidity of your brew water can have a big effect. Brew water containing low levels of both calcium ions and bicarbonate (HCO₃⁻) – that is, soft water – will result in a highly acidic cup, sometimes described as sour. Brew water containing high levels of HCO₃⁻ – typically, hard water – will produce a chalky cup, as the bicarbonate has neutralized most of the flavorsome acids in the coffee.

Ideally we want to brew coffee with water containing chemistry somewhere in the middle. But there’s a good chance you don’t know the bicarbonate concentration in your own tap water, and a small change makes a big difference. To taste the impact, try brewing coffee with Evian – one of the highest bicarbonate concentration bottled waters, at 360 mg/L.
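For scale, the short conversion below translates bicarbonate in mg/L into the “alkalinity as CaCO3” figure most water reports quote; the roughly 40 mg/L target is a commonly cited specialty-coffee guideline, offered here only as a reference point.

```python
# Convert bicarbonate (mg/L) to "alkalinity as CaCO3," the figure on water reports.
# The ~40 mg/L target is a commonly cited specialty-coffee guideline, for scale only.
HCO3_MOLAR_MASS = 61.0      # g/mol
CACO3_EQUIV_MASS = 50.0     # g per equivalent

def alkalinity_as_caco3(hco3_mg_per_l):
    return hco3_mg_per_l * CACO3_EQUIV_MASS / HCO3_MOLAR_MASS

for label, hco3 in [("Evian (~360 mg/L HCO3-)", 360.0), ("soft tap water (~30 mg/L)", 30.0)]:
    print(f"{label}: ~{alkalinity_as_caco3(hco3):.0f} mg/L as CaCO3 "
          f"(commonly cited target ~40 mg/L)")
```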

The particle size distribution your grinder produces is critical, too.

Every coffee enthusiast will rightly tell you that blade grinders are disfavored because they produce a seemingly random particle size distribution; there can be both powder and essentially whole coffee beans coexisting. The alternative, a burr grinder, features two pieces of metal with teeth that cut the coffee into progressively smaller pieces. They allow ground particulates through an aperture only once they are small enough.

There is contention over how to optimize grind settings when using a burr grinder, though. One school of thought supports grinding the coffee as fine as possible to maximize the surface area, which lets you extract the most delicious flavors in higher concentrations. The rival school advocates grinding as coarse as possible to minimize the production of fine particles that impart negative flavors. Perhaps the most useful advice here is to determine what you like best based on your taste preference.

Finally, the freshness of the coffee itself is crucial. Roasted coffee contains a significant amount of CO₂ and other volatiles trapped within the solid coffee matrix: Over time these gaseous organic molecules will escape the bean. Fewer volatiles means a less flavorful cup of coffee. Most cafes will not serve coffee more than four weeks out from the roast date, emphasizing the importance of using freshly roasted beans.

One can mitigate the rate of staling by cooling the coffee (as described by the Arrhenius equation). While you shouldn’t chill your coffee in an open vessel (unless you want fish finger brews), storing coffee in an airtight container in the freezer will significantly prolong freshness.
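An illustrative Arrhenius estimate – using an assumed, round activation energy rather than a measured one – suggests how much a freezer can slow staling.

```python
# Illustrative Arrhenius estimate of how much freezing slows staling.
# The activation energy is an assumed round number; real values vary by compound and roast.
import math

R = 8.314                          # J/(mol*K), gas constant
Ea = 50_000.0                      # J/mol, assumed activation energy for aroma loss
T_room, T_freezer = 293.0, 255.0   # ~20 C kitchen vs. ~ -18 C freezer

ratio = math.exp(Ea / R * (1 / T_freezer - 1 / T_room))   # k_room / k_freezer
print(f"staling proceeds roughly {ratio:.0f}x slower in the freezer")
```

With this assumed activation energy the slowdown comes out to roughly 20-fold, which is why an airtight container in the freezer buys so much freshness.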

So don’t feel bad that your carefully brewed cup of coffee at home never stacks up to what you buy at the café. There are a lot of variables – scientific and otherwise – that must be wrangled to produce a single superlative cup. Take comfort that most of these variables are not optimized by some mathematical algorithm, but rather by somebody’s tongue. What’s most important is that your coffee tastes good to you… brew after brew.

Strong and Safe

City of Columbus-Office of the Mayor

A Message from Mayor Ginther

Residents deserve a safe place to live, and no neighborhood should be an illegal dumping ground. We rolled out a plan in July to address this issue in Columbus. Many departments have been working together to implement it.

In August, 311 received a number of complaints about the appalling conditions of a Hilltop property. The front yard was piled with trash and discarded furniture, and an industrial-size dumpster in the backyard was overflowing.

Code Enforcement Officer James Kohlberg issued a notice of the conditions, then used every means possible to serve the owners. When the landlords still didn’t comply, Code Enforcement worked with the City Attorney’s Office. Under threat of jail time, the owners finally complied.

This is a great example of the collaborative effort being made through Refuse in Public Service in holding landlords and property owners accountable.

Another landlord failed to make necessary repairs to four of his properties, even after having criminal charges filed against him last year. He has now been sentenced to 178 days in jail.

The City Attorney also recently filed the largest public nuisance lawsuit on record against a landlord with an extensive history of continued violations. The Proactive Code Enforcement Team had previously issued notices of civil penalties imposing $1,000 daily, but that threat wasn’t enough. So the City Attorney has gone to court to seek payment, which currently stands at $75,000 – and counting.

We are serious about making our neighborhoods safe. We know that piles of trash can be magnets for crime. We recently replaced 300-gallon dumpsters in alleyways in Hilltop with 90-gallon bins in front of properties when feasible. And we are committed to cutting back plants in alleyways and around streetlights to shine the light on areas that were otherwise obscured. Our plan to fight illegal dumping also includes hot spot mapping, alley cameras and light duty police officers to assist with investigations.

Since we unveiled the plan, calls to 311 with information on troubled sites have increased, and we appreciate our residents’ involvement. The city is working hard – collaboratively – to make our neighborhoods safe and strong.

Farm bill expiration fails young people nationwide

By Cody Smith, codys@cfra.org, Center for Rural Affairs

The farm bill is the primary means by which our government invests in rural communities across the nation – without it, our farmers face uncertainty and our communities lose access to food and other crucial resources. Congress’ failure to pass a new bill, or an extension of the existing legislation, has left young people in rural areas anxious about the future.

Expiration leaves our nation’s beginning farmers and ranchers without access to resources they need to be successful in an aging profession. In 2017, the average age of a farmer was 58 years old. As these farmers hang up their boots, young people must be equipped to handle this transition.

Serious challenges like diminished access to land and capital, an absence of trusted networks, and limited knowledge of available resources can block – and are blocking – their entry into the industry.

Before the farm bill expired – taking away funding and the USDA’s authority to administer it – the Beginning Farmer and Rancher Development Program helped budding agriculturalists overcome these barriers. The Senate draft farm bill offered a new approach, the Farming Opportunities Training and Outreach Program, that would make a mandatory investment in the next generation. But, lawmakers failed to negotiate and pass a final 2018 farm bill in time.

A farm bill expiration is disheartening to young people across the U.S. because it sends a message that Congress failed to support them. Letting these programs expire shows young people that Congress has other priorities – none of which include the next generation of farmers.

Established in 1973, the Center for Rural Affairs is a private, nonprofit organization working to strengthen small businesses, family farms and ranches, and rural communities through action-oriented programs addressing social, economic, and environmental issues.

The Conversation

Charities take digital money now – and the risks that go with it

October 3, 2018

Authors

Philip Hackney

Associate Professor of Law, University of Pittsburgh

Brian Mittendorf

Fisher Designated Professor of Accounting and Chair, Department of Accounting & Management Information Systems (MIS), The Ohio State University

Disclosure statement

The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Partners

The Ohio State University provides funding as a founding partner of The Conversation US.

University of Pittsburgh provides funding as a member of The Conversation US.

Many large charities, despite being entrusted with accepting and managing funds that benefit the public, are accepting bitcoin and other cryptocurrencies – volatile forms of digital money – as donations.

Take, for example, the Silicon Valley Community Foundation, one of the country’s largest charities. It held a third of its US$13.5 billion investments – nearly $4.5 billion – in digital assets according to its annual financial statement.

As experts in the tax and financial issues charities face, we have spent considerable time examining what got nonprofits dabbling in digital currencies in the first place and what could go wrong as a result.

Volatile juncture

While the Silicon Valley Community Foundation probably holds more cryptocurrency than any other charity, it is not unique. Fidelity Charitable, the top U.S. charity in terms of the donations it amasses annually, said digital money was its fastest-growing category in 2017.

Smaller charities are also accepting donations in bitcoin, as are new charities focused solely on cryptocurrency. The giving of digital money surged in 2017, just as the market for these newfangled assets boomed.

Bitcoin, the most common cryptocurrency, rose by 1,318 percent against the U.S. dollar in 2017.

XRP, the second-most popular kind of digital money, gained 36,018 percent over the course of the year.

These gains gave way to massive losses in the first eight months of 2018, when digital currencies plunged more sharply than dot-com stocks did in the crash of the early 2000s.

Some charities that received massive cryptocurrency donations in 2017 may not have been able to convert them into regular money before they lost much of their value the next year. Silicon Valley Community Foundation, for example, disclosed in its 2017 audit report that restrictions would prevent more than 45 percent of its investment assets from being converted to cash at any point in 2018.

The fact that charities only disclose their financial data once a year means that the scale of their at-risk wealth, as of now, is unknown.

Appreciated assets

Why would charities accept digital money in the first place? The answer has to do with changes in philanthropy in recent years.

An increasing share of charitable giving is coming from a small group of extremely wealthy donors as the percentage of Americans who donate to nonprofits declines.

And mega-donors don’t always give charities money. Instead, they pass along assets, such as stocks, bonds and bitcoin. That approach to giving benefits them in two ways. To see why, it helps to understand how these transactions work.

Say a wealthy couple gives stock in a company that they bought at $1 per share. This was such a good investment that those shares are now worth $1,000 each. Upon donating, the couple gets a $1,000-per-share deduction on that year’s tax return with another bonus: never having to pay taxes on the $999 gain in the value of that donated stock as income. That $1,000 can then offset the income tax on $1,000 of wages. Had the couple sold the stock and donated the same $1,000, however, the donation would merely offset the gain from the sale of the stock.
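To put rough numbers on that example, the sketch below assumes a 37 percent ordinary income rate and a 20 percent long-term capital gains rate; both are illustrative round figures, not figures from the article and not tax advice.

```python
# Illustrative per-share arithmetic for the example above.
# Tax rates are assumed round numbers (37% ordinary income, 20% long-term capital gains).
cost_basis, market_value = 1.0, 1000.0
income_tax_rate, cap_gains_rate = 0.37, 0.20

# Donate the share directly: deduct full market value, never realize the gain.
donate_stock_benefit = market_value * income_tax_rate

# Sell first, then donate the proceeds: the capital-gains tax eats into the benefit.
cap_gains_tax = (market_value - cost_basis) * cap_gains_rate
sell_then_donate_benefit = market_value * income_tax_rate - cap_gains_tax

print(f"donate stock directly: ${donate_stock_benefit:.2f} of tax avoided per share")
print(f"sell, then donate cash: ${sell_then_donate_benefit:.2f} per share")
```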

This opportunity means that many wealthy Americans benefit from donating assets that have become more valuable over time rather than by simply giving money.

Consider how the top executives of Facebook have supported causes.

Because Facebook has not paid dividends, the wealth accrued to its biggest shareholders is held in the form of stock that has gained value over time.

It should be no shock, then, that the company’s leaders like CEO Mark Zuckerberg and Chief Operating Officer Sheryl Sandberg have sought to donate some of their Facebook stock to charity rather than giving cash.

Since 2010, Zuckerberg and his wife Priscilla Chan have given Facebook stock worth more than $1.75 billion to a donor-advised fund at the Silicon Valley Community Foundation – essentially a charitable savings account that, on a large scale, operates like a foundation without having to follow the rules that foundations must observe. The couple gave another nearly $2 billion of their Facebook shares to a foundation associated with their Chan Zuckerberg Initiative in 2017.

For her part, Sheryl Sandberg donated over $100 million in Facebook stock in both 2016 and 2017.

Facebook’s major shareholders are by no means the only ones using this strategy. GoPro, Apple and Microsoft executives have also moved massive amounts of their stock into charitable accounts.

Once the meteoric rise in the value of bitcoin and similar assets created the opportunity for investors to reap huge charitable tax breaks, the urge to donate digital money was only natural.

Charitable middlemen

The nonprofits that get these donations, meanwhile, need money they can spend on salaries, rent and other expenses.

And some financial assets are hard for charities to accept and turn into regular money. Your local food bank, for instance, may not know what to do with a stake in a private equity fund or Ethereum cryptocurrency if a wealthy neighbor donated in one of those forms rather than writing a check or using a credit card. That has led to the rise of a new kind of middleman with specialized expertise.

Fidelity Charitable got 61 percent of its donations in assets other than cash in 2017. Other prominent donor-advised fund sponsors saw a similar result. Schwab Charitable obtained over 70 percent of its 2017 donations in non-cash assets. In the last month of the year, that figure was 80 percent for Vanguard Charitable.

These fast-growing charities bring a key skill: harvesting capital gains. That is, they accept tax-advantaged donations, hold onto that wealth, and – in most cases – transfer the money derived from those assets to the donor’s charities of choice when the donor asks.

Fidelity, in fact, advertises its ability to accept privately held business interests and other assets that are not publicly traded. Schwab similarly notes its ability to convert such things as stakes in private companies, fine art and bitcoin.

The Silicon Valley Community Foundation is emblematic of this phenomenon taking hold in more traditional charitable organizations. It has begun marketing itself as a repository for complex assets like stock held in companies before they go public and digital currencies.

The consequences

To be sure, there is nothing improper about these practices. Giving investment assets gradually became commonplace after the tax code established the favorable treatment of donations to charitable organizations in 1917. The growth in giving digital money can be traced to the IRS ruling in 2014 that the government sees it as a form of investment property.

However, many of these assets are extremely volatile. As the upswings of 2017 and downturns in 2018 demonstrated, digital money’s value is subject to big changes. This can be a problem when donors give away assets right before their value crashes – or a boon when those gifts precede a sharp rise.

Either way, wealthy donors get tax deductions. And when gifts precede crashes, it can take more tax revenue out of government coffers than charities get in theirs.
