Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
Robotic Arena – January 12, 2019 – Wrocław, Poland
RoboDEX – January 16-18, 2019 – Tokyo, Japan
Let us know if you have suggestions for next week, and enjoy today’s videos.
Metamaterials seem like a technology out of science fiction. Because of the way these engineered materials manipulate electromagnetic waves, they can render objects effectively invisible.
While invisibility cloaks are a gee-whiz application, metamaterials now offer real-world commercial applications such as new antenna technologies for mobile phones. To get to the point where metamaterials are not just a curiosity, but also a viable commercial technology, they have had to evolve a new set of tricks.
One example is the work of a team of researchers from Lawrence Livermore National Laboratory (LLNL) and the University of California San Diego (UCSD). They have used so-called mechanical metamaterials—which exhibit unique mechanical properties that do not exist in nature—to create a novel material that can change from rigid to flexible in response to a magnetic field. The researchers expect this new material could usher in new approaches to smart wearables and soft robotics.
In 2016, Australian scientists announced a matchstick-size brain implant that can be slipped underneath the skull via blood vessels—similar to how a pacemaker’s leads are eased into the heart. The stent electrode, or “stentrode,” recorded high-quality brain signals in freely-moving sheep for six months.
Now, the stentrode adds another ability to its arsenal: In addition to monitoring brain signals, the device can communicate with the brain using gentle pulses of electricity.
In a proof-of-concept study published this week in Nature Biomedical Engineering, the team used stentrodes to electrically stimulate the motor cortex of sheep brains, eliciting movement in the animals’ facial muscles and limbs.
That ability suggests the device could be used to perform deep brain stimulation (DBS) in humans, a form of direct electrical stimulation shown to be a promising treatment for conditions such as Parkinson’s disease, depression, and epilepsy.
Implanting traditional DBS electrodes requires drilling a hole through the skull or removing a portion of it. The stentrode, on the other hand, is implanted into the brain by snaking a catheter underneath the skull via a vein in the neck. Pacemakers are similarly implanted in the heart in a procedure that takes about an hour and requires only local anesthetic—the patient is typically awake the whole time.
“Our technology potentially is an avenue to achieve deep brain stimulation without performing open brain surgery,” says Thomas Oxley, CEO and founder of Synchron, the Silicon Valley-based company developing the technology.
That could make DBS a more accessible and less-expensive treatment option for patients. And unlike some brain implants, the stentrode has caused no brain inflammation or rejection in studies so far.
In a side-by-side comparison in sheep, the stentrode stimulated brain tissue as well as a traditional implant. During the study, the team also discovered that the direction in which the electrode is facing inside the blood vessel can affect how much energy is required to stimulate the brain—an important piece of information to consider as the trials move into human studies.
In 2016, Oxley predicted the first human trials would begin in late 2017. That didn’t happen, and he now declines to put a date on the start of clinical trials. Like a pacemaker, the stentrode implant is permanent, which makes human trials a significant undertaking.
“The burden is on us to get the technology to a position where it’s really safe when we do that first [human] implant,” says Oxley. “We’re $17 million and 6 years into this program and only now getting really close to our first in-human trial.”
That first human trial, focused on safety, will enroll patients with paralysis, says Oxley. The company’s initial goal is to develop a brain-computer interface that would allow individuals with paralysis to mentally control devices such as wheelchairs, prosthetic limbs, or computers.
Eventually, the ability to stimulate the brain will be a valuable addition to a brain-computer interface, he adds. “An ideal brain-computer interface would contain a closed loop with a feedback circuit, so we could very quickly provide information back to the brain.”
“By offloading intensive processing tasks, such as fast Fourier transform (FFT) and modulation/coding, to USRP’s built-in FPGA, we increased the determinism, signal integrity, and reliability of the system whilst freeing up the host processor for data logging and simpler processing tasks such as visualising power spectrum and constellation diagrams.”
– David Grace, University of York, Professor
We needed to implement a cost-effective low-altitude aerial testbed that can verify novel wireless communications applications between the airborne node and ground terminals while meeting tight constraints for payload weight, volume and power consumption.
We combined the rapid prototyping capabilities of the LabVIEW Communications System Design Suite with the processing power of USRP RIO to directly drive tailored antenna elements in a highly flexible wireless testbed. The testbed is carried onboard a tethered aerial platform Helikite that can be deployed for hours, up to 400 meters altitude, which allows us to trial multiple applications.
Yi Chu – University of York, Communication Technologies Research Group
David Grace – University of York, Professor
Today, many people see access to broadband multimedia wireless services as a utility. At the beginning of 2017, 98% of the UK population had mobile coverage, driven by regulatory requirements that an operator covers at least 95% of the population. However, only 70% of the UK landmass is covered, as Figure 1 illustrates.
Figure 1. Mobile Coverage Across the Northern United Kingdom and North York Moors National Park
Remote areas of Scotland, British National Parks, and Areas of Outstanding Natural Beauty are underserved with poor quality or no Internet and, in many cases, no mobile coverage. This occurs for two main reasons: the cost of delivery and planning restrictions. The UK government has committed £440M to provide broadband to 600,000 rural homes (approximately £750 per home) [1]. Extending 800 MHz wireless coverage to 99% of rural areas was anticipated to cost £270M in 2012 [2]. Clearly, our present approaches are costly.
The lack of suitable wireless infrastructure creates a technological divide. Smart cities are increasingly receiving funding with creative ideas given for the Internet of Things (IoT), monitoring, and control. In rural areas with limited coverage, smart villages and farms are still a long way off, along with applications such as soil mapping, fertiliser and pest control, fleet management, and precision livestock farming, including sheep tracking, smart feeding, and milking of cows [3].
High-altitude platforms (HAPs) can solve the civil planning and cost issues of serving such areas by filling coverage gaps and working alongside existing and future terrestrial infrastructures in the cities. HAPs, taken here to mean unmanned aircraft located at an altitude of 17 km–22 km, are poised to become reality within the next two years given several well-funded activities and advancements in the key enabling technologies of materials, battery and energy capture.
Tackling the Challenges
Building wireless base stations on HAPs, compared with terrestrial deployments, delivers many new challenges, including:
– Maintaining long-term operations in the stratosphere without a constant power supply
– Ensuring aerial terminals coexist with the terrestrial networks without causing interference
– Utilising the limited wireless backhaul bandwidth between aerial access points and ground core networks
– Planning the cell coverage of the aerial base stations
The Communication Technologies Research Group at the University of York has been conducting research to tackle such challenges since 1999 through theoretical and practical work. In 2016, the university launched the Centre for High Altitude Platform Applications (CHAPA) to capitalise on this new generation of delivery platforms. It is important to have a readily accessible test facility for early prototyping and trials. CHAPA started conducting wireless experiments using a USRP-based solution that is attached to a 21 m³ Helikite – a tethered helium balloon that can carry a 10 kg payload up to 400 meters altitude. We used USRP (Universal Software Radio Peripheral) devices to drive our bespoke antenna elements for our various trials, programmed with the LabVIEW Communications System Design Suite.
We can use this floating testbed to efficiently prove novel designs. We can then extend these designs to deliver proof-of-concept payloads that we can deploy and test on the HAPs themselves, which currently have restricted availability and significantly higher running costs.
Prior to the announcement of LabVIEW Communications, we used the open source software package GNURadio with USRP for a few months. We started with zero experience with either software. However, the graphical user interface of LabVIEW Communications, the comprehensive tutorials on ni.com, and the NI technical support channels, provided us with a faster learning curve to implement our applications. Using LabVIEW Communications, we could also program the onboard FPGA without prior experience of VHDL or Verilog, which helped us to easily implement advanced baseband signal processing on the FPGA. By offloading intensive processing tasks, such as fast Fourier transform (FFT) and modulation/coding, to USRP’s built-in FPGA, we increased the determinism, signal integrity, and reliability of the system whilst freeing up the host processor for data logging and simpler processing tasks such as visualising power spectrum and constellation diagrams.
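The host-side spectrum visualisation mentioned above can be sketched in a few lines of Python. This is a conceptual sketch only; the sample rate, FFT size, and synthetic test tone below are illustrative assumptions, not values from the York testbed:

```python
import numpy as np

def power_spectrum_db(iq, fs, nfft=1024):
    """Averaged power spectrum of complex IQ samples, in dB (Welch-style)."""
    nseg = len(iq) // nfft
    segs = iq[:nseg * nfft].reshape(nseg, nfft)     # split into FFT-size segments
    win = np.hanning(nfft)                          # taper to reduce spectral leakage
    spec = np.fft.fftshift(np.fft.fft(segs * win, axis=1), axes=1)
    psd = np.mean(np.abs(spec) ** 2, axis=0)        # average over segments
    freqs = np.fft.fftshift(np.fft.fftfreq(nfft, d=1.0 / fs))
    return freqs, 10 * np.log10(psd + 1e-12)

# Synthetic check: a complex tone at +100 kHz, sampled at 1 MS/s
fs = 1e6
t = np.arange(65536) / fs
iq = np.exp(2j * np.pi * 100e3 * t)
freqs, psd_db = power_spectrum_db(iq, fs)
peak_freq = freqs[np.argmax(psd_db)]                # lands within one bin of 100 kHz
```

On the real system this averaging and display runs on the host, while the FFT itself can live on the FPGA, which is exactly the division of labour the quote describes.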
Operating USRP From Above
The main software defined radio (SDR) kits we used were the NI USRP-2943R and the Ettus Research USRP-N210. Both kits are connected to the ground-based host PCs through Ethernet (the USRP-2943R needs an SFP-Ethernet adapter). To ensure the USRP devices operated correctly while airborne, we had to account for the weight of the payload, power consumption and the connectivity to the host processor. We measured the voltage/current of each USRP device while running applications in the lab and equipped each with an appropriate battery while on the Helikite. To reduce weight and ensure the Helikite could fly safely, we removed the outer cases of some USRP devices.
Figure 2. Helikite, Trailer, and Payload
To ensure fast, secure connectivity between airborne USRP devices and the host PCs running LabVIEW Communications on the ground, we used fibre Ethernet. We developed a second winch (separate from the tether winch) to store the fibre; a clutch keeps the fibre loosely tensioned against the tether and prevents the fragile fibre from stretching. The fibre Ethernet can achieve 1 Gb/s throughput, which can support multiple USRP devices operating at full duplex.
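A rough budget shows what "multiple USRP devices at full duplex" means for sample rates on a shared 1 Gb/s link. The 32-bit complex samples (16-bit I plus 16-bit Q) and the ~10 percent protocol overhead are our assumptions, not figures from the testbed:

```python
def max_rate_per_stream(link_bps, n_devices, full_duplex=True,
                        bits_per_sample=32, overhead=0.10):
    """Per-stream sample-rate budget for USRPs sharing one Ethernet link.

    Assumes 32-bit complex samples and a fixed fraction of the link lost
    to protocol overhead -- both illustrative assumptions.
    """
    usable_bps = link_bps * (1 - overhead)
    streams = n_devices * (2 if full_duplex else 1)  # TX + RX per device
    return usable_bps / streams / bits_per_sample

# Three full-duplex USRPs sharing the 1 Gb/s fibre link
rate = max_rate_per_stream(1e9, 3)   # ~4.7 MS/s per stream
```

The point of the sketch: a single device can saturate gigabit Ethernet on its own at high sample rates, so sharing the fibre means each stream must run well below the hardware's maximum.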
Figure 3. Helikite Operating at 400 m Altitude
With the testbed in place, we can operate a wide variety of experiments and application trials. Here are three examples:
Example 1. Measuring Elevation Angle Dependent Attenuation
It is always good to know the quality of radio coverage before deploying a base station. In contrast to the well-established terrestrial propagation models, aerial-terrestrial propagation still requires further exploration. We carried out a field trial to measure the signal propagation between aerial and terrestrial terminals.
Figure 4. Aerial-Terrestrial Propagation Measurement
The Helikite carried a USRP-N210 as the receiver whilst, on the ground, a trolley carried a USRP-2943R acting as a mobile transmitter (a transmitter-based experiment on the ground causes less potential interference to other users on the same band). We have measured several propagation scenarios at different elevation angles and distances including line-of-sight (LOS), non-LOS (NLOS) shadowed by buildings, partial LOS through trees, and rich reflection NLOS in residential areas. We are planning further field trials to collect more data to generate appropriate propagation models for each scenario.
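Turning such measurements into a propagation model often starts with a simple log-distance fit, PL(d) = PL0 + 10·n·log10(d/d0). Here is a sketch against synthetic data; the path-loss exponent and intercept below are hypothetical, since the article does not publish its measurement values:

```python
import numpy as np

def fit_log_distance(d_m, pl_db, d0=1.0):
    """Least-squares fit of PL(d) = PL0 + 10*n*log10(d/d0) to measurements."""
    x = 10 * np.log10(np.asarray(d_m) / d0)
    A = np.column_stack([np.ones_like(x), x])        # columns: intercept, slope
    (pl0, n), *_ = np.linalg.lstsq(A, np.asarray(pl_db), rcond=None)
    return pl0, n

# Synthetic LOS-like data: exponent n = 2.1, intercept 40 dB, light shadowing
rng = np.random.default_rng(0)
d = np.linspace(50, 500, 40)                         # link distances in metres
pl = 40 + 10 * 2.1 * np.log10(d) + rng.normal(0, 0.5, d.size)
pl0_est, n_est = fit_log_distance(d, pl)             # recovers ~40 dB and ~2.1
```

A separate fit per scenario (LOS, tree-shadowed, NLOS) and per elevation-angle bucket yields the family of models the trials are intended to produce.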
Example 2. Improving Spectral Efficiency by Physical Layer Network Coding
Given the nature of HAPs (unlike our Helikite test platform), cabled connections to the terrestrial core network are unfeasible. So, the technologies that improve the efficiency of the limited wireless spectrum are particularly beneficial to aerial-terrestrial communications. Our EPSRC-funded NetCoM project has investigated improving backhaul/access link spectral efficiency by using physical layer network coding (PNC). PNC exploits the additive nature of the RF wave to compress the received signals from multiple users into coded data and uses appropriate side information to ensure desired data is recoverable at the destinations. We have tested a simple two-way-relay (TWR) channel using three USRP devices and the Helikite testbed.
Figure 5. TWR Channel
The TWR channel simulates the scenario where two users exchange data through a relay because the direct link between them does not exist. In the field trial, we have two USRP devices on the ground (as users) transmitting different pilot data simultaneously on the same carrier frequency to a USRP (as relay) on the Helikite. We calculate the bit-error-rate (BER) performance directly from the PNC-encoded superimposed signal. The results show that the aerial experiments have achieved similar BER performance as the indoor experiments (LOS existed in both scenarios), which indicate the possibility of applying PNC technology on aerial platforms.
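The relay-side PNC mapping can be illustrated with baseband BPSK in a few lines. This is a toy model assuming perfect synchronisation and equal channel gains, luxuries the over-the-air experiments do not enjoy:

```python
import numpy as np

rng = np.random.default_rng(1)
n_bits = 10000
a = rng.integers(0, 2, n_bits)          # user A's bits
b = rng.integers(0, 2, n_bits)          # user B's bits

# BPSK: bit 0 -> -1, bit 1 -> +1; the relay receives the sum plus noise
s = (2 * a - 1) + (2 * b - 1) + rng.normal(0, 0.3, n_bits)

# PNC mapping at the relay: |sum| near 2 means the bits agree (XOR = 0),
# |sum| near 0 means they differ (XOR = 1)
xor_est = (np.abs(s) < 1).astype(int)
ber = np.mean(xor_est != (a ^ b))       # BER of the PNC-encoded stream
```

Each destination then recovers its partner's bits by XORing the relayed XOR stream with the bits it transmitted itself, which is how PNC halves the number of time slots the two-way exchange needs.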
Figure 6. Our LabVIEW Communications GUI displays the superimposed constellation at the Relay Node.
Example 3. Avoiding Interference by Beamforming
Delivering services using shared HAP/terrestrial spectrum will require highly dynamic HAP coverage along with the ability to specifically protect areas from HAP-generated interference. Phased array antennas on the HAP can help achieve tight control of interference; their electronically steerable beams can also track the mobility of the users to provide consistent quality of service (QoS).
Inspired by an NI case study from Imperial College, which discussed direction finding and beamforming, we extended the phased array antenna testbed to use the array to transmit steerable beams and tested the system both in the lab and on the Helikite.
We implemented a similar approach to calibrate the phases of the transmitted signals as Imperial College. As Figure 7 illustrates, we connected the TX/RX port of each USRP daughterboard to one antenna array element and connected the RX2 ports to the phase synchronisation tone generated by the same USRP through splitters. However, unlike Imperial College, we operated the USRP devices in full duplex mode by setting the TX/RX ports to transmit and the RX2 ports to receive. In this setup, the signal transmitted by TX/RX port couples into the RX2 port so the RX2 ports receive both the synchronisation tone and the tone sent by the TX/RX port. The two tones are on different frequencies so that they can be separated by a digital filter. We first correct the phase ambiguities across the RX2 ports of all daughterboards, and then calibrate the signals of TX/RX ports based on the observed signals at the phase synchronised RX2 ports. All the USRP devices driving the antenna array elements are connected to the common 10 MHz and 1 PPS reference signals.
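Once the elements share a phase reference, steering the beam reduces to applying a per-element phase ramp. The sketch below uses a generic uniform linear array; the element count and spacing are illustrative, not the York array's geometry:

```python
import numpy as np

def steering_phases(n_elem, spacing_wl, theta_deg):
    """Per-element TX phases (radians) steering a uniform linear array
    toward theta degrees from broadside; spacing given in wavelengths."""
    d = spacing_wl * np.arange(n_elem)               # element positions in wavelengths
    return -2 * np.pi * d * np.sin(np.radians(theta_deg))

def array_factor_db(phases, spacing_wl, scan_deg):
    """Normalised array factor (dB) over a set of observation angles."""
    d = spacing_wl * np.arange(len(phases))
    af = [abs(np.sum(np.exp(1j * (2 * np.pi * d * np.sin(np.radians(t)) + phases))))
          for t in scan_deg]
    return 20 * np.log10(np.array(af) / len(phases) + 1e-9)

# Four half-wavelength-spaced elements steered to +20 degrees
phases = steering_phases(4, 0.5, 20.0)
scan = np.linspace(-90, 90, 721)
af_db = array_factor_db(phases, 0.5, scan)
peak_angle = scan[np.argmax(af_db)]                  # beam peak lands at 20 degrees
```

The calibration procedure described above is what makes these computed phases meaningful: without a common phase reference across the daughterboards, per-channel offsets would swamp the deliberate phase ramp.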
Figure 7. Four-Element Phased Array on the Helikite
We have tested a four-element phased array with the fibre Ethernet backhaul on the Helikite, using two USRP-2943R devices to drive the four array elements and one USRP-2943R to source the reference signals. A USRP-N210 generates the phase synchronisation tone, which is distributed to the four daughterboards through a splitter. To make sure the payload mass does not exceed the capability of the Helikite, we removed the case of the USRP-N210 to keep the payload weight just below 10 kg. Each USRP device is powered by its own battery, and all USRP devices are connected to one Ethernet adapter so that the host PC on the ground can control them over the fibre backhaul using LabVIEW Communications.
During the field trial, we clearly observed the phased array tracking the direction of the signal arriving from a moving source (a USRP-B210) on the ground, and we observed the received signal power change as we steered the TX beam of the phased array in different directions.
We plan to expand our Helikite testbed by investing in a larger, 100 m³ Helikite that can carry a 30 kg payload, so we can complete larger-scale experiments. The performance of our current applications is limited by the need to process data on the ground-based PC rather than the FPGA. The recently announced standalone USRP-2974 is equipped with an onboard real-time processor, which would allow us to take full advantage of the built-in FPGA for baseband signal processing, even when the device isn’t tethered to a PC. This would significantly improve the throughput of our applications and allow us to operate our testbed at higher altitudes. We look forward to testing this out soon.
[1] S. Priestley, C. Baker, Superfast Broadband Coverage in the UK, House of Commons Briefing Paper CBP06643, 9 March 2017.
[2] Real Wireless, Technical Analysis of the Cost of Extending an 800 MHz Mobile Broadband Coverage Obligation for the United Kingdom, Technical Report, January 2012.
[3] S. Romeo, Overview on Smart Farming: Technology, Application Areas, Market Landscape and Entrepreneurship, Beecham Research, 2015.
WASHINGTON — The Environmental Protection Agency acted again Thursday to ease rules on the sagging U.S. coal industry, this time scaling back what would have been a tough control on climate-changing emissions from any new coal plants.
The latest Trump administration targeting of legacy Obama administration efforts to slow climate change comes in the wake of multiplying warnings from the agency’s scientists and others about the accelerating pace of global warming.
In a ceremony Thursday at the agency, acting EPA administrator Andrew Wheeler signed a proposal to dismantle a 2015 rule that any new coal power plants include cutting-edge techniques to capture the carbon dioxide from their smokestacks.
Wheeler called the Obama rules “excessive burdens” for the coal industry.
“This administration cares about action and results, not talks and wishful thinking,” Wheeler said.
Asked about the harm that coal plant emissions do to people and the environment, Wheeler responded, “Having cheap electricity helps human health.”
Janet McCabe, an EPA air official under the Obama administration, and others challenged that. McCabe in a statement cited the conclusion of the EPA’s own staff earlier this year that pending rollbacks on existing coal plants would cause thousands of early deaths from fine soot and dangerous particles and gases.
The EPA was “turning its back on its responsibility to protect human health,” McCabe said Thursday.
Environmentalists, scientists and lawmakers were scathing, saying the Trump administration was undermining what they said should be urgent efforts to slow climate change.
The EPA and 12 other federal agencies late last month warned that climate change caused by burning coal, oil and gas already was worsening natural disasters in the United States. It would cause hundreds of billions of dollars in damage each year by the end of the century, the government’s National Climate Assessment said.
“This proposal is another illegal attempt by the Trump administration to prop up an industry already buckling under the powerful force of the free market,” Sen. Sheldon Whitehouse, a Rhode Island Democrat and member of the Senate Environment and Public Works Committee, said in a statement.
“Did the EPA even read the National Climate Assessment?” Whitehouse asked.
It’s unclear whether the new policy boost will overcome market forces that are making U.S. coal plants ever more unprofitable.
Competition from cleaner, cheaper natural gas and other rival forms of energy has driven down coal use in the United States to its lowest level since 1979, the Energy Information Administration said this week. This year will see the second-greatest number of U.S. closings of coal-fired power plants on record.
Senate Majority Leader Mitch McConnell, a Kentucky Republican, said the EPA’s action Thursday was “targeting another regulation that would have made it nearly impossible to build any new plants.”
Citing that and other Obama administration moves to tamp down emissions from coal-fired power plants in the national electrical grid, McConnell called the proposal “a crucial step toward undoing the damage and putting coal back on a level playing field.”
Other Trump administration initiatives rolling back climate change efforts would undo an Obama plan intended to shift the national electrical grid away from coal and toward cleaner-burning solar and wind power, and would relax pending tougher mileage standards for cars and light trucks.
Jay Duffy, a lawyer with the Clean Air Task Force environmental nonprofit, called the level-playing field argument of the administration and its supporters “laughable.”
“In every rulemaking, they’re placing their thumbs on the scale to prop up coal, at the expense of public health and the environment,” Duffy said.
Speaking alongside Wheeler at a news conference, Michelle Bloodworth of the coal industry group America’s Power contended the new rollback could throw a lifeline to domestic coal-fired power producers.
“It does appear that this proposal would make it feasible for new coal plants” to be built, Bloodworth said.
(KATOWICE, Poland) — Divisions deepened at the U.N. climate talks Thursday, pitting rich nations against poor ones, oil exporters against vulnerable island nations, and those governments prepared to act on global warming against those who want to wait and see.
The stakes were raised by a scientific report warning that the most ambitious warming target in the 2015 Paris climate accord is getting increasingly difficult to achieve. Fresh figures released this week showed that emissions of heat-trapping carbon dioxide saw their biggest jump in seven years, making the task of one day cutting those emissions to zero even more challenging.
Negotiators at the climate talks in Katowice, Poland, still disagree on the way forward but have just a few days to finish their technical talks before ministers take over.
“It’s going to be a big challenge,” said Amjad Abdulla, the chief negotiator for the Alliance of Small Island States. “We are going to forward the sticky issues to next week.”
Among the splits that need to be overcome before the conference ends on Dec. 14 are:
— The question of what kind of flexibility developing countries will have when it comes to reporting their emissions and efforts to curb them.
The issue is central to the Paris rulebook, which countries have committed to finalizing this year. Environmental activists insist that countries such as Brazil, with its vast Amazon rainforest, and China, the world’s biggest polluter, should have to provide hard data on emissions and not be treated like poorer nations who don’t have the ability to do a precise greenhouse tally.
Complicating matters, a group of rich countries that includes the United States and Australia is seeking the same kind of flexibility as developing nations.
— Several oil exporting countries have objected to the idea of explicitly mentioning ways in which global warming can be kept at 1.5 degrees Celsius (2.7 degrees Fahrenheit). The Intergovernmental Panel on Climate Change, a body made up of scientists from around the world, recently proposed “policy pathways” that would achieve this goal, which foresee phasing out almost all use of coal, oil and gas by 2050.
But Saudi Arabia and some of its allies say it would be wrong to cite those pathways in a text about future ambitions.
— Developing countries are frustrated that rich nations won’t commit themselves to providing greater assurances on financial support for poor nations facing hefty costs to fight the effects of climate change. European governments argue that they are bound by budget rules that limit their ability to allocate money more than a few years in advance.
What’s clear is that few countries are moving in the right direction to halt global warming.
“The first data for this year point to a strong rise in the global CO2 emissions, almost all countries are contributing to this rise,” said Corinne Le Quere, who led the team that published the emissions study this week.
“In China, it’s boosted by economic stimulation in construction. In the U.S., an unusual year, cold winter and hot summer, both boosting the energy demand. In Europe, the emissions are down but less than they used to be, and that’s because of growing emissions in transport that are offsetting benefits elsewhere,” she told the meeting in Katowice.
Le Quere, the director of the Tyndall Centre for Climate Change Research at the University of East Anglia in England, noted some positive news.
“We have renewable energy,” she said. “It is displacing coal in the U.S. and in Europe, and it is expanding elsewhere.”
“It’s not enough to meet the growing energy demand in developing countries in particular,” she said. “But the industry is growing.”
Host nation Poland, which depends on coal for 80 percent of its energy needs, is among those demanding help for workers in coal and gas industries who could lose their jobs as nations shift to cleaner energy.
In light of the deep divisions over how to best fight climate change, U.N. Secretary-General Antonio Guterres is considering returning to Katowice to push for a strong declaration.
“It very much remains a possibility,” U.N. spokesman Stephane Dujarric said Thursday. “If he feels his presence will be useful, he will go back. But no decision has yet been made.”
Around the world, the deployment of 5G is well underway as providers build infrastructure and government agencies allocate spectrum for the next generation of wireless. In some countries, however, the going is getting tough. In Germany, for example, network operators and industry groups have harshly criticized the Federal Network Agency over its upcoming 5G spectrum auction.
At the heart of the problem is whether Germany’s Federal Network Agency—Bundesnetzagentur, or BNetzA for short—has unrealistic expectations about the kind of coverage providers can be expected to implement with the spectrum up for auction. One industry group, the Global System for Mobile Communications Association (GSMA), has argued that BNetzA’s spectrum licensing conditions could threaten Germany’s ability to develop a 5G network that will allow its manufacturing, industrial, and technology sectors to remain competitive.
The auction will allocate frequencies in the 2 gigahertz and 3.4 to 3.7 GHz bands in 5 MHz blocks. Operators will bid on individual blocks for specific geographic regions, a process that will continue over multiple rounds until there are no higher bids. The application period for the auction—which has been scheduled for spring 2019—opened on November 26, and interested bidders have until January 25, 2019 to submit an application. Each operator then has the exclusive right to use the 5 MHz blocks it won in those specific regions.
In a statement to IEEE Spectrum, BNetzA said that mobile radio frequencies are allocated in a technology-neutral manner, and that the agency does not recommend specific mobile radio standards. But it’s possible that BNetzA’s tech-agnostic attitude in using this spectrum to create a country-wide 5G network could backfire by encouraging operators to build out advanced 4G networks instead of true 5G networks.
BNetzA’s technology and standards agnosticism doesn’t seem to sit well with the GSMA. In one response to the BNetzA’s announcement that much of the C band (3.4-3.8 GHz) would be released for 5G infrastructure development in the upcoming auctions, the GSMA says the proposed coverage obligations “appear to disregard the laws of physics.”
In a follow-up response, the GSMA explains—after claiming that the BNetzA’s license conditions could “poison” 5G investment—exactly what coverage obligations the industry group has problems with. For any operator possessing a license for a chunk of C band spectrum, BNetzA will obligate them to cover 98 percent of the population, as well as provide coverage along travel routes. That includes not only highways, but routes such as country roads, waterways, and railroads. All coverage must offer at least 100 megabits per second downlink and 10 millisecond latencies—though 5G networks are expected to provide much higher downlink rates, especially in dense urban areas. And providers must meet these coverage obligations by 2022.
BNetzA requires the same 100 Mb/s downlink and 10 ms latency for current 4G LTE networks. However, LTE’s lower frequencies propagate much more easily over long distances.
The trouble with the C band, as with all higher-frequency signals, is how poorly it propagates. The band’s inability to penetrate walls, for example, is one of the biggest problems engineers face when developing and deploying 5G networks. Even in wide-open space, signals in the C band don’t travel well beyond a few kilometers because of moisture in the atmosphere. Rain only exacerbates the problem.
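A back-of-envelope free-space calculation (ignoring walls, foliage, and rain entirely) shows the frequency penalty on its own; the 5 km link distance here is just an illustrative choice:

```python
import math

def fspl_db(freq_hz, dist_m):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

# The same 5 km link at 800 MHz (LTE low band) versus 3.5 GHz (C band)
loss_800 = fspl_db(800e6, 5000)
loss_3500 = fspl_db(3.5e9, 5000)
delta = loss_3500 - loss_800        # ~12.8 dB extra loss at 3.5 GHz
```

Free space alone costs the C band roughly 13 dB relative to 800 MHz at any given distance; penetration losses and rain fade come on top of that, which is the physics behind the GSMA's complaint.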
The network operators, which are members of the GSMA, are also arguing that the investment required to provide this much wide-area coverage using the C band will vastly outpace the value gained. Deutsche Telekom, in a statement to IEEE Spectrum, said in part that “any regulation in the direction of mandatory national roaming is hostile to investment, especially for the supply of rural areas.” A spokesperson from Telefonica explained that BNetzA’s speed and area demands will likely result in widespread advanced 4G networks covering these areas, rather than true 5G networks.
Providers in countries around the world recognize that a 5G network requires multiple frequencies used in concert in order to succeed. Aside from C band spectrum, 600 MHz spectrum and millimeter waves (60 GHz) will play vital roles in providing extremely high data rates across wide areas.
Germany has been criticized in past years for communications infrastructure that lags behind the rest of Europe. In its effort to build a robust, country-wide 5G network by requiring near-complete coverage on higher frequency bands, however, BNetzA may find that the cost of meeting those obligations keeps the country at the back of the 5G pack for some time to come.
The system, called AlphaZero, began its life last year by beating a DeepMind system that had been specialized just for Go. That earlier system had itself made history by beating one of the world’s best Go players, but it needed human help to get through a months-long course of improvement. AlphaZero trained itself—in just 3 days.
The research, published today in the journal Science, was performed by a team led by DeepMind’s David Silver. The paper was accompanied by a commentary by Murray Campbell, an AI researcher at the IBM Thomas J. Watson Research Center in Yorktown Heights, N.Y.
“This work has, in effect, closed a multi-decade chapter in AI research,” writes Campbell, who was a member of the team that designed IBM’s Deep Blue, which in 1997 defeated Garry Kasparov, then the world chess champion. “AI researchers need to look to a new generation of games to provide the next set of challenges.”
AlphaZero can crack any game that provides all the information that’s relevant to decision-making; the new generation of games to which Campbell alludes do not. Poker furnishes a good example of such games of “imperfect” information: Players can hold their cards close to their chests. Other examples include many multiplayer games, such as StarCraft II, Dota, and Minecraft. But they may not pose a worthy challenge for long.
“Those multiplayer games are harder than Go, but not that much harder,” Campbell tells IEEE Spectrum. “A group has already beaten the best players at Dota 2, though it was a restricted version of the game; Starcraft may be a little harder. I think both games are within 2 to 3 years of solution.”
He calls multiplayer games a “good interim step,” adding that any game that included language would open up still greater realms of complexity. IBM famously tackled a television trivia game with its machine Watson, which won at Jeopardy in 2011. Watson later showed its mettle in academic debate. However, IBM is still working to adapt the system for use in healthcare.
AlphaZero is amazing in the sheer power it brings to game-playing. And this says much, given the extraordinary progress the old-fashioned methods had already made.
Deep Blue was a monster of a machine built solely to play chess, and its 1997 victory over Kasparov was not overwhelming. Today, though, even a smartphone can outplay Magnus Carlsen, the reigning world champion, and do so again and again.
But that smartphone is just a piker compared to the top conventionally programmed chess program, Stockfish. And Stockfish, in turn, is a piker next to AlphaZero, which crushed it after a mere 24 hours of self-training.
DeepMind developed the self-training method, called deep reinforcement learning, specifically to attack Go. Today’s announcement that they’ve generalized it to other games means they were able to find tricks to preserve its playing strength after giving up certain advantages peculiar to playing Go. The biggest such advantage is the symmetry of the Go board, which allows the early, specialized machine to calculate more possibilities by treating many of them as mirror images.
It was surprisingly easy to generalize the system. “They didn’t have to do much of anything,” marvels Campbell. “Instead of having a Go board as input and the Go rules directing the search, they said, ‘let’s have chessboard and chess rules.’ There was actually a significant debate over whether the approach would work for chess.”
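The core self-play idea can be illustrated with a toy example. The sketch below is not DeepMind's method: AlphaZero pairs self-play with deep neural networks and Monte Carlo tree search, whereas this sketch uses a simple lookup table and tic-tac-toe, keeping only the essential loop of playing against yourself and backing up the outcome.

```python
import random

# Toy self-play value learning for tic-tac-toe (illustrative only).
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, s in enumerate(board) if s == "."]

def self_play_train(games=2000, alpha=0.2, eps=0.1, seed=0):
    rng = random.Random(seed)
    value = {}  # state string -> estimated probability that X wins
    for _ in range(games):
        board, player, history = ["."] * 9, "X", []
        while True:
            legal = moves(board)
            if rng.random() < eps:          # explore occasionally
                move = rng.choice(legal)
            else:                           # X maximizes, O minimizes
                def score(m):
                    nxt = board[:]; nxt[m] = player
                    return value.get("".join(nxt), 0.5)
                move = (max if player == "X" else min)(legal, key=score)
            board[move] = player
            history.append("".join(board))
            w = winner(board)
            if w or not moves(board):
                target = 1.0 if w == "X" else (0.0 if w == "O" else 0.5)
                break
            player = "O" if player == "X" else "X"
        # Back up the final outcome through the states visited this game.
        for state in reversed(history):
            v = value.get(state, 0.5)
            value[state] = v + alpha * (target - v)
            target = value[state]
    return value
```

After a few thousand games the table's value estimates reflect which positions favor each side, with no human-supplied strategy, only the rules and the outcomes, which is the property that makes the approach so portable across games.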
The researchers have so far unleashed their creation only on Go, chess and Shogi, a Japanese form of chess. Go and Shogi are astronomically complex, and that’s why both games long resisted the “brute-force” algorithms that the IBM team used against Kasparov two decades ago.
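The arithmetic behind "astronomically complex" is easy to sketch. Using the commonly cited rough estimates of about 35 legal moves per chess position and about 250 per Go position (both figures are approximations, not from the paper), a brute-force search tree of depth d holds roughly b**d positions:

```python
# Rough growth of brute-force search trees: ~35 moves per chess position,
# ~250 per Go position (commonly cited approximate branching factors).
CHESS_BRANCHING = 35
GO_BRANCHING = 250

for depth in (4, 8, 12):
    chess = CHESS_BRANCHING ** depth
    go = GO_BRANCHING ** depth
    print(f"depth {depth}: chess ~{chess:.1e} positions, "
          f"go ~{go:.1e} positions, ratio ~{go / chess:.0e}")
```

Even at modest depths the Go tree dwarfs the chess tree by many orders of magnitude, which is why the methods that beat Kasparov never threatened top Go players.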
Chess, however, had been the preferred test bed for AI for more than a lifetime, figuring in the research of such pioneers as Alan Turing, Claude Shannon, and Herbert Simon. The game appealed because it certainly seemed to involve thinking and because it was neither too hard (like poker) nor too easy (like checkers). Even so, chess turned out to be a hard nut to crack.
In 1957 Simon famously predicted that a machine would outplay the world chess champion “within 10 years,” and later he was gently mocked for being decades off the mark. But he complained that critics of AI dismissed all new advances as mere parlor tricks.
“That’s because they define thinking as that which computers can’t yet do,” Simon told me, back in 1998. “They keep raising the bar.” He died three years later, but at least he lived to see Deep Blue’s victory over Kasparov.
Problems in life rarely come with all the information needed for their solution. That’s why an AI that can master any game of imperfect information might find application far beyond gaming, say in financial modeling or even war. A self-driving car equipped with such an AI might finally conquer the roads, producing wild success for whichever company first perfects the idea.
Maybe it’ll be Waymo, a branch of Alphabet and thus a sibling to DeepMind.
Unmanned aerial vehicles (UAVs), more commonly known as drones, are growing at a rapid rate for both consumer and professional markets.
Market research firm IHS Markit forecasts the professional drone market will manage a compound annual growth rate (CAGR) of 77.1% through 2020, driven by industries such as agriculture, energy and construction using the technology for surveying, mapping, planning and more. Meanwhile, the consumer drone market will maintain a CAGR of 22.1% through 2020, with companies such as DJI, Parrot and 3D Robotics driving the market with a wide range of devices for photography, recreational use and racing.
While these markets will be the main drivers for the next few years, one industry that isn’t discussed often as a main driver is the insurance market. However, according to professional services company PwC, the addressable market of drone powered solutions in the insurance industry is valued at $6.8 billion. This is mostly through three segments where drone operations can enhance an insurer’s procedures: risk monitoring, risk assessment and claims management.
Drones are being used by insurance firms for faster assessment of claims where one agent equipped with a UAV can set up automated flight patterns to cover multiple insured locations, capture images and evaluate property damage. Drones allow claims adjusters to get better views of hard-to-see areas and better analyze the cause of the loss — without disturbing the scene. This capability results in a savings of time and improved efficiency to the tune of 40% to 50%, according to services vendor Cognizant.
For example, drones were deployed to take pictures of the aftermath of a 2016 earthquake in Ecuador. One of the world’s leading reinsurers was able to respond to the catastrophe quickly and effectively, and sped up post-disaster relief and rebuilding through fast claims processing and payment. Because there is no need to wait until conditions are safe, claim resolution is much faster, and assessors and adjusters are safer.
Liberty Mutual has started using drones remotely controlled by a claims representative to do bird’s-eye inspections of the rooftops of damaged homes. The insurance company said it uses UAVs because they are safer than using a ladder and sending someone up to a roof. Liberty Mutual said the use of drones helps speed up the claims process, with most inspections taking under 10 minutes. The faster the inspection is completed, the faster claims can be processed and repairs made to the home.
Travelers’ claims service is also employing drones for a similar use and has even brought the technology to its Claims University where it trains agents on how to operate UAVs and use them in the field. The insurance company is using drones to aid in property inspection associated with risk control, pre-loss or the claims process after a loss.
Better Data When Catastrophe Hits
Because drones can take detailed aerial imagery, when a catastrophe hits data can be easily collected for claims adjustment or catastrophe model validation purposes. UAVs can be used to cover wide areas for crop insurance claims or can be used to create a 3D model of major infrastructure damage caused by hurricanes and earthquakes. And since drones don’t require takeoff and landing strips, they can be used over properties that otherwise may be inaccessible to capture detailed images and videos without human risk.
This was the case with last year’s severe damage in the wake of Hurricane Harvey in Houston. Drones were used to inspect roadways, check railroad tracks, and assess water plant conditions, oil refineries and power lines. Some 100 drones were used after Hurricane Harvey to help a wide range of industries pinpoint damage and accelerate response times from insurance aggregators.
Lowering Pricing, Lowering Fraud
Insurance companies get consumers to purchase their policies based on the types of services they offer and the best prices that they can provide. Because insurance premiums are based on the level of risk, each feature a home has that reduces risk allows the insurer to calculate accurate personalized premiums. Insurance companies are using drones to collect information about a property before a disaster hits in order to formulate the best premium for that home.
For example, if a homeowner installs storm shutters in an area that experiences severe weather, a drone inspection that shows that a home has these features can justify a lower premium.
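A feature-based premium adjustment like the one described above can be sketched in a few lines. Every number and feature name below is hypothetical, invented for illustration; real actuarial pricing is far more involved.

```python
# Hypothetical feature-based premium pricing. The base rate and discount
# percentages are invented for this example, not real insurance data.
BASE_ANNUAL_PREMIUM = 1200.0

DISCOUNTS = {  # risk-reducing features a drone survey might verify
    "storm_shutters": 0.10,
    "impact_resistant_roof": 0.08,
    "cleared_defensible_space": 0.05,
}

def personalized_premium(verified_features):
    """Apply a multiplicative discount for each verified feature."""
    premium = BASE_ANNUAL_PREMIUM
    for feature in verified_features:
        premium *= 1.0 - DISCOUNTS.get(feature, 0.0)
    return round(premium, 2)

print(personalized_premium(["storm_shutters"]))  # 1080.0
```

The point of the sketch is the data dependency: the discount only applies to features the insurer can verify, which is precisely what aerial imagery from a drone provides.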
But drones can also be used to discover that a home or building lacks a feature the owner claims it has. Insurance fraud is a common problem, and mitigating that fraud can help save companies millions of dollars. After an extreme event happens, some policy owners try to claim damage that was done prior to the disaster. Using drones to capture images of insured properties before an extreme event can help insurance companies protect against such fraud.
Not only are drones changing how insurance vendors mitigate their own risk; they also affect how quickly companies respond to problems when disaster hits, how fast they can process claims for policy owners and how fast claims are paid. While drone use is still in the nascent stages in the insurance industry, with these benefits to vendors and policy owners, the use of UAVs is bound to accelerate in the coming years.
Experts dream that one day much of the Internet of Things (IoT) will power itself. But the trickle of energy most prototype systems can gather from the environment through ambient heat, light, radio waves, or even the metabolism of bacteria doesn’t easily supply enough voltage to power today’s transistors.
One solution: ditch the transistors in favor of micrometer-scale mechanical switches. According to research presented this week at the IEEE International Electron Device Meeting, nanoelectromechanical (NEM) relays can switch using just 50 millivolts, about 1/15th of the voltage used in today’s processors.
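The payoff of the lower voltage follows from the standard scaling of dynamic switching energy, E = C·V². The 0.75 V comparison figure below is inferred from the article's "about 1/15th" ratio, not stated in the research itself:

```python
# Back-of-envelope: dynamic switching energy scales as E = C * V**2, so for
# the same capacitance, lowering the swing from ~0.75 V (inferred from the
# article's 1/15th ratio) to 50 mV cuts per-switch energy quadratically.
V_NEM = 0.050    # volts, NEM relay switching voltage from the article
V_CMOS = 0.750   # volts, assumed conventional logic supply (15 * 50 mV)

energy_ratio = (V_CMOS / V_NEM) ** 2
print(f"energy saving per switch: ~{energy_ratio:.0f}x")
```

A roughly 225-fold reduction in per-switch energy is what would let circuits run off the thin trickle of harvested ambient power, which is the whole appeal for the IoT.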