Arsenal says ‘no’ to Ceballos

Dani Ceballos wants to cut short his loan at Arsenal, but it is very unlikely he will get his way. The English club has informed both Real Madrid and the player that it is not willing to let him leave in this winter window. The Andalusian is not in Arteta's plans (he has not played a single minute since the coach arrived), but the club's executives consider the season a long one and believe he could still be an important player at some stage of it.

The situation is awkward because the decision to join the London club was Ceballos's alone. He was requested by Unai Emery, since dismissed, and Madrid limited itself to respecting the wishes of a player who did not figure in Zidane's plans. The Utrera-born midfielder negotiated his new terms with Arsenal himself and committed to the club until next June 30. Within Real Madrid, therefore, it is seen as a matter to be settled, one way or another, between Dani and Arsenal. It is a different case from that of Jesús Vallejo, who has cut short his loan at Wolverhampton; that move was arranged by Madrid, which recommended the club because Nuno had asked for him.

A week of the transfer window remains, but Arsenal does not appear inclined to change its position. Madrid, for its part, does not intend to pressure a club with which it maintains an excellent relationship. It is concerned about Ceballos's situation, but going to London was his decision and it is up to him to win a place there.

Valencia, waiting

Ceballos, meanwhile, is very focused on the European Championship. He always said he was looking for minutes because he wants to play in that tournament, and with the change of coach (Arteta in place of Emery) everything has taken a radical turn. He has gone from being an important player to not playing a single minute, a circumstance that, if it persists to the end of the season, would leave him out of the Euros… Against that backdrop, Valencia has shown interest in a possible loan for the Utrera footballer. Albert Celades knows Ceballos's qualities perfectly, and Mestalla would be a destination to the player's liking.


A turtle—or a rifle? Hackers easily fool AIs into seeing the wrong thing

By Matthew Hutson | Jul. 19, 2018, 2:15 PM

[Image: With the help of stickers, image recognition algorithms were tricked into thinking a stop sign was a speed limit sign. Credit: K. Eykholt et al.; arXiv:1707.08945 (2017)]

[Image: Using imperceptible elements, adversarial attacks duped image recognition algorithms into thinking a 3D-printed turtle was a rifle. Credit: Anish Athalye/LabSix]

STOCKHOLM—Last week, here at the International Conference on Machine Learning (ICML), a group of researchers described a turtle they had 3D printed.
Most people would say it looks just like a turtle, but an artificial intelligence (AI) algorithm saw it differently. Most of the time, the AI thought the turtle looked like a rifle. Similarly, it saw a 3D-printed baseball as an espresso. These are examples of "adversarial attacks"—subtly altered images, objects, or sounds that fool AIs without setting off human alarm bells.

Impressive advances in AI—particularly machine learning algorithms that can recognize sounds or objects after digesting training data sets—have spurred the growth of living room voice assistants and autonomous cars. But these AIs are surprisingly vulnerable to being spoofed. At the meeting here, adversarial attacks were a hot subject, with researchers reporting novel ways to trick AIs as well as new ways to defend them. Somewhat ominously, one of the conference's two best paper awards went to a study suggesting protected AIs aren't as secure as their developers might think. "We in the field of machine learning just aren't used to thinking about this from the security mindset," says Anish Athalye, a computer scientist at the Massachusetts Institute of Technology (MIT) in Cambridge, who co-led the 3D-printed turtle study.

Computer scientists working on the attacks say they are providing a service, like hackers who point out software security flaws. "We need to rethink all of our machine learning pipeline to make it more robust," says Aleksander Madry, a computer scientist at MIT. Researchers say the attacks are also useful scientifically, offering rare windows into AIs called neural networks whose inner logic cannot be explained transparently. The attacks are "a great lens through which we can understand what we know about machine learning," says Dawn Song, a computer scientist at the University of California, Berkeley.

The attacks are striking for their inconspicuousness. Last year, Song and her colleagues put some stickers on a stop sign, fooling a common type of image recognition AI into thinking it was a 45-mile-per-hour speed limit sign—a result that surely made autonomous car companies shudder. A few months ago, Nicholas Carlini, a computer scientist at Google in Mountain View, California, and a colleague reported adding inaudible elements to a voice sample that sounded to humans like "without the data set the article is useless," but that an AI transcribed as "OK Google, browse to evil.com."

Researchers are devising even more sophisticated attacks. At an upcoming conference, Song will report a trick that makes an image recognition AI not only mislabel things, but hallucinate them. In a test, Hello Kitty loomed in the machine's view of street scenes, and cars disappeared.

Some of these assaults use knowledge of the target algorithms' innards, in what's called a white box attack. The attackers can see, for instance, an AI's "gradients," which describe how a slight change in the input image or sound will move the output in a predicted direction. If you know the gradients, you can calculate how to alter inputs bit by bit to obtain the desired wrong output—a label of "rifle," say—without changing the input image or sound in ways obvious to humans. In a more challenging black box attack, an adversarial AI has to probe the target AI from the outside, seeing only the inputs and outputs. In another study at ICML, Athalye and his colleagues demonstrated a black box attack against a commercial system, Google Cloud Vision. They tricked it into seeing an invisibly perturbed image of two skiers as a dog.
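To make the white box idea concrete, here is a minimal sketch of a targeted, gradient-based perturbation in the "fast gradient sign" style, written in Python with PyTorch. The pretrained ResNet-50 classifier, the epsilon budget, and the target label are illustrative assumptions for this sketch, not details taken from the studies reported at ICML, and input normalization is omitted for brevity.

import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative stand-in for "the target AI"; not the model used in the studies above.
model = models.resnet50(pretrained=True).eval()

def targeted_fgsm(image, target_class, epsilon=0.01):
    # image: float tensor of shape (3, H, W) with values in [0, 1]
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))                       # forward pass through the classifier
    loss = F.cross_entropy(logits, torch.tensor([target_class]))
    loss.backward()                                          # gradients describe how each pixel nudges the output
    # Step against the gradient so the loss for the *desired wrong* label goes down,
    # keeping the perturbation small enough that a human would not notice it.
    adversarial = (image - epsilon * image.grad.sign()).clamp(0.0, 1.0)
    return adversarial.detach()

Taking many such small steps instead of one (and re-computing the gradient each time) gives the stronger iterative variant of the same idea; black box attacks have to estimate this gradient information from the model's outputs alone.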
AI developers keep stepping up their defenses. One technique embeds image compression as a step in an image recognition AI. This adds jaggedness to otherwise smooth gradients in the algorithm, foiling some meddlers. But in the cat-and-mouse game, such "gradient obfuscation" has also been one-upped. In one of the ICML's award-winning papers, Carlini, Athalye, and a colleague analyzed nine image recognition algorithms from a recent AI conference. Seven relied on obfuscated gradients as a defense, and the team was able to break all seven, by, for example, sidestepping the image compression. Carlini says none of the hacks took more than a couple of days.

A stronger approach is to train an algorithm with certain constraints that prevent it from being led astray by adversarial attacks, in a verifiable, mathematical way. "If you can verify, that ends the game," says Pushmeet Kohli, a computer scientist at DeepMind in London. But these verifiable defenses, two of which were presented at ICML, so far do not scale to the large neural networks in modern AI systems. Kohli says there is potential to expand them, but Song worries they will have real-world limitations. "There's no mathematical definition of what a pedestrian is," she says, "so how can we prove that the self-driving car won't run into a pedestrian? You cannot!"

Carlini hopes developers will think harder about how their defenses work—and how they might fail—in addition to their usual concern: performing well on standard benchmarking tests. "The lack of rigor is hurting us a lot," he says.
