Autonomous Cars Could Eventually Cut 25,000 Jobs Per Month

The automotive industry is evolving fast. From limited driver assistance to full automation, car makers strive to reduce human intervention. But this promising technology could cause significant job losses worldwide, and in the United States in particular.

Autonomous vehicles have advantages, such as helping the elderly and disabled or preventing road accidents caused by human error. On the other hand, as with automation in general, there will be job losses, and those caused by autonomous cars could be substantial. That is the conclusion of a new report on the subject by Goldman Sachs.

According to the report, Americans will lose 25,000 jobs per month, or 300,000 jobs a year, to semi-autonomous and self-driving cars. The primary victims of this automation will be truck drivers, more than any other professional drivers. Notably, of the 4 million professional drivers in the United States in 2014, 3.1 million were truckers.

While Goldman Sachs believes the full impact of autonomous cars is still several decades away, the firm expects that when it arrives, jobs will be lost in the United States at that rate. The report adds, however, that regulation and slower adoption could delay these effects.

Goldman Sachs estimates that semi-autonomous and autonomous cars should account for 20% of total car sales between 2025 and 2030, a share justified by the profound changes manufacturers are making toward greater automation.

The recent official launch of Uber Freight, and Ford's replacement of its CEO with an autonomous-vehicle expert, reflect manufacturers' plans for the future. But car automation is only one stage of a broader wave of automation that threatens other professions such as secretaries, cashiers, bank tellers, waiters, and realtors.

Other industries, such as retail, telecommunications, printing, and publishing, have already lost many jobs over the last decade. On the other hand, the food services, education, computer design, and home care sectors appear best placed to survive this wave of automation, according to the report.

United Kingdom: Companies Are Stockpiling More and More Bitcoins

British companies are prepared to pay up to £136,000 on average to recover critical data taken hostage by ransomware. This is the finding of a 2017 survey sponsored by Citrix UK, a company that provides collaboration, virtualization, and networking products to facilitate mobile work and the adoption of cloud services.

The survey covered 500 British companies with at least 250 employees, and follows a similar survey run by the same company a year earlier. The average ransom payment companies were prepared to make was then £29,544. With the £136,000 pledged this year, that is an increase of 361 percent.
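
The year-over-year jump can be checked directly from the two averages quoted above; a quick calculation gives roughly 360 percent, close to the reported 361 (the small gap presumably comes from rounding in the underlying survey averages):

```python
# Average ransom amounts reported by the Citrix UK surveys (GBP).
avg_2016 = 29_544
avg_2017 = 136_000

# Percentage increase from 2016 to 2017.
increase_pct = (avg_2017 - avg_2016) / avg_2016 * 100
print(round(increase_pct))  # roughly 360
```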

It should be made clear that these companies are complying with one of the major demands of ransomware authors: payment in bitcoins.

The 2017 survey shows that companies with more than 1,000 employees store an average of 23 BTC so they can respond to any eventuality as quickly as possible. Approximately 28% of them store more than 30 BTC, the equivalent of £50,000, ready to satisfy any hacker demands.

The 2017 survey also shows that the number of businesses with 250 to 500 employees adhering to such practices increased by 14% compared to 2016. At the same time, the proportion of businesses with 250 to 500 employees that keep funds in bitcoins ready to be transferred to hackers remains greater than that of companies with more than 1,000 employees that also store bitcoins.

Given that last year's Citrix poll already revealed that British companies were storing bitcoins to satisfy hacker demands, it would not be surprising to see these figures revised upwards next year, knowing that the WannaCry ransomware hit Britain first.

But is setting aside funds to satisfy hackers the right approach? The answer is no, since payment does not guarantee the restoration of the data. These funds would be better spent implementing stronger security policies.

OpenAI Designs an AI-Based Algorithm That Allows a Robot to Mimic Tasks Performed by Humans

In December 2015, Elon Musk and several people and companies in the technology industry joined forces to announce the creation of OpenAI, a non-profit organization whose goal is to make the results of its artificial intelligence research available worldwide without requiring financial compensation.

At its creation, the founders explained that their researchers would be strongly encouraged to share their work with the world in the form of papers, blog posts, code, and patents (if any). Time has now passed, and a few days ago the company announced the availability of a new algorithm based on artificial intelligence.

OpenAI has announced the availability of a framework that allows robots to learn by imitating what they are shown. Generally, for a system to master the various facets of a task and run it reliably, it requires training on a broad range of examples. OpenAI therefore wanted to speed up learning by letting robots learn the way human beings do.

This gave rise to the “one-shot imitation learning” framework. With this algorithm, a human can show a robot how to perform a new task by executing it once in a virtual reality environment. From that single demonstration, the robot can perform the same task from an arbitrary initial configuration.

One can, for example, train a policy by imitation or reinforcement learning to stack blocks into towers of three. With this new algorithm, the researchers have succeeded in designing policies that are not specific to a particular task, but that a robot can apply to a new instance of that task.

In a demonstration video, OpenAI shows a trained policy solving a different instance of the same task, using as input only the observation of another demonstration.

To stack the blocks, the robot uses an algorithm supported by two neural networks: a vision network and an imitation network. The vision network acquires the needed capabilities by training on hundreds of simulated images of a task with varying lighting, textures, and object placements. The imitation network observes a demonstration, infers the trajectory of the moving objects, and then accomplishes the same intent starting from blocks arranged differently.

Within the imitation network, a mechanism called “soft attention” processes both the demonstration’s steps and actions, the blocks relevant to the stacking, and the components of the vector specifying the locations of the various blocks in the environment.
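
At its core, the “soft attention” mentioned above is a softmax-weighted average: each candidate (a demonstration step, or a block position) gets a relevance score, and the output blends all candidates according to those scores. A minimal illustrative sketch in plain Python (the scores here are hand-picked; in the real system the networks learn them):

```python
import math

def soft_attention(scores, values):
    """Blend `values` using softmax weights derived from `scores`.

    Unlike hard attention (picking the single best item), every item
    contributes, weighted by exp(score) -- which keeps the operation
    smooth and trainable by gradient descent.
    """
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return sum(w * v for w, v in zip(weights, values))

# Toy example: attend over the x-coordinates of three blocks,
# with the second block scored as most relevant.
block_x = [0.0, 1.0, 2.0]
scores = [0.1, 3.0, 0.2]
focus = soft_attention(scores, block_x)  # lands near block_x[1]
```

With equal scores the result is simply the mean of the values, which is a quick sanity check on the weighting.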

The researchers explain that, for the robot to learn a robust policy, a modest amount of noise was injected into the outputs of the scripted policy. This allowed the robot to perform its task properly even when things go wrong. Without this noise injection, the robot would not have been able to generalize what it learned from observing a specific task.
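
The noise-injection trick can be sketched in a few lines: perturb the scripted policy's actions during data collection so the training data also covers slightly-off states and their corrections. The names below are hypothetical; this is a sketch of the general technique, not OpenAI's actual code:

```python
import random

def noisy_action(scripted_action, noise_scale=0.05, rng=random):
    """Perturb each component of a scripted action with Gaussian noise.

    Training on slightly-wrong actions exposes the learner to
    off-nominal states, which is what lets the final policy recover
    when a real execution drifts from the demonstrated trajectory.
    """
    return [a + rng.gauss(0.0, noise_scale) for a in scripted_action]

rng = random.Random(0)               # fixed seed for reproducibility
action = [0.5, -0.2, 0.1]            # e.g. a gripper displacement
perturbed = noisy_action(action, rng=rng)
```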

Finally, it should be noted that although the “one-shot imitation learning” algorithm was used to teach a robot to stack colored blocks, it can also be applied to other tasks.

Should Users Be Allowed to Paste Their Passwords?

Allowing passwords to be pasted lets web forms work well with password managers: software (or services) that choose, save, and then enter passwords into online forms at your request.

Password managers can be very useful in that they:

  • Make it easier to have different passwords for each website you use;
  • Improve productivity and reduce frustration by preventing typing errors during authentication;
  • Make it easier to use long and complex passwords.

However, it should be remembered that while they may offer better protection and prove more convenient than keeping your passwords in a standard, unprotected document on your computer, they are not necessarily the ideal solution to solve an enterprise’s password problems.

Indeed, some of these services have suffered security breaches. This is the case with LastPass, which recently had to plug a flaw in its two-factor authentication system.

It is important to note that this type of service/application can encourage users to:

  • Use a different password on each site;
  • Avoid choosing weak passwords just because they are easy to remember;
  • Stop writing passwords on a sheet of paper stuck to the computer screen.

In addition, many services offer you access to your passwords from any platform. Simply update your ID/PIN list on your computer, and you can almost instantly access it on your tablet or phone.

Why do some developers forbid pasting?

There are also reasons that may explain why developers want to stop users from pasting passwords.

First, one reason often cited is that pasting passwords enables brute-force attacks: if pasting is allowed, malicious software or web pages can repeatedly paste candidate passwords into the password box until they guess yours.

This is true, but it is also true that there are other ways of submitting guesses (e.g., via an API) that are just as easy for attackers to set up and much faster. Moreover, according to the National Cyber Security Centre (NCSC), the risk of brute-force attacks using the copy/paste function is very low.
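
The defense that actually addresses online guessing is server-side throttling of failed attempts, regardless of how the password arrives. A minimal sketch of the idea (in-memory counters and hypothetical names; a real service would persist state and add lockout/backoff policies):

```python
import time

class LoginThrottle:
    """Reject further attempts for an account after too many recent failures."""

    def __init__(self, max_failures=5, window_seconds=300):
        self.max_failures = max_failures
        self.window = window_seconds
        self.failures = {}  # account -> list of failure timestamps

    def allow(self, account, now=None):
        now = time.time() if now is None else now
        # Keep only failures inside the sliding window.
        recent = [t for t in self.failures.get(account, [])
                  if now - t < self.window]
        self.failures[account] = recent
        return len(recent) < self.max_failures

    def record_failure(self, account, now=None):
        now = time.time() if now is None else now
        self.failures.setdefault(account, []).append(now)

# Three rapid failures exhaust a 3-attempt budget...
throttle = LoginThrottle(max_failures=3, window_seconds=60)
for _ in range(3):
    throttle.record_failure("alice", now=100.0)
blocked = not throttle.allow("alice", now=101.0)
```

Once the window expires, the account is allowed to try again; the point is that guessing is slowed at the server, where blocking paste in the browser has no effect at all.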

Another reason given is that pasting passwords makes them easier to forget, since users no longer have to type them. In principle, it is true that the more often you recall something from memory, the less likely you are to forget it.

However, users may have accounts on services that they use only occasionally, which means they do not get enough opportunities to type the password and therefore have little chance of remembering it.

For the NCSC, this reason is valid only if you assume that users should always try to remember their passwords, and that is not always true.

Another reason is that passwords linger on the clipboard. When someone copies and pastes, the copied content is kept in a “clipboard” from which it can be pasted as many times as desired. Any software installed on the computer (or anyone who uses it) has access to the clipboard and can see what is there. Copying anything else usually overwrites what was already in the clipboard and destroys it.

Many password managers copy your password to the clipboard so that they can paste it into the password box on the websites. The possible risk is that an attacker (or malicious software) steals your password before it is erased from the clipboard.

Passwords that remain on the clipboard may be a problem if you manually copy and paste your passwords from a document on your computer, as you may forget to clear the clipboard.

Most password managers clear the clipboard as soon as they have pasted your password into the site, and some avoid the clipboard entirely by typing the password with a “virtual keyboard” instead.
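
That auto-clearing pattern is easy to model. A toy, self-contained sketch (a real manager would talk to the OS clipboard through a platform API or library; here the clipboard is simulated so the pattern is visible):

```python
class Clipboard:
    """Stand-in for the OS clipboard: one shared slot, visible to everyone."""

    def __init__(self):
        self.content = None

    def copy(self, text):
        self.content = text   # overwrites whatever was there

    def paste(self):
        return self.content

    def clear(self):
        self.content = None

def paste_password(clipboard, password, fill_form):
    """Copy a password, let the form consume it, then wipe the clipboard
    immediately -- narrowing the window in which other software can read it."""
    clipboard.copy(password)
    try:
        fill_form(clipboard.paste())
    finally:
        clipboard.clear()

clip = Clipboard()
received = []
paste_password(clip, "correct horse battery staple", received.append)
# The form got the password, and the clipboard was wiped right after.
```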

Viruses installed on your computer can monitor the clipboard and grab your pasted passwords. But this is still not a good reason to prevent password pasting: once your computer is infected, you simply should not trust it at all.

Viruses and other malicious software that read the clipboard almost always also capture every letter, number, and symbol typed on your computer, including your passwords. They will therefore steal your password whether or not it passes through the clipboard, so preventing pasting gains you very little.

Artificial Intelligence: Friend or Enemy of Cybersecurity?

Security strategies must undergo a radical revolution. Tomorrow’s security devices will need to see and interoperate with one another to recognize changes in interconnected environments, and thus automatically anticipate risks, and update and enforce policies.

Devices must have the ability to monitor and share critical information and synchronize their responses to detect threats.

Does this sound futuristic? Not really. A new technology that has recently grabbed attention lays the foundation for such an automation approach: Intent-Based Network Security (IBNS).

This technology provides extended visibility across the entire distributed network, enabling integrated security solutions to automatically adapt to changes in network configurations and changing needs with a synchronized response against threats.

These solutions can also dynamically divide network segments, isolate affected devices, and get rid of malware. Similarly, new security measures and countermeasures can be automatically upgraded as new devices, services, and workloads are moved or deployed to and from anywhere in the network and from devices to the cloud.
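The kind of automated response just described, pulling an affected device out of its segment, can be illustrated with a toy policy function. The names are hypothetical; real IBNS products perform this across switches, firewalls, and cloud APIs rather than on an in-memory map:

```python
def apply_quarantine(segments, infected_device):
    """Move an infected device out of its segment into a quarantine segment.

    `segments` maps segment name -> set of device ids. Returns an updated
    mapping; the quarantine segment is created on first use.
    """
    updated = {name: set(devices) for name, devices in segments.items()}
    for devices in updated.values():
        devices.discard(infected_device)      # remove from wherever it lives
    updated.setdefault("quarantine", set()).add(infected_device)
    return updated

network = {"finance": {"pc-1", "pc-2"}, "engineering": {"pc-3"}}
network = apply_quarantine(network, "pc-2")
# "pc-2" is now isolated in the quarantine segment.
```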

Tightly integrated, automated security allows a collective response to threats far greater than the sum of the individual security solutions protecting the network.

Artificial intelligence and machine learning have become significant allies for cybersecurity. Machine learning will be reinforced by devices packed with information from the Internet of Things and by predictive applications that help safeguard the network. But securing those “things” and that information, which are ready targets and entry points for cybercriminals, is a challenge in itself.

The quality of intelligence

One of the greatest challenges of using artificial intelligence and machine learning lies in the quality of the intelligence. Today, cyber threat intelligence is highly prone to false positives due to the volatile nature of the IoT.

Threats can change in a matter of seconds: a device can be cleaned, infect the next one, and then be reinfected itself, in a complete low-latency cycle.

Improving the quality of threat intelligence is extremely important as IT teams increasingly hand control to artificial intelligence to perform work they would otherwise have to do themselves. This is an exercise in trust, and that is a challenge in itself.

As an industry, we cannot transfer total control to an automated device; we need to balance operational control with essential tasks that staff can still perform. It is this working relationship that will make artificial intelligence and machine learning applications for cyber defense truly effective.

Because there is still a shortage of talent in cybersecurity, products and services must be developed with greater automation, in order to correlate threat intelligence, determine the level of risk, and automatically synchronize a coordinated response.

By the time managers try to tackle a problem on their own, it is often too late, worsening the problem or generating more work. This can be handled automatically, using a direct exchange of intelligence between detection and prevention products, or with assisted mitigation: a combination of people and technology working together.

Automation also allows security teams to allocate more time to the business goals of the company, rather than spending time in the routine administration of cybersecurity.

In the future, artificial intelligence in cybersecurity will constantly adapt to the growth of the attack surface. Today, we are barely connecting points, sharing information and applying that information to systems.

People are making these complex decisions, which require a correlation of intelligence from humans. It is expected that in the coming years, a mature artificial intelligence system may be able to make complex decisions for itself.

Total automation, however, is not feasible; that is, transferring 100% of control to the machines so that they make all the decisions all the time. People and machines must work together.

The next generation of “conscious” malware will use artificial intelligence to behave like a human, perform reconnaissance activities, identify targets, choose attack methods, and intelligently evade detection systems.

Just as organizations can use artificial intelligence to improve their security posture, cybercriminals can also start using it to develop smarter malware.

Such malware will be guided by offensive intelligence gathering and analysis: the types of devices deployed in a network segment, traffic flows, the applications being used, transaction details, or the time of day at which they occur.

The longer a threat remains inside the network, the greater its ability to operate independently, blend into the environment, select tools based on the target platform, and eventually take countermeasures against the security tools in place.

This is precisely the reason why an approach is needed where security solutions for networks, accesses, devices, applications, data centers and cloud work together as an integrated and collaborative system.

Study: Political Campaigns Can Manipulate Elections With Fake News

In recent months, the debate on the spread of fake news has become a topical issue, and giants like Facebook and Google have been sharply criticized for their role in spreading false information around the world. The proliferation of false articles and unverified information is considered by many to be a significant feature of the last election campaign in the United States.

A new study by security firm Trend Micro shows that political campaigns can manipulate elections by spending $400,000 on fake news and fabricated content, according to a report that analyzes the cost of influencing public opinion through the dissemination of disinformation. The study also found that it costs only $55,000 to discredit a journalist and $200,000 to provoke a street protest based on false information. These disturbing figures show how easily cyberpropaganda can produce real-world results.

This study comes at a time of global concern about election hacking and the various ways fake news on social networks has manipulated voters. The report explores clandestine online marketplaces that allow campaigns, political parties, private companies, and other entities to strategically create and disseminate false content to change public perceptions.

Analysis of Chinese, Russian, English-language, and Middle Eastern fake news services found that these options offer a cost-effective alternative to traditional advertising and promotion, often using social networks to broadcast questionable content. Whether you are in China, Russia, Europe, or the US, it is very easy to buy these services.

With targeted campaigns, false content can provoke protests. According to the study, campaigns can create and grow social network groups that discuss relevant ideologies for a cost of $40,000, Trend Micro wrote.

To maximize the reach of content, campaigns can spend $6,000 for close to 40,000 “high-quality” likes. On these fake news services, 20,000 comments cost $5,000 and a false story $2,700. Campaigns can go further by buying retweets and other promotional services, such as the placement of videos on YouTube, that help news go viral.

The study noted that these campaigns rely on fake news shared as reality to court the audience’s ideologies and create an illusion of momentum, enough to push people to join a supposed cause.

Manipulating election results can also be relatively affordable for politicians and political parties, according to the report. A campaign manager can buy targeted news sites for $3,000 per site, then fill them with false information disguised as legitimate news. Maintaining these sites with false content costs $5,000 per month, and social networking campaigns cost a further $3,000 per month.

Buying retweets and biased comments on this content can boost a campaign. Some of these networks also distribute legitimate information, allowing the sites to build a reputation and blurring the boundary between propaganda and legitimate content. In total, the study found that an annual campaign of $400,000 should be able to decisively manipulate the course of an election.

A group that wants to attack a journalist can easily mount a four-week fake news campaign. A week of propaganda, coupled with 50,000 retweets and 100,000 attracted visits, costs $2,700. Beyond discrediting the journalist, a more frightening consequence is that the report or the points the journalist wanted to raise are engulfed in the wave of disinformation generated by the campaign, Trend Micro wrote.

Other striking figures show that a social network account can become a “celebrity” account within a month, with 300,000 subscribers, for a cost of $2,600.

Given the effectiveness and low cost of these types of propaganda campaigns, some fear that they will become commonplace during elections. It is important to put an end to these practices as quickly as possible, before they become mainstream.

Following the debate on fake news and how it spreads on social networks, especially after the US presidential election, Facebook and Google have begun to develop tools to counter misinformation.

With Artificial Intelligence, The Travel Industry Can Better Understand its Customers

If you have heard about artificial intelligence (AI) and keep up with recent developments, you may have come across terms like deep learning, algorithms, and machine learning. As AI flourishes, deep learning is being deployed across industries and platforms, and we are beginning to see tangible applications in our daily lives.

It is a fact that the travel industry has much room to improve: business trips can easily turn into nightmares, and travelers are ever more demanding when choosing their travel options, expecting round-the-clock care and support.

Artificial intelligence is already familiar to most travelers in the form of voice assistants like OK Google, Siri, and Cortana, and technologies like these can address current problems.

The abundance of data that travel organizations hold, including traveler profiles, activity history, and airline and hotel preferences, makes the tourism business well suited to AI. Companies such as Poder.IO, which specializes in AI solutions for sectors such as travel, are not only architects but also witnesses of what airlines, hotels, and tourism-related companies are building to serve travelers differently before, during, and after their trips.

One example is Pana, an on-demand travel company that contacts its customers through in-app messages, SMS, or email. It combines natural language processing with information on the traveler’s preferences, and uses AI to suggest to its agents the most relevant choices during the booking process.

The airline KLM lets its travelers receive booking confirmations, check-in notifications, boarding passes, and flight status updates through a Facebook Messenger bot. They can also contact KLM through Messenger around the clock, every day.

And the Hilton hotel chain is testing Connie, a robotic concierge powered by IBM Watson and WayBlazer. It can answer visitors’ questions about amenities, services, and nearby attractions. The more visitors interact with Connie, the more it learns, adapts, and improves its suggestions and responses.

On the other hand, customer relationship management in the travel industry is, as ever, about information, and AI helps here too. To build reliable relationships with customers and travelers, tourism managers need to know a great deal about them; this information covers everything from their age and gender to their gastronomic preferences and interests.

Most of this data can be used throughout the client’s journey to keep service at phenomenal levels, and to dangle the right incentives at the right time to maintain loyalty. Artificial intelligence can make this happen with just a few clicks.
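
As a toy illustration of that personalization, an offer can be scored against a traveler's stored profile. All the field names and weights below are hypothetical; real systems learn such weights from behavioral data rather than hand-coding them:

```python
def score_offer(traveler, offer):
    """Score an offer by how well it matches the traveler's stated preferences."""
    # One point per shared interest tag...
    matches = set(traveler["interests"]) & set(offer["tags"])
    score = len(matches)
    # ...plus a bonus for the preferred airline.
    if offer.get("airline") == traveler.get("preferred_airline"):
        score += 2
    return score

traveler = {"interests": {"food", "museums"}, "preferred_airline": "KLM"}
offers = [
    {"name": "city break", "tags": {"museums", "nightlife"}, "airline": "KLM"},
    {"name": "beach week", "tags": {"beach"}, "airline": "Other"},
]
best = max(offers, key=lambda o: score_offer(traveler, o))
```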

AI brings to light valuable insights that travel managers had never thought possible. This should, in principle, lead to greater customer benefit, better-quality advertising, and increased loyalty to brands that use their data the right way.

That is why obtaining the right information and delivering the right messages to the right travelers is the biggest challenge for the tourism industry.