Artificial Intelligence (AI) is steadily moving closer to becoming a reality in cars, medicine, and much more. AI is a controversial topic for many reasons, one being the potential loss of jobs in the near future. Although AI can seem like an intrusive idea to many people, there are also plenty of ways AI can change the world for the better. AI can be extremely beneficial to the medical field. There are hospital databases that can predict a disease quite accurately from a patient's history record alone. This is important because screening and diagnosing difficult diseases can be a lengthy process.
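As a rough illustration of how a history-based prediction could work in principle, here is a toy nearest-neighbor sketch. This is not a real clinical model; the features, records, and risk labels are all invented for illustration.

```python
from collections import Counter

def hamming_similarity(a, b):
    """Count matching entries between two equal-length history vectors."""
    return sum(x == y for x, y in zip(a, b))

def predict_disease(history, records, k=3):
    """Predict a label by majority vote among the k most similar records.

    `records` is a list of (history_vector, diagnosis) pairs.
    """
    ranked = sorted(records, key=lambda r: hamming_similarity(history, r[0]),
                    reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Invented records: (smoker, family_history, high_blood_pressure, over_50)
records = [
    ((1, 1, 1, 1), "high risk"),
    ((1, 0, 1, 1), "high risk"),
    ((0, 0, 0, 0), "low risk"),
    ((0, 1, 0, 0), "low risk"),
    ((0, 0, 1, 0), "low risk"),
]

print(predict_disease((1, 1, 0, 1), records))  # high risk
```

A real system would use far richer records and a properly validated model, but the core idea is the same: patients whose histories resemble past cases inherit the predictions made for those cases.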
The claim that AI will take everything over can sound like an exaggeration, but the tech industry is pushing to incorporate AI into existing technology. A 2016 report written by the Executive Office of the President states, "With the right investment in AI and policies to support a larger and more diverse AI workforce, the United States has the potential to accelerate productivity and maintain the strategic advantages that result from American leadership in AI." The United States government also understands that limiting the development of AI would set it back in competitiveness with other countries. Instead of inhibiting advancements in AI, the report suggests a strategy to "invest and develop AI for its many benefits," "educate and train Americans for jobs of the future," and "aid workers in the transition and empower workers to ensure broadly shared growth." The report indicates that AI will be an important technology for improving productivity and the economy, but it also understands that there are negative implications if the workforce is not helped to adapt to the changes in job opportunities. To remain a major player in technical innovation, the United States will need to utilize AI, but that comes with the challenge of taking the right steps to prepare people for newer, less menial jobs. If the transition to newer jobs is not handled appropriately, many people will be left without a way to provide for their dependents and will have difficulty becoming hirable for newer positions. To the owners of AI technology, AI is free labor. Although likely expensive to purchase and maintain, if AI can do an average job at a higher capacity, it becomes tempting to a business. It is difficult to predict which sorts of employment will be replaced by AI in the future.
Although AI is controversial, partly because even engineers sometimes find it difficult to predict, the benefits may well outweigh the risks.
Source: www.whitehouse.gov/sites/whitehouse.gov/files/images/EMBARGOED%20AI%20Economy%20Report.pdf
Artificial Intelligence (AI) has been progressing over the years. A big focus for large companies now is creating AI-powered assistants that make our lives easier. Google is developing an AI assistant, which was featured in its Allo messaging app. The assistant is able to learn your habits by monitoring the information on your phone. The assistant "can access information on your devices like contacts or storage...and it can also access 'content on your screen.'" Being able to track large amounts of user information allows the assistant to offer driving routes and restaurant suggestions, but this comes with a risk. Because the assistant needs to be able to read user information from devices, users cannot encrypt the messages they send; enabling message encryption disables the AI-powered assistant. Facebook Messenger also provides the ability to encrypt messages, but for the same reason encryption must be disabled if a user wants to use Facebook bots in the app. Having to choose between intelligent assistants and secure messaging creates a dilemma for users.
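The trade-off is structural: a server-side assistant can only make suggestions if it can read the message content, and end-to-end encryption means the server sees only ciphertext. This toy sketch (not Allo's or Messenger's actual implementation; the function and its rule are invented) makes the point concrete:

```python
def assistant_suggestion(server_visible_text):
    """Toy assistant: it can only suggest if it can read the message.

    `server_visible_text` is the plaintext the server sees, or None when
    the conversation is end-to-end encrypted (ciphertext only).
    """
    if server_visible_text is None:
        return None  # encrypted chat: the assistant has nothing to work with
    if "restaurant" in server_visible_text.lower():
        return "Nearby restaurants: ..."
    return None

# Unencrypted chat: the server sees plaintext and can make a suggestion.
print(assistant_suggestion("Want to find a restaurant tonight?"))
# Encrypted chat: the server sees nothing useful, so the assistant is disabled.
print(assistant_suggestion(None))
```

This is why the two features are mutually exclusive as currently built: the intelligence lives on the server, but encryption is designed precisely to keep the server blind.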
People store large amounts of personal information on their devices, so leaving that information susceptible to theft puts users at risk. If companies like Google and Facebook want to promote AI-powered assistants, they should find a way to keep user information secure at the same time. Since companies like Google and Facebook rely on users to provide data to improve their algorithms, they have an incentive to push users toward turning off encryption. The current options do not give users enough choice. In addition, users tend not to pay attention to privacy concerns and accept the risks of using tools that make their lives easier. Under the current circumstances, companies like Google and Facebook should take the problem upon themselves to resolve. They are the ones developing the technology that makes people's information less secure, so they should be responsible for finding a way to make the technology secure. http://gizmodo.com/googles-ai-plans-are-a-privacy-nightmare-1787413031

Ecommerce-centric companies have begun to make use of Artificial Intelligence (AI). IBM has created Watson, a piece of software that is able to analyze data and provide insights from it. Using tools like Watson, companies can create applications that utilize AI even if their engineers do not have the domain-specific knowledge necessary to build AI from scratch. Of course, some companies will still build their AI with the usual tools, but it is not necessary for every company to do the same. Under Armour is a company that sells fitness apparel, but it used IBM Watson to create an app that utilizes AI to help customers track fitness information [1]. The app "tracks and analyzes nutrition, sleep data, [and] workouts". In the future it may offer advice to customers so that they can make training and nutrition decisions tailored to themselves.
The advice given to a customer would be based on the similarity between their profile and other customer profiles. By using IBM Watson, Under Armour was able to create an app that uses AI to predict information for customers. IBM Watson gives companies a simpler alternative to rolling out a custom AI solution, which makes it more likely that companies will provide better applications, and the resulting application delivers real value to customers. One possible drawback is that AI requires large amounts of data to become accurate. Depending on the type of data, customers may not mind that their data is stored and analyzed, but there is a potential problem that may bother customers.
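Profile-similarity advice like this can be sketched with a simple nearest-neighbor lookup. This is a toy illustration, not how the Under Armour app or Watson actually works; the features and advice strings are invented.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def advice_for(profile, others):
    """Return the advice attached to the most similar customer profile."""
    _, best_advice = max(others, key=lambda item: cosine(profile, item[0]))
    return best_advice

# Invented profiles: (avg sleep hours, workouts per week, calories / 1000)
others = [
    ((6.0, 5, 2.8), "add a rest day"),
    ((8.0, 2, 2.0), "increase workout frequency"),
]
print(advice_for((6.5, 5, 2.7), others))  # add a rest day
```

A customer who sleeps little and trains often gets the advice that helped the most similar past customer; with millions of profiles, the matches (and therefore the advice) get sharper, which is exactly why the data hunger described next matters.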
As someone mentioned in the comment section of a previous post, one concern is the way some companies use customer data for ad targeting. Ads are usually annoying because they tend not to be relevant and they bombard customers with things they aren't looking for. Using data from many sources may improve the accuracy of the ads, which may lead to some advertisements actually being relevant. Even so, the situation raises the question of what data, and how much of it, companies should be able to use to target customers. Since companies would be gathering and analyzing large amounts of data per customer, it could be said that they are stalking their customers. Companies pay attention to every click a person makes and every page they visit. It's like having a personal stalker that follows you everywhere you go and uses that information to determine what type of things you would like. The ability to collect data and make accurate predictions raises the question of how much data should be collected, and how. I personally think laws should be created to restrict companies. Companies have had too much freedom in the data they collect from customers, and if that freedom is not restricted soon, we don't know how far they will go. Technology is changing quickly and becoming more powerful to the point where we are not able to adapt laws and policies fast enough.

Machine learning and Artificial Intelligence (AI) are becoming popular among tech companies. Companies like Apple, Google, and Amazon use machine learning and AI in many of their products, such as their personal assistants. Amazon is known for its online marketplace, but it has recently opened a grocery store in Seattle named Amazon Go [1]. Amazon has used machine learning to automate checkout at its grocery store. With an app named Amazon Go installed on their phone, a person is able to walk into the store, check in, grab the items they want to buy, and leave.
Amazon Go takes care of adding items to a virtual cart and charging the person when leaving the store.
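The bookkeeping side of that "just walk out" flow can be sketched as a small state machine: shelf events add and remove items, and the total is charged on exit. This is a toy model of the idea, not Amazon's actual system; the class, item names, and prices are invented.

```python
from collections import Counter

class VirtualCart:
    """Toy 'just walk out' cart: shelf events update the cart, and the
    accumulated total is charged when the shopper leaves the store."""

    def __init__(self, prices):
        self.prices = prices
        self.items = Counter()

    def grab(self, item):
        """Vision system saw the shopper take an item off a shelf."""
        self.items[item] += 1

    def put_back(self, item):
        """Vision system saw the shopper return an item to a shelf."""
        if self.items[item] > 0:
            self.items[item] -= 1

    def checkout(self):
        """Total charged as the shopper walks out."""
        return sum(self.prices[i] * n for i, n in self.items.items())

cart = VirtualCart({"milk": 3.50, "bread": 2.25})
cart.grab("milk")
cart.grab("bread")
cart.put_back("bread")
print(cart.checkout())  # 3.5
```

The cart logic itself is trivial; the hard part, and the part that raises the privacy questions below, is the camera-and-vision pipeline that decides when to call `grab` and `put_back` for each specific shopper.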
To accomplish this feat, Amazon uses machine learning and computer vision to know when a person grabs something from a shelf and adds it to their cart. The system is also capable of determining which products a person puts back onto a shelf. Amazon uses cameras to monitor the actions of customers, and it uses the footage to know which products are grabbed and by whom [2]. The way Amazon uses cameras poses the question of whether it should be allowed to monitor and track customers this way. Traditional grocery stores also have security cameras, but their footage is not closely monitored and processed to categorize customers. Customers who do not want to be tracked by Amazon will have to choose whether to use the product at all.

Artificial Intelligence (AI) is a technology that has recently begun to attract attention in the technology industry. Companies like Amazon and Google have made use of AI in their platforms, but recently AI has become more prominent because it is pivotal to products they have released: Amazon's Alexa and Google Home, respectively. Both of these products make use of AI. AI requires large data sets in order to train itself to be very accurate at the problem it solves. A lot of the data used to improve these systems is gathered from the users of the products. For example, Google uses the queries of its users to improve the results it gives to its customers. By knowing which links tend to be more popular for certain queries, an AI can improve the ranking it gives to links in its search results. As a product, that makes Google Search very reliable, but it means that everything you search through their search engine is gathered by Google. The question of whether this is right or wrong is not something that comes to the user's mind very often. The user's focus is to find the answer to their question, so they don't really mind that what they searched is saved by Google.
The question of whether something as small as saving user queries is right or wrong may seem trivial, but with the increasing amount of data companies collect on their users, we will need to start somewhere to make sense of the bigger and more challenging questions.
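The click-feedback idea described above, ranking links higher when past users clicked them for the same query, can be sketched in a few lines. This is a toy illustration of the principle, not Google's actual ranking system; the queries and URLs are invented.

```python
from collections import defaultdict

# Toy click log gathered from users: (query, clicked_url) pairs.
clicks = [
    ("python tutorial", "docs.python.org"),
    ("python tutorial", "docs.python.org"),
    ("python tutorial", "example-blog.com"),
]

def rerank(query, candidates, click_log):
    """Order candidate links by how often users clicked them for this query."""
    counts = defaultdict(int)
    for q, url in click_log:
        if q == query:
            counts[url] += 1
    return sorted(candidates, key=lambda u: counts[u], reverse=True)

print(rerank("python tutorial", ["example-blog.com", "docs.python.org"], clicks))
# ['docs.python.org', 'example-blog.com']
```

The sketch also makes the privacy cost visible: the click log is exactly the per-user search history being stored, and the ranking improvement comes directly from retaining it.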
AI includes different kinds of algorithms that can be implemented to solve certain types of problems; deep learning and neural networks are two examples. These types of algorithms usually need very large amounts of data to improve to the point where the system becomes very accurate at what it does. It makes sense that tech companies collect data from their users, because it helps further improve the accuracy and reliability of their products. A more accurate and reliable experience is a good enough reason for many users to accept their data being collected, but there will always be people with different perspectives. A possible solution is to give users more choice in what data gets collected from them. Google does have settings that allow you to turn off data collection for certain features, but doing so sometimes means missing out on a feature. Another type of solution could be to use a technology that needs less data to make accurate predictions. Gamalon has created a technology, which it calls probabilistic programming, that claims to require much less data to become accurate. With probabilistic programming it may be possible to require less data from users and still create accurate and reliable experiences.
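The intuition behind getting useful answers from small data is Bayesian: a model with a sensible prior can produce a calibrated estimate from a handful of observations, where a data-hungry approach would need thousands. Here is a minimal Beta-Binomial example of that idea (a generic textbook model, not Gamalon's technology):

```python
def beta_binomial_posterior(successes, failures, alpha=1.0, beta=1.0):
    """Posterior mean of a success rate under a Beta(alpha, beta) prior.

    With the default uniform prior, five observations are enough to move
    the estimate well away from the prior mean of 0.5.
    """
    a = alpha + successes
    b = beta + failures
    return a / (a + b), a + b

# Only 5 data points: 4 successes, 1 failure.
mean, _ = beta_binomial_posterior(successes=4, failures=1)
print(round(mean, 3))  # 0.714
```

Full probabilistic programming composes many such small probabilistic pieces into larger models, but the payoff is the same: explicit prior structure substitutes for raw data volume.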
One day a super artificial intelligence may become more intelligent than humans, to the point of posing a threat to the survival of humanity. At the moment artificial intelligence is programmed in a way that only allows it to become very good at doing one thing, by interpreting a single type of environment very well. In order to create artificial intelligence more intelligent than humans, an artificial general intelligence (AGI) must be created that can understand and interpret many general types of environments and information. DeepMind, a company that specializes in artificial intelligence, has the goal of creating AGI that can interpret many types of environments. AGI as a type of artificial intelligence is still far from being sentient.
The reason I bring up artificial intelligence that may become more intelligent than humans is that I was recently reminded of an anime I watched. The discussion in my ethics class about the three laws of robotics reminded me of the anime Time of Eve. Time of Eve takes place in a future where people have robot assistants who can pick up groceries, cook, and even take care of humans. The robots are human-like, and the main difference between humans and the robots is that the robots must "wear" a holographic halo over their heads at all times. The only place where robots are shown to appear and behave like humans is a cafe the main character visits, where the rule is that customers must not make distinctions between robots and humans. The cafe is considered an illegal meeting spot by the authorities because robots make themselves indistinguishable from humans there. I find the laws of that civilization inhumane because of the way the robots are treated as less than human. If a robot were to obtain a consciousness at the level of a sentient being, then it should be given treatment equal to humans. The way humans use the robots in the anime is basically slavery. Humans purchase robots, and the robots run errands and tasks. Some robots are even used for romantic or sexual purposes, but that is only spoken of and not actually shown in the anime. The anime is lighthearted, but the way the robots are treated demonstrates how humans are capable of taking advantage of intelligent beings without regard for those beings' feelings. If humans were able to create intelligent robots that had personalities and a human-like consciousness, then I would hope humans would treat them with dignity and respect and give them protection under the law. Like many things related to artificial intelligence, a reality where humans have robotic assistants that help complete daily tasks is far in the future. I find that type of future interesting.
I still find the idea that humans would treat robots without respect troubling, even if the robots are not as intelligent as we are. In general, I suppose I find it disappointing to see humans treat things with disrespect. Humans are capable of greater thoughts and feelings, so they should use them accordingly.

A current computer science topic is artificial intelligence (AI). A reason AI interests me is that it could lead to having an intelligent assistant that helps you complete tasks, like keeping track of important information for you. Tools that use AI to assist people in their daily lives already exist, but they still have plenty of room to improve. Siri is a feature iPhones have that currently uses AI to understand what users tell it. I often hear my sister telling Siri to set alarms for her. What I would find amazing is if a tool like Siri became intelligent enough to recognize people and understand regular communication well enough to hold actual conversations. I would one day like to have an assistant that I can chat with and develop new ideas with.
AI is an interesting topic that can lead to the development of intelligent computers. Whether to create computers more intelligent than humans is itself an ethical issue. People debate whether a super-intelligent computer should be created at all, because of the possibility it may turn on humanity and decide to exterminate it. Luckily, AI is nowhere near the point of harboring a will to decimate humanity, but there are ethical issues surrounding the way AI is developed and trained. A possible issue with the development of AI is the data that is used to train it. In an exclusive interview Steven Levy conducted with engineers from Apple, he explains, "The view from the AI establishment is that Apple is constrained by...its inflexible insistence on protecting user information (which potentially denies Apple data it otherwise might use)" [1]. Using user information to train AI is an ethical issue because the user's privacy is at risk. The way AI becomes intelligent is by receiving input from its environment and choosing a way to react to it. Because technology has been advancing quickly, it is not clear where the boundary for user privacy should be drawn, and AI will bring the problem closer to the limelight. Another pair of intelligent assistants are Google Home and Amazon's Alexa. An issue these two raise also deals with the privacy of user information. Moynihan explains how Google Home and Alexa work: "Once you say those magic words, the voice assistants jump to life, capture your voice request, and sling it to their disembodied cloud brains over Wi-Fi" [2]. The assistants send people's information to company servers to process queries, and they save those queries for further processing. Users' personal data is being used to develop the accuracy of these assistants, but it comes at the cost of accepting that personal data will be saved on the company's servers. This situation further raises the question of where the boundary lies for companies' use of user data.
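The "magic words" flow Moynihan describes can be sketched as a simple gate: nothing leaves the device until the wake word is heard, after which the request is shipped to the cloud. This is a toy illustration of the pattern, not either company's actual implementation; the function names and the transcript-based matching are invented (real devices detect the wake word in audio on-device).

```python
WAKE_WORDS = ("alexa", "ok google")

def send_to_cloud(request):
    """Stand-in for the network call that ships a query to company servers."""
    return f"cloud received: {request}"

def process_audio(transcript):
    """Toy wake-word gate: only speech after the wake word goes to the cloud."""
    text = transcript.lower()
    for word in WAKE_WORDS:
        if text.startswith(word):
            request = text[len(word):].strip()
            return send_to_cloud(request)
    return None  # no wake word: nothing leaves the device

print(process_audio("Alexa what's the weather"))  # cloud received: what's the weather
print(process_audio("just chatting at home"))     # None
```

The privacy question sits in `send_to_cloud`: once the request crosses that boundary, the user no longer controls whether it is processed once and discarded or saved indefinitely.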
In order for AI to improve to the point where machines can understand human language and respond appropriately, massive amounts of training data will be needed. One of the easiest places to get new, real data is from users. There is an ethical question of privacy that should be answered: not whether companies should have the right to save user data at all, but whether companies should make use of that data without users being aware of it. Companies get away with it because of how little recourse users have with products that save large amounts of personal information. Many times the only options are for users to abstain from using the technology or to accept that their information will be collected. Not everyone wants their information to be collected and stored by a private company, so there should be more regulations and policies governing what information companies use and how. The best solution would most likely be to give users the option to have their information deleted immediately after processing, and for users to be made more aware of how their information is used when they decide to use a product.

Notes
[1] User information usage at Apple:
Sources
Moynihan, Tim. "Alexa and Google Home Record What You Say. But What Happens to That Data?" Wired. Conde Nast, 5 Dec. 2016. Web. 27 Jan. 2017.
Levy, Steven. "An Exclusive Look at How AI and Machine Learning Work at Apple." Backchannel. Backchannel, 20 Jan. 2017. Web. 27 Jan. 2017.
Eliasar Gandara is a computer science student, and a soon-to-be computer scientist. He grew up in the Salinas Valley, where they don't just grow lettuce, but also computer scientists.