The article I read, titled "Humans Will Be Immortal By 2030," talks about one of Google's top engineers and where he thinks technology is heading when it comes to prolonging life. Ray Kurzweil thinks that in just 15 years we'll be taking steps to put our minds into technology like the cloud, becoming hybrids with our minds connected to the internet and backed up to the cloud.
The idea seems a little more reasonable to me compared to people wanting to transfer their consciousness to a computer. Being able to put your entire mind into a computer is still far away, since computers, compared to a human mind, are still pretty limited in storage and processing power. But having some kind of connection between your brain and the internet seems like a closer reality than the latter. With how connected everyone already is, never really disconnected because of their phone, computer, or watch, connecting our minds could be the next step the human race takes before completely diving into a machine to prolong life. Some of his other predictions about where technology would be by 2020 or 2040 may not come around at exactly those times, though. One prediction he made for 2010 was that computers would be essentially invisible, built into clothing and furniture. Seven years later, I don't think we have anything like that; computers may be small, but not so small that we can't see them or wear them in our clothing. So even though I think it could be possible in the coming years to connect our minds to the internet, and various other kinds of technology will be developed, he doesn't have a perfect track record on what advances actually arrive over the years.
The article I read was "Obama Ordered Waves of Cyber-attacks on Iran," and as the title describes, it's about how the United States has been making cyber-attacks on Iran. I had assumed that countries weren't making any large attacks, or really even could, because for the most part measures are taken so we couldn't be hit in a major way, but maybe that's just me being ignorant. In this scenario the government decided to carry out the cyber-attack for a greater good: to slow or stop Iran's progress toward nuclear weapons. The problem is that this could be like opening Pandora's box; if we start developing ways to cyber-attack countries to this extent, then other countries will also have to put more resources into this kind of warfare. I do feel this is an inevitable outcome, with how technology is progressing and everything starting to be on the internet in some way. But as the article said at the end, the United States is the most dependent on computer technology; if other countries take after the United States and make these kinds of attacks in return, it could damage us more than it damages them. So I would think we need to be very careful about this type of warfare, so that the outcome doesn't end with our computer infrastructure collapsing.
One other risk that now seems likely to happen again, if we continue to use cyber-attacks, is that the viruses or worms that are created can escape and infect other systems. The worm created to attack Iran ended up infecting a laptop, and when that laptop connected to other systems, the worm got out into the wild, so to speak. If a more dangerous virus or worm got out, one not narrowly focused on stopping nuclear progress, it could cripple other systems, or even spread to the rest of the world once it reached the internet. There can be many unforeseen errors in code that we may never find, no matter how much time and manpower is spent on these cyber weapons. This type of warfare should be on the list of things countries can't attack with. I would rather we continue any fights with conventional weapons, nothing as devastating as nukes, bio weapons, or cyber-attacks, because I feel the repercussions could be greater than we realize.

The article titled "How to Become Virtually Immortal" talks about a startup company called Eterni.me that is working toward software that will take an individual's information from their social media sites, along with any personal information they are willing to share, and make a virtual self that can live on after they pass away. I see this eventually becoming a reality, mostly because it reminds me of movies, mostly sci-fi movies, where a parent or important character leaves behind some device that is a kind of extension of them, answering questions and helping the main character make the right decisions. People in those stories don't see it as a bad thing, and people will probably find it normal in the future; it just depends how far in the future that will be. The company comes across like this is a solution to living forever, but it's obviously not you yourself that is living forever.
You are putting information into this software so it can mimic you and talk like you, but that's all it is. It houses a database of a person: how they would talk, what they have done, what experiences they can pass on to others. This would in a way help someone carry on knowing they won't be forgotten, but in the end they will still die.
Other companies like Facebook and Google have only taken small steps to preserve a person's data. They have made an option where you can give specific people your personal data, like pictures, videos, and bank accounts, if you ever pass away, so it isn't lost for good. Eterni.me is in some ways doing the same thing, but it is taking a much bigger step: the person is imitated so you can communicate with them, instead of just looking at pictures or old family videos. The company is still small and will need a lot of funding to pull off this huge project, but from what they say, when the website launched and people could sign up for the service, thousands signed up within the first couple of days, and over the next few weeks that grew to more than ten thousand. So it may at times sound creepy to have a virtual avatar who talks and acts like you, but I think this software will grow and become another service that everyone can and will use to help people in the future.

The article "This creepy AI will talk to loved ones when you die and preserve your digital footprint" describes a company working toward an AI that would mimic a person from data obtained through their social media. This seems like a possible future, and until this article I didn't even know someone was working on this kind of software right now. I first learned of the idea from the show "Black Mirror," where a woman uses software like this to "bring back" her deceased husband. Software like this could be created soon, with everyone being on the internet in some way. The thought that you could use software that copies you onto the internet, once you feed it information about how you talk, act, and would respond, could also give people a sense of immortality.
People can go back and forth on whether this is the person or just software, and the majority would say it's not the person. But there is still the thought that once this software is perfected, or at least more refined, it could mimic the individual in question much better. Everyone may still know it's just software, but people could still use it as a means to talk to the "person" after they are gone, or to help develop it to better resemble the person. Would people want to make a copy of themselves through this software? It could be a form of cloning, a way to have yourself continue on after you pass away, which comes back to my thought that people would see this as a form of immortality. If you leave behind a piece of yourself that talks and acts like you, then people could think that in a way you are still around or alive, however they want to think of it. It could also be a good way to preserve the great minds of history: an archive of thousands, or much later even millions, of minds that used this software. I don't think this software could really hurt anyone if it gets developed. From what I understand, one concern is that people could use it for deceased loved ones, which would make it harder for them to move on and keep them stuck in place. But it could also be used just to see whether you could make the software act like you, or how close you could get. It could lead into other kinds of software, though right now I'm not sure what. So people could find problems with it, feeling it's weird or wrong to have, but I don't think the problems are serious enough that it shouldn't be developed.

The article "Humans Marrying Robots?" touches on the idea of marriage with robots. This is a strange thought to me right now. Robots are pretty basic at the moment, and while the article says this is a possibility decades from now, I still think it seems a bit of a stretch.
I would think the big hurdle for this to become a reality would be declaring that robots are living things, or have some kind of right to be in a relationship. Until robots become more human-like and can do complicated tasks on their own, I would have to think we will view them as things and not as people. They are just products that are made and bought for use. To say that a marriage can be made with an object sounds ridiculous.
Robots would need to reach a point of complexity where there is real AI in the robot: it thinks and does things on its own, and acts more human than machine, so that the majority of society can agree robots are more than materialistic things for people to use and discard whenever. Would we then stop treating robots differently in general? Would robots work for a living and live just like humans do now? How far does this go once we start allowing marriage between robots and humans? When people start being intimate with robots, say the sex robots of the future that may not have an AI but can perform basic actions, would that just be considered a form of masturbation? And if so, would more advanced robots fall under the same standard? These are very difficult questions that can't be answered while robots of this caliber are hypothetical, so it's left to opinion and nothing else. For the most part, I would think that if a robot is very humanistic, public opinion would shift closer to the robot having a choice, but even then it would be tough to say for sure how everyone would react to something that literally isn't human. The article talks about how attitudes toward same-sex and mixed-race marriage have changed in the past, but one key difference is that those issues were between humans, not between a human and a robot that many could think of as an object. If it's difficult for society to change its ideals and laws about marriage between people, how can it change when it's not between two people but between a person and, technically, an object? I personally wouldn't mind if robots and humans had relationships, and I think it's perfectly fine, as long as it happens in a period when robots are very humanistic and think for themselves. If they both want a relationship, they should be able to go for it. Who knows, maybe a robot would be better at and more devoted to a relationship or marriage.
A link on iLearn led me to "Self-Driving Cars Will Make Organ Shortages Even Worse," and as the title says, this is an issue that will slowly become more of a reality as self-driving cars become a regular part of life. I don't know the details of how hard it is to get an organ, aside from understanding in a general sense that there is a long list of patients waiting for whichever organ they need. The one thing I didn't think about or know is that the majority of organs used for transplants come from fatal car accidents. So once human error is eliminated by self-driving cars, accidents will slowly decline as those cars become more and more common.
Even from a basic understanding, the answer is obvious: we won't stop the development of self-driving cars just to keep accident numbers high, because that would be crazy. Instead, we need to put more focus and funding toward alternatives for failing organs. It's not perfect, but artificial organs could replace the bad ones, just like in the movie "Repo Men," where a repo man goes around taking back artificial organs from people who couldn't pay the bill for them. A dark movie, but I wouldn't completely discount that from becoming a reality if we have artificial organs. One option that could be implemented right away, and that I really don't see as an issue, or at least don't understand why it would be, is simply having everyone be an organ donor. Maybe that's because I've been a donor my whole life and my whole family are donors too. But why would you deprive someone of the use of your organs if some unfortunate accident took your life? It's not like you're using them. This may sound insensitive to beliefs where the body is considered sacred or something, and I really don't know what the case may be, but I would ask again: you wouldn't want your organs to be used to save another? Another option, maybe closer to being a reality, would be to have stem cell research federally funded again, so we have another way of obtaining organs. One extra perk of that method is that the grown organ would be considered your own organ, not just one that is compatible with your blood and body. This would help a lot. I have learned a little about the process because my brother is on the list for an organ: even if he got one, he would have to change his lifestyle to fit the organ so that it doesn't fail, and it can take a year or two for your body to become accustomed to the new organ. That also adds to the issue of fewer organs for transplants; they aren't a 100% fix when the transplant happens.
Even if you jump through all the hoops, cross all the T's and dot the I's, the transplanted organ may only last ten years or so. My point about this organ shortage is that we need to treat it as a bigger issue now rather than later. You could say the same thing happened with oil drilling: we didn't really focus on fuel efficiency until supply and demand started to get out of control. This issue is much graver, though; lives are at stake. So while we develop self-driving cars to keep everyone safer, we need to see the repercussions of this technology and prepare for what is to come, so that we can further help everyone, one piece of technology at a time.

In the article "Consider ethics when designing new technologies" (https://techcrunch.com/2016/12/31/consider-ethics-when-designing-new-technologies/), Gillian Christie and Derek Yach talk about new technologies and whether they should be developed and are ethical. For the most part I would say that technology will always move forward, and whether it is really ethical or not, it will be made and possibly modified later.
Technologies are forever growing and improving. Self-driving cars will become a reality step by step. It started with cruise control and park assist; now newer cars have autopilot features that let the driver take their hands off the steering wheel while the car drives. That doesn't mean the car will drive you to your destination, just that it will stay in its lane and slow down with the flow of traffic. These aren't completely self-driving cars, but they will lead up to them. Not all technologies are governed by ethics. You have investors and shareholders and whatnot who decide whether to keep funding a project, but that doesn't mean all these people are asking, "Is this right? Will this piece of technology better society, and does the good outweigh the bad?" Most investors who could fund these kinds of ambitious projects are looking to get their investment back with interest; the project has to have a visible goal or outcome. People are good, and you can find kindhearted investors who might go out on a limb, but I don't really see technology being stopped or left uncreated just because it doesn't seem ethical. One example is the creation of nuclear weapons. Creating a bomb that can wipe out a city doesn't sound very ethical. Maybe the people behind it are evil, maybe some aren't; they could seem evil to us, but to them it could have been their only option. Is it even right to have an option like that, or to use it as a means of intimidation? Society may have questions about emerging technology, how it should be used, or whether some of it should be regulated, but that doesn't mean some technologies won't be made or won't continue to grow. We will always be asking questions and won't have black-and-white answers, but that will not slow down the development of technology.
After reading an article titled "The code I'm still ashamed of" (https://medium.freecodecamp.com/the-code-im-still-ashamed-of-e4c021dff55e#.nl64c8785), it was enlightening to think about designing a website or writing some code and finding out it gets used negatively or wrongly. The article is by a man who could tell that a drug company wanted a website tailored to women, with a quiz that would show whether a person met the requirements for a drug recommendation. The quiz would, for the most part, always recommend some kind of drug whether the person really needed it or not. He felt awful about taking part in making the quiz because he understood where its results would lead. What really made it bad for him was that someone died from the drug the quiz was recommending. I don't think he really needs to feel like he was the cause. He might have been able to voice his opinion about the quiz and its dangers, or he could have just quit and not made it, but I think it still would have been made and would most likely have had the same result. In this kind of scenario, I think the situation needed to be handled differently by the drug company; they were the ones who knew exactly what they were doing. He may have understood that the drug would be recommended almost every time, but he was just designing the website and the quiz. At most he could have talked to the company about making changes to the quiz, but beyond that I don't believe any of the repercussions were his fault in any way. In the end he said he resigned, but does that help? The company seems "evil" for basically pushing a drug with dangerous side effects, and his leaving just creates an opening for another designer who will most likely make the same mistakes he made, or not even see an issue with it. I think he could have stayed to make sure this didn't continue to happen.
I don't know anything else about the company, whether they are paying the price, so to speak, for distributing a drug like this, or whether nothing really happened to them. I would have to guess the company had to pay some damages, but I don't think the guy needs to be so ashamed about making this quiz.
Ryan Wagner
May 2017