Reading 13: Hot or Not: Pirating

The Digital Millennium Copyright Act has several provisions in place to prevent piracy and infringement of intellectual property. As detailed in the Electronic Frontier Foundation posting, one such provision is the safe harbor provision, which protects service providers from damages for the actions of their users. This means that a service provider would not be responsible for one of its users pirating intellectual property. The Wired article on the DMCA cites the safe harbor provision as the most beneficial element of the law and crucial to the growth of the internet; without it, websites like YouTube wouldn't be able to exist. The DMCA also addresses piracy through its anti-circumvention provisions, which were meant to discourage pirating and ban black-box devices but in reality did very little to limit them. In fact, a great deal of controversy surrounds the anti-circumvention provisions. The TorrentFreak article describes them as a violation of users' rights, used to "control and spy on people."

I don't think that it is ethical for users to download or share copyrighted material, yet I find myself doing it on occasion. I grew up receiving burned CDs from my older sister with songs she had downloaded from LimeWire. When my sister left for college and it was no longer convenient to get music via these CDs, I turned to YouTube-to-mp3 converters, obtaining music from my Dad's iTunes library, and swapping CDs with friends. It seems somewhat silly to purchase something that a close friend or relative already owns, and in the case of "sampling" material, I would certainly download it illegally before deciding to purchase the CD. While I now have a Spotify subscription and no longer pirate music, I still find myself streaming movies from somewhat-sketchy sites if Netflix or HBO doesn't offer the movie. Even though I think these actions are unethical, I find myself asking if big-name artists and multi-million-dollar movies really need my money. Part of the issue with pirating is that you are so far removed from the creator of the product that it's difficult to feel guilty. Mindy Kaling has a funny quote in which she addresses the age-old anti-piracy question, "Would you steal a car?" She says that she would steal a car if it were as easy as "touching it, and getting it 30 seconds later" while leaving the original car intact. This is the key issue with pirating: it's so easy that it's hard to feel guilty when you do it.

I think that streaming services like Netflix and Spotify do a great deal to address the problem of piracy (even though artists are barely compensated for Spotify listens), but as the LA Times article examines, roughly 20 million people use sites with copyright-infringing music, compared with the ~8 million who pay for a music subscription. I like that services like Netflix and Spotify relieve me of the guilt that accompanies pirating music, but I also see why, given the convenience of pirating, one would still opt out of a paid subscription. I think that the Slate article paints a future in which streaming is preferred: the author had a long history of pirating, and had gone to great lengths to do so, before opting for the convenience of a paid subscription that stored music in the cloud. I think that piracy is a solvable problem, and the author seemed to describe it as such, since pirating now requires extensive knowledge and oftentimes more money than a subscription. Piracy may be solvable, but as long as legal options remain more convenient to use, I am skeptical that it is even a real problem in the first place.

Reading 12: Cars 4

At the moment, it seems that we are on the brink of a revolution in the automobile industry. Tesla has announced that from here on out, its cars will have the hardware for self-driving capability. The company claims that these self-driving cars are far safer than a human driver, with sensors and cameras covering areas that a person would not be able to see while driving.

There are many arguments for self-driving cars beyond the Tesla article's explanation of the abilities self-driving cars have that humans do not. As the Reuters article explains, government officials have backed the expansion of self-driving cars, citing reasons like innovation and safety. Safety is a critical argument for self-driving cars, as road crashes and deaths are on the rise and 94% of crashes are caused by human error. An automated car would seemingly reduce these numbers drastically.

Despite these arguments in favor of self-driving cars, there are many reasons I remain skeptical of them. While advocates make an argument about safety, the Ars Technica article explains that there may not be any way to prove how safe a self-driving car is. And while a self-driving car can seemingly avoid accidents humans would not be able to, humans are not quite used to how these cars drive (like a computer). The Seattle Times article explains why self-driving car accidents, contrary to intuition, are actually quite common. A human driver often relies on intuition to make decisions, and even pushes the limits of what is legally acceptable. An autonomous car, on the other hand, would be overly cautious. While a car acting cautiously is not necessarily an issue when surrounded by cars acting the same way, I think this becomes problematic when self-driving cars start sharing the road with human drivers.

There are a number of social dilemmas an autonomous vehicle might have to face, just as a human driver would. The Quartz article examines how computers lack intuition: the ability to look at another person, such as a pedestrian, and know what they are thinking. Driving requires a great deal of this type of thinking, like predicting whether another driver will yield or whether a car sees you. This lack of intuition could put self-driving cars in a number of dangerous situations, and I do not know how a programmer could address it. Obviously, the car ought to try to avoid accidents or any type of injury, but when these situations inevitably happen, there remains the question of who is liable. If a human driver is involved, the answer might come down to whether or not that car was driving within the law, regardless of intuition. If a self-driving car is at fault, then I think the makers of the car ought to be held responsible in some way. This situation, however, could be so complex that I am skeptical of even these conclusions.

I think that the largest social and economic impact of self-driving cars would be the number of people put out of work by them. As we learned last week in our discussion on automation, the act of driving employs 3 million people in the US, and the loss of these jobs would leave a large number of workers unemployed. Rather than easily giving the green light to self-driving cars, as the New York Times article suggests has been happening, I think that government officials ought to be warier of their effects. Traffic and accidents might be reduced, but the implications stretch far beyond this.

I would not want a self-driving car. I truly enjoy the act of driving, and the ability to feel in control when I sit behind the wheel. At the moment, though, I do not think the technology for self-driving cars is quite finished. Perhaps once the technology is improved and using the cars becomes safe and the norm, I may change my tune. Until then, I am perfectly fine driving a car myself.

Reading 11: Automatic, Supersonic, Hypnotic, Funky Fresh

I think the Luddites were right to a certain degree about technology and its effect on their jobs. As the BBC article explains, the Luddites were textile workers in the early days of the Industrial Revolution who revolted against fast-growing machinery by destroying it. Their fear was that this automation would eventually put them out of work. These fears were not far off. Since then, automation and the displacement of workers have been favored in order to increase productivity and cut costs. The impacts of automation can be seen throughout a number of industries: the Atlantic article "A World Without Work" examines how the steel industry in Youngstown, Ohio, was affected by the move toward automation, with far-reaching economic and social effects that came to be known as regional depression.

The Luddites were certainly right to fear losing their jobs, and today the fear of unemployment due to automation is also well-founded. However, in the case of the Luddites and of automation fears in some industries today, it's hard to fully sympathize when consumers can benefit so much from automation. Obviously, the changes that came along with the Industrial Revolution are essential to the productivity and functioning of economies and societies today, and the Newsweek article examined several other benefits of automation, specifically through the example of self-serve gas pumps. Automation often provides solutions that are so much cheaper and safer that it is sometimes difficult to advocate against it.

I'm really not comfortable with artificial intelligence taking over work normally performed by humans. I think it's warranted only when artificial intelligence proves to be far safer and far more efficient than the alternative. In service sectors such as the food industry, the benefits of artificial intelligence do not outweigh the benefits of having a person performing those tasks. In particular, I think that "human" activities have no place being left to artificial intelligence, despite any benefits AI might provide. AI is meant to make things more affordable and more efficient, but I simply do not think AI will reach a level where it can outdo the free thinking and creativity of an actual person.

Developing solutions to the massive loss of jobs that accompanies automation is incredibly difficult. Some people, particularly large companies in Silicon Valley, are proponents of a Universal Basic Income, but the Bright Magazine article pointed out the elitist nature of such an idea. In addition, I am skeptical of the efficacy of a universal income: the Atlantic article "A World Without Work" examines the paradox of work, in which "many people hate their jobs, but they are considerably more miserable doing nothing". I do not think that a Universal Basic Income would solve this problem. But then again, it's difficult at this point to identify what, if anything, can best mitigate the negative effects of automation. I think that companies ought to be incentivized to keep workers instead of automating, and Bill Gates's suggestion to tax machinery doesn't seem that crazy. I also think that more ought to be provided to those who do lose their jobs to automation. This could come in the form of furthering their education, or teaching them how to use, fine-tune, and maintain the AI that put them out of work.

Ultimately, I am not convinced that automation is a good thing, simply due to the number of people whose livelihood is dramatically affected by automation. However, it’s also clear that automation is nowhere close to stopping. Hopefully someday soon, policies will be in place to mitigate the effects of automation, and to encourage companies to opt for a human presence when they can.

Project 03: WikiLeakin’

Having listened to the podcasts, I was not too surprised by the revelations in Vault 7. I don't really think there's much that can surprise me about government surveillance coming on the heels of Snowden's leaks a few years ago; I even think those revelations weren't *that* surprising. Perhaps it is simple-minded of me, but I've always assumed that my phone and computer were capable of being hacked, and I had watched enough movies to assume the government would be listening. This lack of surprise isn't an endorsement of the surveillance, though. I firmly believe in the right to privacy; I am just not too surprised to learn how little the government believes in it. With this in mind, I also do not want WikiLeaks to continue exposing secrets. Not only do I think that ignorance is bliss, I also think that there is a reason things are classified, and leaking them for the world to see can be incredibly dangerous.

I do not think it is difficult to separate the message from the messenger. WikiLeaks has leaked such a variety of private information spanning different political groups that I don't think they show any real partisan bias in what they leak. But even though I can acknowledge the documents leaked by WikiLeaks and believe in their truthfulness, I still do not necessarily trust the organization, because I ultimately do not agree with its motivation: to leak in order to encourage transparency.

I think that whistleblowing is the ethical thing to do when it would not endanger another person, and it is necessary when it would prevent someone from being endangered (as in the case of the Challenger explosion). That being said, there are times to remain silent. If you work for an organization like the CIA, you are bound to keep your work secret, and oftentimes leaking information could endanger another person or group of people. Whistleblowing can be reckless in these cases.

Transparency, of course, is desirable, but I don't think it is likely to come about. Ultimately, people want things to be private because they would be ashamed if others knew. Or, in the case of the NSA and CIA, things remain secret to protect safety and to protect against public outrage. Transparency would be a great thing to have: we could make more informed decisions and prioritize our values. However, I do not think the world will become more transparent anytime soon.

Reading 09: Nothing But Net (Neutrality)

Net neutrality means that internet service providers have to deliver all online content in the same way. An ISP cannot give preferential treatment to any particular content, and it cannot ask content producers to pay a higher rate so their content is delivered faster. As the USA Today article explains, "an Internet service provider will be prohibited from slowing the delivery of a TV show simply because it's streamed by a video company that competes with a subsidiary of the ISP."

The IEEE Spectrum article explains that a recent proposal attempts to eliminate the part of the act that classifies ISPs as common carriers and thus mandates that they deliver all online content in the same way. The proposal would relax these regulations on the argument that they have hindered investment. The AEI article advocates for the proposal by arguing that net neutrality is crony capitalism and does not actually benefit consumers; instead, content producers argue for net neutrality because it benefits them most.

A number of arguments exist in support of net neutrality. One is that net neutrality treats the internet like a public utility, similar to how consumers got landline telephone service (as the USA Today article explains). The Save the Internet article argues that the internet currently allows for so much innovation by giving everyone a platform; losing net neutrality would mean internet service providers control which websites succeed, by blocking content they do not like or applications that compete with their own offerings. The article suggests that websites belonging to those whose voices have historically been shut out could be blocked. I think this particular argument is pretty weak: it seems far more likely that an ISP would block a competitor's offerings than a specific marginalized group's sites.

Reading through these articles, I was initially pro net neutrality, but I have to admit that the AEI article swayed my thinking. I think the article correctly identified the elements of crony capitalism among advocates of net neutrality, but ultimately I believe the potential risks of eliminating net neutrality are far worse than the risks of keeping it. Perhaps advocates are overblowing those risks, but the fact remains that telecom providers would have a dangerous amount of sway over decisions that affect consumers and small businesses.

In implementing net neutrality, I think that the simple standard in place today, where an internet service provider has to provide the same quality of service and the same rates to all customers, is enough. This would mean that the zero rating AT&T practices, mentioned in the IEEE article, wouldn't be allowed, since it gives preferential treatment to consumers who use services provided by DirecTV, which AT&T owns. Net neutrality raises concerns about over-regulation and burdening corporations, and I really do not know how to address these concerns aside from pointing out that telecom providers hold a near-monopoly. Any regulation enforced should protect consumers, rather than the large and successful companies that only stand to benefit from eliminating net neutrality.

Reading 08: Corporate Personhood and its Implications

Corporate Personhood

Corporate personhood means that a corporation has a legal identity separate from its shareholders. Over the course of history, corporations have been granted various rights typically reserved for individuals (including Fourteenth Amendment and First Amendment protections). Paul Moreno's "Corporate Personhood's Long Life" traces the origins of corporate personhood as far back as Justice John Marshall, and Kate Cox's "How Corporations Got the Same Rights as People" details the historical precedent of companies being granted such rights. The essential idea behind corporate personhood, and where many people find fault with it, is that a corporation is a collective group of individuals and therefore must be granted many of the same rights as individuals. This includes freedom of religion: the company Hobby Lobby is able to withhold contraceptive coverage from employees on the basis of the owners' religious beliefs.

Corporate personhood has a great many ramifications. Legally, a corporation cannot be incarcerated the way an individual can, so it can be more difficult to punish abuses of power, as seen in the actions of several banks leading up to the housing crisis of 2008. (Kent Greenfield's article "If Corporations Are People, They Should Act Like It" points out the benefits of punishing corporations instead of individual people by examining the outcomes of an oil spill in the Gulf of Mexico. Searching for culprits would have proved very difficult without the ability to enforce accountability, which corporate personhood provides.) Socially, corporate personhood can have ramifications for ordinary people. The Hobby Lobby case raises the question of whether a company can opt out of any law it sees as incompatible with its religious beliefs. From the Hobby Lobby case alone, third parties may suffer: female employees who work for the corporation and do not hold the owners' religious beliefs are nonetheless denied contraceptive coverage guaranteed by the Affordable Care Act. Ethically, corporate personhood could have ramifications simply because corporations seemingly have unchecked power in many matters. For instance, companies can donate essentially unlimited amounts of money to political campaigns. If this is the case, it appears that the political system is available for purchase by the highest bidders.

Case Study: IBM and the Holocaust

I do not think that IBM was ethical in doing business with Nazi Germany; in fact, there were a number of companies that I think should not have done business with the regime. Jack Smith's article details how IBM's Hollerith machine tracked census information to identify Jewish populations across Europe. The matter would be grayer had this use of the machine been distant and something IBM wasn't aware of, but this was not a distant transaction: everything was leased and regularly maintained by IBM technicians, and the chairman and CEO of IBM regularly visited the Reich and occasionally met with Hitler. I don't think that companies should always be held responsible for immoral uses of their products, but this case demonstrates a scenario in which a company was actively maintaining the technology used for unethical ends. It's unlikely that the technicians were oblivious to what the machine was being used for. In situations like this, where a company has reason to suspect its products are being used unethically and still does not act, I think the company should be held liable.

If corporations are afforded the same rights as individual persons, they should also be expected to meet the same ethical and moral obligations; I don't think that business affairs trump morality. In a state as horrific as Nazi Germany, IBM's actions are not very defensible. The New York Times article on the book points out that it wasn't until much later in World War II that people became aware of the Nazis' very serious attempt to exterminate the Jewish population. Why should companies be expected to foresee the future? To this point, I would argue that the Hollerith machine was consistently maintained by IBM technicians who were likely aware of its usage, and it is hard to believe that the IBM chairman who frequented Germany was completely unaware of the Nazis' motivations. This alone ought to be enough to hold the company responsible for the use of the machine. I think it is right to hold IBM responsible for how the Hollerith machine was used by Nazi Germany, and, by extension, to expect other companies to act ethically and refrain from doing business with unethical groups.

Reading 07: Information For Sale

Advertisements are everywhere. On almost every article I read for this blog post, ads sat in the sidebars or broke up the text. Though I like to think I am an exception to the subliminal effects of advertising, I know that I'm not. I'm sure that when I buy products, book travel plans, or do really anything, my decisions are affected by the advertisements that cover social media sites, magazines, and public transportation. The fact that some of these ads are personalized for me contributes to this even more.

I do not really think it is unethical for companies to gather information on their customers and data mine it. If I freely choose to use a company's services, I don't have an issue with it collecting data about the specific services I used. In fact, I normally enjoy when websites are personalized for my use! It's nice that YouTube knows I'm looking for the most recent John Oliver video, or that ads for books I might be interested in appear on my feed. It's fascinating that Spotify is trying to understand more about users based on the music they listen to, and I think such targeted ads are harmless. Sure, there might be times when data collection and mining seem creepy, such as the "uncanny valley" referenced in the Atlantic article "Data Doppelgangers and the Uncanny Valley of Personalization", which explains the unsettling feeling some of this technology can give us. Personalized ads might sometimes miss their target in the generalizations they make about consumer habits, but ultimately I don't think targeted ads are too invasive. They are simply a consequence of a large digital footprint and of relying on certain services, and it's within a company's right to learn more about its consumers based on spending habits. Complete privacy simply isn't a realistic expectation in matters like these, especially when one considers that free services like Facebook make money off of ads.

There is, of course, a line that can be crossed with data collection and mining. The Kaspersky article "The scary side of big data" touches on some of the more worrisome applications of big data. What happens if an insurance company has information on your spending habits? The author suggests that consumers reserve cash for some of their "refrigerator purchases" so that an insurance company won't discriminate against someone who occasionally opts for unhealthy purchases, like an order of fries. Other concerning applications of data mining involve banks and employers: banks can track spending behavior to determine whether a customer is suitable for a loan or should pay higher interest rates, and employers can use collected data to determine which employees are likely to leave after a short period of time. These examples are, I think, clear invasions of privacy. Instead of trying to sway a consumer's purchasing habits, they can discriminate against certain consumers based on past, even subconscious, habits.

Ultimately, while I think most data mining is tolerable, I think that companies ought to be held accountable for true invasions of privacy.

Reading 06: Wiki-wiki-word

Edward Snowden is, as Vanity Fair's article introduces him, certainly the most important whistle-blower of modern times. He leaked enormous numbers of NSA and other classified government documents. Among the troves of information revealed in these leaks were evidence that the NSA had been collecting information, unprompted, on citizens with no connection to wrongdoing; evidence of surveillance of foreign leaders; and information regarding government "back doors" into social media and other applications for collecting information on users. Essentially, it became clear that information long taken for granted as private was in fact available at the NSA's fingertips.

I truly do not know whether to call Snowden a hero or a traitor. A few weeks ago, I would perhaps have leaned toward calling him a traitor: he had worked for the NSA and CIA, where I think it's generally assumed that the work you do should not be publicized, for safety reasons. However, after reading the Wired article that examines Snowden's motivations behind the leaks, I understand a bit better that his purpose was not simply to be subversive and undermine the US government, but rather to inform ordinary citizens and hold these government agencies accountable for their unethical actions. The article explains that Snowden worked for various government agencies and did not think too much about unethical practices until he felt things had gotten out of hand. This took several forms, one of which was a cyberwarfare program called MonsterMind, which automated the process of detecting the beginnings of a foreign cyberattack. Once an attack was found, MonsterMind would automatically fire back, regardless of the legitimacy of the source. In addition to this discovery, Snowden was appalled to hear the director of national intelligence blatantly deny the NSA's collection of information on millions of Americans. These factors, in combination with many others, spurred Snowden to leak the documents.

Is what Snowden did ethical and moral? No, I do not entirely think that going straight to the media and leaking this sensitive information was ethical or moral, simply because doing so can pose security and safety risks to the United States. At the same time, I don't think that the NSA's actions were ethical or moral either. It is personally challenging to take a stand on this issue because I am torn. On one hand, I firmly believe in a person's right to privacy: someone ought to be able to freely use their phone without fear of being tapped. On the other, I recognize that sometimes privacy ought to be compromised for the sake of security. Even so, I can't envision a practical arrangement in which someone's privacy is justly compromised for security's sake. When people's privacy is compromised, either specific groups are unfairly targeted or everyone gives up their right to privacy, and if either is the case, what kind of right even is privacy?

Snowden’s revelations didn’t really impact my views on the government. Maybe it was my naivete or exposure to too much post-apocalyptic teen fiction, but I had always grown up assuming the government could get its hands on any of my information, and I never even questioned the ethical nature of doing so. Thinking about this issue now, though, I realize that it is far more complex than that.

Project 02: Role Models

Nowadays, it is certainly less challenging for women and minorities to break into STEM fields than it was in Ada Lovelace's time. That being said, there are certainly still obstacles preventing women and minorities from doing so. The sheer difference in numbers between men and women in the field is itself a barrier: it creates a notion that the field is not a place for women, since if it were, more would be working in it. This alone likely deters women from entering.

We've spoken in class about a variety of the barriers preventing equal numbers of men and women in engineering. Some people attribute the gap to biological differences that simply won't change, but I think the issue is larger than that. Cultural factors influence boys and girls at a young age: young boys are encouraged to build with Legos, while young girls are given dolls (although companies such as GoldieBlox are working to change this). Factors this early can have long-lasting influence. Of course, the disparity between men and women in the workforce stretches deeper than toy choices, but this is just one example of how socialization has powerful consequences for decisions reached in adulthood.

Having role models is incredibly important. Growing up, I had numerous role models, many of them female, and to this day I have a multitude of people I look up to. Some of the most prominent role models in my life were my parents, my sisters, my teachers, and even some authors I admired. The role model who motivated me to pursue Computer Science, though, was definitely my dad. He always knew that I was mathematically inclined and that I had a knack for and interest in science. He encouraged me to challenge myself with tough math problems for fun and cultivated my interest in the natural world. At the same time, he recognized that I also had a love and passion for history and reading, and cultivated that further through family vacations and book recommendations.

Ada Lovelace certainly lived an admirable and interesting life, but her story didn't particularly resonate with me. I feel like it wasn't especially difficult for her to enter scientific fields, given her extensive education and access to intellectuals; a less well-off woman of that time would have faced countless barriers. However, I certainly admire the creativity and clarity Lovelace communicated through the article that contains the first computer program. She was a pioneering woman in the field of programming, and that alone is enough to make her a role model.

Reading 05: Disastrous Mission-Critical Systems and Their Implications

The Therac-25, a mission-critical system, proved disastrous and deadly, and its failures can primarily be attributed to software bugs. THAT'S CRAZY! I sometimes forget that software is the driving force behind literally everything I use. Like, my car. It's possible that some bug could one day act up while I'm driving. Even the glitches on my phone are sometimes semi-dangerous, like when I'm using GPS in a completely unfamiliar place and the app crashes. It is frightening to read about the Therac-25 situation and wonder if something similar could happen with the software I use on a daily basis.

The Therac-25 disaster refers to a series of incidents in which a radiation therapy machine delivered excessive amounts of radiation to the patients undergoing treatment. In the aftermath, four patients died and two were seriously injured. Jamie Lynch's article traces two root causes of the disaster. First, the warning messages displayed when something went wrong did not indicate severity: rather than conveying urgency, or asking verifying questions when critical keys were pressed, the messages simply said "Malfunction 54". Second, the software was never reviewed by a body outside the one that designed it, and it was tested only as a whole system rather than at the individual unit level.
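To make that first root cause concrete, here is a minimal sketch (in Python, with entirely invented names, limits, and messages; the actual Therac-25 software was nothing like this) of the difference between a "Malfunction 54"-style message and one that states the hazard and demands explicit confirmation:

```python
# Hypothetical sketch: a cryptic malfunction code versus a warning that
# states the hazard and forces the operator to confirm intent.
# All names and numbers here are invented for illustration.

MAX_SAFE_DOSE_CGY = 200  # invented safety limit for this sketch

def warn_cryptic(code: int) -> None:
    # Therac-25 style: the operator sees a bare number with no context
    # and can dismiss it with a single keypress.
    print(f"Malfunction {code}")

def warn_descriptive(measured_dose: float) -> bool:
    # Safer style: say what happened and how bad it is, then require
    # an explicit, unambiguous confirmation before anything proceeds.
    print(f"CRITICAL: measured dose {measured_dose} cGy exceeds the "
          f"{MAX_SAFE_DOSE_CGY} cGy safety limit. Treatment halted.")
    answer = input("Type 'OVERRIDE' to proceed anyway, anything else aborts: ")
    return answer == "OVERRIDE"

if __name__ == "__main__":
    warn_cryptic(54)                   # conveys nothing about the danger
    proceeded = warn_descriptive(250)  # conveys severity and demands intent
    print("Operator chose to proceed:", proceeded)
```

The point isn't the specific wording; it's that the second message makes the severity impossible to miss and makes proceeding a deliberate act rather than a reflexive keypress.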

The Therac-25 incident demonstrates the worst-case scenario for software developers working on safety-critical systems: a person dies because of a software bug. The challenge of developing safety-critical software is that everything must be considered, and copious amounts of testing must be done to ensure that no bug can ever be catastrophic. Developing this type of software is likely stressful and high-pressure, but it is also necessary. Software is proving to be more reliable than human action, and thus a safer option (for instance, software that assists doctors in surgeries). Still, upon reading about the Therac-25 incident, a part of me wonders exactly how necessary software solutions are, even with their enormous benefits. In this case, hardware safety interlocks were replaced with software controls, a solution that was likely cheaper and (in theory) more reliable and less clunky. However, a bug that once would have been mitigated by the hardware now had nothing preventing it from manifesting.
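To illustrate what a software interlock and the unit-level testing Lynch calls for might look like, here is a minimal sketch (again in Python, with invented names, positions, and limits; this is not the actual Therac-25 design):

```python
# Hypothetical software interlock: refuse to fire the beam unless the
# machine state and requested dose are both verified. Names and limits
# are invented for illustration.
import unittest

MAX_SAFE_DOSE_CGY = 200  # invented safety limit for this sketch

def beam_may_fire(turntable_position: str, requested_dose: float) -> bool:
    """A software stand-in for a hardware interlock: every condition must
    pass independently, and any doubt keeps the beam off."""
    if turntable_position not in ("xray_target", "electron_scanner"):
        return False  # beam path not verified, refuse to fire
    if requested_dose <= 0 or requested_dose > MAX_SAFE_DOSE_CGY:
        return False  # dose outside the safe range, refuse to fire
    return True

class BeamInterlockTest(unittest.TestCase):
    # Unit-level tests of the kind Lynch's article says the Therac-25 lacked.
    def test_blocks_unverified_turntable(self):
        self.assertFalse(beam_may_fire("unknown", 100.0))

    def test_blocks_excessive_dose(self):
        self.assertFalse(beam_may_fire("xray_target", 10000.0))

    def test_allows_valid_setup(self):
        self.assertTrue(beam_may_fire("electron_scanner", 150.0))

if __name__ == "__main__":
    unittest.main()
```

The value of this structure is that each safety check can be exercised and verified in isolation, which is exactly what testing the Therac-25 only as a whole system made impossible.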

Overall, there were a multitude of causes of the Therac-25 disaster, as Adam Fabio's Hackaday article points out. While software directly caused the disaster, a number of factors made it possible: he points to the lack of hardware interlocks, timing analysis, unit testing, and fault trees as the more serious causes of the incident. When a situation like this occurs, the question arises of who is at fault. I think it is the software developers and systems engineers who designed and tested the software. While outside groups (an agency, perhaps?) ought to test the code as well, the creators of the code are ultimately the ones who understand it best and who should be accountable when it backfires.