Ethics aren’t easy

A professional’s discussion on protecting users

When I conducted my first usability test, it was easy for me to predict what the participants would have problems with and what they would like about the website.  While some results were surprising, their general impressions of the website were exactly as I had imagined.

User Experience, the magazine of the User Experience Professionals Association, published an article about a usability test where the researchers knew what results to expect, and knew the pain it would cause the participants.  The test was to see whether people with disabilities could use a government kiosk without pain, but the process was painful even for the testers.

“Asking a person who tires easily or experiences pain when performing manual tasks to click a button a few thousand times would cause, at the very least, significant discomfort and likely lasting physical pain. When we were trying out the product to collect expert timing and button press data, our hands hurt! Projects like this raise important questions about research ethics.

We asked colleagues with disabilities to give us their feedback on using the product. Based on their responses, it was obvious that running a usability evaluation was not going to be reasonable.”

Although testing usability for people with disabilities seems like an ethical idea, this particular test would only have hurt them, and would therefore have been unethical.

Instead of conducting an unethical test, the tester should tell the client that the product has issues that must be addressed first.  Although in the commercial world this could mean losing a client, it is a necessary responsibility to put ethics and safety first.

**Featured image provided by Internet Archive Book Images, with no known copyright restrictions


Keep Droning On

The abilities of robots are improving every day. From video drones flown by civilians to war machines that can engage and attack humans automatically, there are ethical questions behind any machine that can do what a human can't do alone.

An article in The Atlantic explained that robots are used for national security to complete dull, dirty and/or dangerous jobs. Whether for surveillance or disassembling bombs, robots always act with “dispassion”.  Even in the heat of war, a robot cannot become fatigued, hungry, angry or distracted, and will perform the same regardless of conditions that may be incredibly stressful to humans.

However, robots are rarely fully autonomous, and can generally at least be overridden by humans.  But if a robot has information a human can't see, such as night vision, who should make the decision?  If a robot can identify that a civilian is in danger, but a human operator authorizes an attack anyway, should the robot be able to veto the command, or should it be required to follow the human instruction?

Ethically, the robot would be doing right by saving a human life, but giving a robot the ability to override human commands may be the slippery slope that movies like I, Robot warn of. If robots aren't coded to always obey human operators, the potential for them to act uncontrollably is far greater.

While automation encourages dispassion and therefore strong ethics, these ethics are still programmed by the robot's creator, and can be influenced by the programmer or whoever funds the project. Two robots with the same function could make different decisions if one is programmed by a computer scientist in Japan and the other is built by the U.S. Navy. Both may make ethical decisions in their home countries, but if they were used abroad, their ethics may not match that society's.

This is currently happening in the United States as Native Americans fight to protect their lands and water from an oil pipeline.  A standoff between police and protesters at Standing Rock has been going on for over six months.  Protesters have been tear-gassed, attacked and shot with rubber bullets, so some are using video drones to document what they believe to be unfair treatment.

The drones should not raise any ethical issues because they are simply recording the actions of police, yet they have been attacked with rocks and even shot at by police officers. Although the surveillance merely encourages transparency from the government and does not pose a threat to officers, these public officials seem to believe they are entitled to privacy while on duty.

If they were private residents on their own property, I would completely understand shooting down the drones, but as on-duty government employees, it only seems as if they have something unethical to hide.

**Featured image provided by Tomwsulcer under CC0 1.0 Universal Public Domain

Data collection can save lives

but people are worried it could ruin lives too.

Companies often use algorithms to optimize users' experiences on their websites.  Sites like Facebook collect user data to show people content they are more likely to be interested in.  Facebook can also tailor advertisements to target audiences, and can therefore earn more advertising money.  In this seemingly win-win situation for Facebook and the user, who is losing?

According to a report from the French think tank Forum d'Avignon, even the consumers benefitting from this technology "are more and more suspicious about this massive capture of (their) data."

Data capture does not just record which sites you visit frequently; it also builds an image of who you are.
“The depth (and the intimacy) of personal data collected without our necessarily being aware of it enables third parties to understand our identity, our private and cultural past, present and future lives, and to sell them.”
Data collection is still evolving because businesses are getting better at selling their understanding of their audiences to advertisers. Most social media sites even allow users to promote their own posts.  When the Appalachian State University Men's Ultimate team promoted their account at the beginning of the year, they were able to select the age range and location of the audience they wanted to reach; for big businesses, the targeting options are much more detailed, and therefore worth more.
With companies profiting off of human identity, and other groups able to push their products or ideas to select audiences, this raises a lot of ethical questions for consumers, businesses, and governments.
In the first presidential debate of the 2016 U.S. election, Hillary Clinton called for an "intelligence surge" to protect citizens from homegrown terrorists. But homegrown terrorists are citizens too, so an intelligence surge would mean that more and more ordinary citizens' information would be tracked by the government.
According to a New York Times article, "The United States already collects and shares more intelligence than ever." Some of citizens' information rights are still protected, though.
“And the F.B.I. is not allowed to conduct open-ended investigations without evidence of criminal wrongdoing. Nor is it allowed to collect intelligence solely related to people’s views. Admiring Osama bin Laden or the Islamic State or expressing hatred for the United States is not a crime.”
Without these protections, the U.S. would not be defending the independence and democracy that it was founded on.  There must be a balance of safety and freedom.

In Europe, during Forum d'Avignon, 500 participants came to the conclusion that we must build a society that is "aided – not driven – by data." To do this, they believe data collection must balance "research, economic and social development, and the protection of personal data."
This idea recognizes that not all personal data can be protected, or else we wouldn't be a society, just a group of completely independent people.  Our individual stories and information are what make us human, and sharing them with others is part of the human experience.  By balancing research, development and protection, society can interact safely and fairly in the online world.
Data collection can not only streamline our online experience, but can also help people in the real world.  After the devastating earthquake in Japan in 2011, Facebook developed a feature to track who is safe during disasters.  Now families and friends of those in an affected area can quickly see if their loved ones are accounted for.
Data collection can even save lives.  Computer scientists and social workers are collaborating at the University of Southern California to create an algorithm that can "identify the best person in a given homeless community to spread important HIV prevention information among youth, based on a mapped-out network of friendships," according to an article on Mashable.
Reportedly 60% more effective at spreading information than word of mouth, this data-driven algorithm will help deliver basic health education, such as "the importance of wearing condoms" and where and how to get tested for HIV.
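
The Mashable article doesn't spell out the math, but the core idea, choosing a few well-connected "peer leaders" so that information reaches as much of the friendship network as possible, can be sketched in a handful of lines. The example below is a simplified, hypothetical greedy-coverage version written only to illustrate the intuition; it is not the researchers' actual algorithm, and the names and network are invented.

```python
# A minimal, hypothetical sketch of peer-leader selection in a friendship
# network: greedily pick the people whose friendships reach the largest
# number of not-yet-reached community members. Illustration only.

def pick_peer_leaders(friendships, num_leaders):
    """Greedily choose leaders whose friends cover the most uncovered people.

    friendships: dict mapping each person to the set of their friends
    num_leaders: how many people to train as information spreaders
    """
    covered = set()
    leaders = []
    candidates = set(friendships)
    for _ in range(num_leaders):
        # Pick the candidate who would newly reach the most people.
        best = max(
            candidates,
            key=lambda p: len((friendships[p] | {p}) - covered),
            default=None,
        )
        if best is None:
            break
        leaders.append(best)
        covered |= friendships[best] | {best}
        candidates.discard(best)
    return leaders, covered


if __name__ == "__main__":
    # Invented friendship network, for illustration only.
    network = {
        "Ana": {"Ben", "Cal", "Dee"},
        "Ben": {"Ana", "Cal"},
        "Cal": {"Ana", "Ben", "Eli"},
        "Dee": {"Ana"},
        "Eli": {"Cal", "Fay"},
        "Fay": {"Eli"},
    }
    leaders, reached = pick_peer_leaders(network, num_leaders=2)
    print("Train as peer leaders:", leaders)
    print("People reached:", sorted(reached))
```

Greedy coverage is just a stand-in for the intuition; the real system presumably has to handle complications such as uncertainty about which friendships exist and whether trained youth actually pass the information on.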
Education is key to preventing HIV from spreading among homeless people, and perhaps this technology can also be used to help teach homeless people skills to get off the streets.  Watch the TEDx Talk below to learn more about educating the homeless.

**Featured Photo by Michael Maggs, Wikimedia Commons. 

Does “No Mean No” in Virtual Reality?

Virtual reality is becoming more and more prominent,

and with that comes a new virtual world. I can even walk into Appalachian State University’s library and test out some great VR headsets for free.

As the technology improves, the VR world is appearing more realistic each day. But in order to truly trick our brains into believing in this virtual world, we will need other senses to be simulated. While it is relatively easy to add sound, touch would add an incredibly realistic element.

Imagine being able to feel the grass of a field against your legs, the wind in the air, or reaching out to hold the hand of the person you're walking with. Although these would all be artificial sensations, combined with the sight of the virtual world they would be incredibly convincing; our brains would believe we were actually experiencing these things.

Perceptual psychology has taught us that our brain can fill in missing sensory information by relying on patterns.  Those patterns have flaws, however, and they are flaws we can take advantage of.  To add the sense of touch to virtual reality, we wouldn't need to reproduce the exact sensations, only sensations that are consistently inaccurate.  If the sensations all follow a similar pattern, with no extreme outliers, our brain will be fooled into filling in the rest of the information for us.

Essentially, if the entire virtual world is wrong by the same amount everywhere, our brain will perceive it all as correct, and will therefore truly take us into the virtual world.
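
To make the "wrong by the same amount" idea concrete, here is a small, purely illustrative sketch; the numbers are invented and not drawn from perceptual research. The assumption is that touch feedback scaled down by one consistent factor preserves the pattern the brain expects, while feedback with erratic, outlier-prone errors breaks it.

```python
import random

# Invented "true" touch intensities (0-1 scale) a hand might feel while
# brushing through grass. Illustration only, not perceptual data.
true_touch = [0.82, 0.75, 0.90, 0.68, 0.80]

# Consistent inaccuracy: every sensation rendered at 60% of its true strength.
consistent = [t * 0.6 for t in true_touch]

# Inconsistent inaccuracy: each sensation gets an unrelated random error.
random.seed(1)
inconsistent = [t * random.uniform(0.1, 1.5) for t in true_touch]

def error_spread(rendered):
    """How much the rendered/true ratio varies across sensations."""
    ratios = [r / t for r, t in zip(rendered, true_touch)]
    return max(ratios) - min(ratios)

# A spread near zero means every sensation is off in the same way, a pattern
# the brain can adapt to; a large spread means the errors have no pattern.
print("consistent spread:  ", round(error_spread(consistent), 2))   # 0.0
print("inconsistent spread:", round(error_spread(inconsistent), 2))
```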

Issues arise, however, when the person you were holding hands with in the virtual field is actually presidential candidate Donald Trump, who decides to "grab a pu**y" and virtually rapes you.

When our brain is tricked into believing this is a real world, there must be rules and laws in that world to protect users, especially when people aren't who they are in real life.  We have already seen the issues with child predators and other criminals hiding behind the anonymity of the internet.

Now, imagine a free world where these people could disguise themselves as anyone they wanted to be, in any situation.  While this technology could be incredible for the good people in the world, it also opens up an entirely new world of problems.

Zoltan Istvan, a Transhumanist U.S. presidential candidate, summed this up perfectly for the Australian publication Vertigo:

“We’re approaching an age when we’re going to be rewriting a huge amount of the rules of what it means to either harm somebody, or hurt somebody, or even scare them or bother them. Clearly the controls, the security systems and the anti-hacking software will have to be much better.”

I wish we could all explore the virtual world safely without rules and regulation, but as the technology becomes more realistic, that’s simply not possible.

**Featured image owned and copyrighted by Marina Noordegraaf under CC BY-NC-SA 2.0.

New World Ethics

Do you ever feel like you’re being watched?

With recent advancements in wearable technology, users can gather more and more information about themselves and the world around them.  From GoPros to Google Glass, users can collect photos, video, audio, location and health information.

Sometimes this information is stored locally, but many devices connect over Bluetooth, Wi-Fi or cellular data, and many must be linked to accounts such as Google.

A lot of information is stored online to increase user satisfaction. For example, FitBit stores its users' information in the cloud so that they can access it on their phone or desktop.  FitBit makes it clear in its terms and conditions that it does not sell or distribute personally identifiable information (PII), except under "limited circumstances."

Although this sounds great, I dug a little further, and the fine print explains that your PII can be disclosed to others in a "sale of assets."  So if the company is struggling, FitBit can sell your email, address, name and other information and just send you a notification.

Thankfully, FitBit only holds basic information like that. While it could sell access to your Google account, and potentially any information stored there, these are small stakes compared to what other private information could be made public.

What about when wearable technology records high-quality audio and video? This potentially leaves users vulnerable to being tracked or watched by the company, by anyone it "sells assets" to, or by hackers and hacktivists.

This also puts the people around the user at risk of being recorded or tracked without their consent. As wearable tech becomes more common and less noticeable, it will become essentially invisible. Whether in public or private spaces, anyone wearing glasses or a watch could be a spy.

Even more nerve-wracking is that they could be spying on you without your knowledge.  A hacker could watch someone through their (or their family's) wearable technology. Not only would this invade their privacy, it could prove dangerous and enable assault, rape or murder.

There are upsides, though.  We've recently seen some success in requiring police officers to wear body cameras.  Wearable tech like this would provide video evidence in court for many crimes, helping to document everything from traffic accidents to murder and to convict criminals properly.

The government's access to these videos would have to be restricted, however.  In the first presidential debate of 2016, when asked about homeland security, Hillary Clinton responded that she thinks "we've got to have an intelligence surge, where we are looking for every scrap of information."

If the government is shifting toward an intelligence surge, and the ability to watch and listen through Google Glass is available, suddenly the Big Brother scenario becomes bigger, scarier, and more invisible. While I would love to have a recording of all the fleeting moments of my life, I can live without it if the alternative is Big Brother watching me get ready for class every day.

**The featured image is owned and copyrighted by Minecraftpsyco under the Creative Commons Attribution-Share Alike 4.0 International license.