Ethics aren’t easy

A professional’s discussion on protecting users

When I conducted my first usability test, it was easy to predict what the participants would struggle with and what they would like about the website. While a few results were surprising, the participants’ overall impressions of the website were exactly as I had imagined.

User Experience, the magazine of the User Experience Professionals Association, published an article about a usability test in which the researchers knew what results to expect, and knew the pain the test would cause the participants. The test was meant to see whether people with disabilities could use a government kiosk without pain, but the process was causing even the testers pain.

“Asking a person who tires easily or experiences pain when performing manual tasks to click a button a few thousand times would cause, at the very least, significant discomfort and likely lasting physical pain. When we were trying out the product to collect expert timing and button press data, our hands hurt! Projects like this raise important questions about research ethics.

We asked colleagues with disabilities to give us their feedback on using the product. Based on their responses, it was obvious that running a usability evaluation was not going to be reasonable.”

Although testing usability with people with disabilities seems like the ethical thing to do, this particular test would only have hurt them, and would therefore have been unethical.

Instead of conducting an unethical test, the tester should tell the client that the product has issues that must be addressed first. In the commercial world this could mean losing a client, but valuing ethics and safety first is a necessary responsibility.

**Featured Image provided by Internet Archive Book Images under no known copyright restrictions

Keep Droning On

The abilities of robots are improving every day. From video drones flown by civilians to war machines that can engage and attack humans automatically, there are ethical questions behind any machine that can do what a human can’t do alone.

An article in The Atlantic explained that robots are used in national security to complete dull, dirty, and/or dangerous jobs. Whether conducting surveillance or disassembling bombs, robots always act with “dispassion.” Even in the heat of war, a robot cannot become fatigued, hungry, angry, or distracted, and will perform the same regardless of conditions that would be incredibly stressful to humans.

However, robots are rarely fully autonomous, and can generally at least be overridden by humans. But if a robot has information that a human can’t see, such as night vision, who should make the decision? If a robot can identify that a civilian is in danger, but a human operator authorizes an attack anyway, should the robot be able to veto the command, or should it be required to follow the human instruction?

Ethically, the robot would be doing right by saving a human life, but giving a robot the ability to override human commands may be the slippery slope that movies like I, Robot warn of. If robots aren’t coded to always obey human operators, the potential for them to act uncontrollably is far greater.
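To make the dilemma concrete, here is a minimal sketch of the two competing policies. Everything in it (the names, the single “civilian detected” flag, the rules) is a hypothetical toy for illustration, not any real weapons-control system:

```python
from dataclasses import dataclass

@dataclass
class Command:
    action: str             # e.g. "attack"
    human_authorized: bool  # the operator explicitly approved this action

def execute(command: Command, civilian_detected: bool, robot_may_veto: bool) -> str:
    """Toy decision rule contrasting the two policies discussed above.

    robot_may_veto=False: the robot always obeys a human-authorized command.
    robot_may_veto=True:  the robot refuses when its own sensors detect a
                          civilian, even if the operator authorized the action.
    """
    if not command.human_authorized:
        return "refused: no human authorization"
    if robot_may_veto and civilian_detected:
        return "vetoed: civilian detected by robot sensors"
    return "executed: " + command.action

# The same situation produces opposite outcomes under the two policies.
cmd = Command(action="attack", human_authorized=True)
print(execute(cmd, civilian_detected=True, robot_may_veto=False))  # executed: attack
print(execute(cmd, civilian_detected=True, robot_may_veto=True))   # vetoed: civilian detected...
```

The whole debate is compressed into that one boolean: whoever sets robot_may_veto, the programmer or whoever funds the project, decides whose judgment wins.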

While automation encourages dispassion and therefore arguably stronger ethics, those ethics are still programmed by the robot’s creator, and can be influenced by the programmer or by whoever funds the project. Two robots with the same function could make different decisions if one is programmed by a computer scientist in Japan and the other is built by the U.S. Navy. Both may make ethical decisions in their home countries, but if they were used elsewhere, their ethics may not match that society’s.

This is currently happening in the United States as Native Americans fight to protect their land and water from an oil pipeline. A standoff between police and protesters at Standing Rock has been going on for over six months. Protesters have been tear-gassed, attacked, and shot with rubber bullets, so some are using video drones to document what they believe to be unfair treatment.

The drones should not raise any ethical issues, because they simply record the actions of police, yet they have been pelted with rocks and even shot at by police officers. Although the surveillance merely encourages transparency from the government and poses no threat to the officers, these public officials seem to believe their privacy is necessary.

If they were private residents on their own property, I would completely understand shooting down the drones, but as on-duty employees of the government, it only seems as if they have something unethical to hide.

**Featured image provided by Tomwsulcer under the CC0 1.0 Universal Public Domain Dedication

Online Sex Life

Do the same rules still apply?

With social media apps like Snapchat, realistic video games, and virtual reality worlds, many couples and even strangers are exploring ways to express their sexuality over the internet.

Whether individuals are sharing provocative texts, images, or videos, or engaging in virtual sexual acts, there is a new set of dangers and rules that must be considered.

Online sexual acts may seem safer or more innocent, but they can cause lasting issues.

In 2013, California passed one of the first state laws against revenge porn, the practice of posting nude or otherwise explicit pictures or videos of another person without their permission.

The non-profit End Revenge Porn has been a major driving force behind legislative changes in many states. Its founder, Holly Jacobs, was a victim of revenge porn herself, and has set out to prevent it from happening to others. After Jacobs ended a three-year relationship, her Facebook profile picture was changed to a nude photo of her. According to the Miami New Times, hundreds of explicit photos and videos of her were then posted across the internet.

When other students at her university received a video titled “Masturbation 201 by Professor Holli Thometz,” using her original surname, she filed for an injunction against her ex-boyfriend. It was dismissed, and in another instance she was denied an investigation because she had consented to taking the pictures and videos.

Although she gave no one permission to post the images, the law gave her no grounds to seek justice. This is obviously much worse than a mere copyright issue: Jacobs says revenge porn “ruined” her life, and tighter laws must be put in place to prevent this kind of emotional damage.

Many consider the emotional damage from online sexual acts to be as serious as the damage physical rape can cause.

Not all sexual encounters on the internet are consensual, and online sexual harassment is sadly common. In the U.S., subjecting children to sexual images, texts, or suggestions is illegal, but subjecting adults to the same acts is not treated the same way.

Although such harassment clearly deserves punishment, many consider nonconsensual online sexual acts to be “virtual rape,” and believe the repercussions should match those of physical rape.

An article in Wired elegantly expressed my opinion on the matter:

“But I have a hard time calling it “rape,” or believing it’s a matter for the police. No matter how disturbed you are by a brutal sexual attack online, you cannot equate it to shivering in a hospital with an assailant’s sweat or other excretions still damp on your body.

That’s not to say I dismiss the trauma a person suffers after being raped online. Virtual rape is not just a prank, one the target needs to get over or expect as part of a role-playing world.”

Online sexual attacks are completely unacceptable, but despite the realism the internet can now provide, these attacks cannot match the devastation of a physical rape. While laws must adapt to protect internet users, lawmakers must be careful when comparing traumatic events.

**Featured image licensed by Ranveig under the CC Attribution-ShareAlike 2.0 Generic license.

 

Does “No Means No” Apply in Virtual Reality?

Virtual reality is becoming more and more prominent, and with that comes a new virtual world. I can even walk into Appalachian State University’s library and test out some great VR headsets for free.

As the technology improves, the VR world appears more realistic each day. But to truly trick our brains into believing in this virtual world, other senses will need to be simulated. While it is relatively easy to add sound, touch would add an incredibly realistic element.

Imagine being able to feel the grass of a field against your legs, the wind in the air, or reaching out to hold the hand of the person you’re walking with. Although these would all be artificial sensations, combined with the sight of the virtual world they would convince our brains that we were actually experiencing these things.

Perceptual psychology has taught us that our brain can fill in missing sensory information by relying on patterns. Those patterns have flaws, however, and they are flaws we can take advantage of. To add the sense of touch to virtual reality, we wouldn’t need to reproduce the exact feelings; we could instead provide feelings that are consistently inaccurate. If the feelings all follow a similar pattern, and there are no extreme outliers, our brain will be fooled into filling in the rest of the information for us.

Essentially, if the entire virtual world is all the same level of incorrect, our brain will perceive it all to be correct, therefore truly taking us into the virtual world.
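As a toy illustration of that idea, the sketch below contrasts the two kinds of error. The numbers and function names are hypothetical, not from any real VR system: a consistent systematic error keeps every sensation on the same pattern, while random error produces exactly the outliers that break the illusion.

```python
import random

# Hypothetical "true" touch intensities (0.0 to 1.0) for three sensations.
true_intensities = {"grass": 0.30, "wind": 0.15, "hand": 0.60}

def render_consistent(intensity: float) -> float:
    """Systematic error: every sensation is scaled and shifted the same way,
    so the pattern is stable and the brain can adapt to it."""
    return 0.8 * intensity + 0.05

def render_noisy(intensity: float) -> float:
    """Random error: each sensation deviates unpredictably, producing the
    extreme outliers that give the simulation away."""
    return intensity + random.uniform(-0.25, 0.25)

for name, value in true_intensities.items():
    print(f"{name}: true={value:.2f}  "
          f"consistent={render_consistent(value):.2f}  "
          f"noisy={render_noisy(value):.2f}")
```

Every “consistent” value is wrong, but wrong in the same direction and proportion, which is all the brain needs in order to treat it as correct.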

Issues arise, however, when the person you were holding hands with in the virtual field is actually presidential candidate Donald Trump, who decides to “grab a pu**y” and virtually rapes you.

When our brain is tricked into believing that this is a real world, there must be rules and laws in that world to protect its users, especially when people aren’t who they are in real life. We have already seen the issues with child predators and other criminals hiding behind the anonymity of the internet.

Now, imagine a free world where these people could disguise themselves as anyone they wanted to be, in any situation. While this technology could be incredible for the good people of the world, it also opens up an entirely new world of problems.

Zoltan Istvan, a Transhumanist U.S. presidential candidate, summed this up perfectly for the Australian publication Vertigo:

“We’re approaching an age when we’re going to be rewriting a huge amount of the rules of what it means to either harm somebody, or hurt somebody, or even scare them or bother them. Clearly the controls, the security systems and the anti-hacking software will have to be much better.”

I wish we could all explore the virtual world safely without rules and regulation, but as the technology becomes more realistic, that’s simply not possible.

 

 

**Featured image owned and copyrighted by Marina Noordegraaf under CC BY-NC-SA 2.0.

New World Ethics

Do you ever feel like you’re being watched?

With the recent advancements in wearable technology, users can gather more and more information about themselves and the world around them. From GoPros to Google Glass, users can collect photos, video, audio, location, and health information.

Sometimes this information is stored locally, but many devices are connected via Bluetooth, Wi-Fi, or cellular data, and many must be linked to accounts like Google.

A lot of information is stored online to increase user satisfaction. For example, FitBit stores its users’ information in the cloud so that they can access it on their phone or desktop. FitBit makes it clear in its terms and conditions that it does not sell or distribute personally identifiable information (PII), except under “limited circumstances.”

Although this sounds great, I dug a little further, and the fine print explains that your PII can be disclosed to others in a “sale of assets.” So if the company is struggling, FitBit can sell your email, address, name, and other information, and simply send you a notification.

Thankfully, FitBit only holds basic information like that. While it could sell access to your Google account, and potentially any information stored there, these are small stakes compared to what other private information could be made public.

What about when wearable technology records high-quality audio and video? This potentially leaves users vulnerable to being tracked or watched by the company, by anyone it “sells assets” to, or by hackers and hacktivists.

This also puts the people around the user at risk of being recorded or tracked without their consent. As wearable tech becomes more common and less noticeable, it will become essentially invisible. Whether in public or private spaces, anyone wearing glasses or a watch could be a spy.

Even more nerve-wracking is that someone could be spying on you without your knowledge. A hacker could watch someone through their own (or their family’s) wearable technology. Not only would this invade their privacy, it could prove dangerous and enable assault, rape, or murder.

There are upsides, though. We’ve recently seen some success in requiring police officers to wear body cameras. Wearable tech like this would provide video evidence in court for many crimes. It would help record everything from traffic accidents to murders, and help convict criminals properly.

The government’s access to these videos would have to be restricted, however. In the first presidential debate of 2016, when asked about homeland security, Hillary Clinton responded that she thinks “we’ve got to have an intelligence surge, where we are looking for every scrap of information.”

If the government is shifting toward an intelligence surge, and the ability to watch and listen through Google Glass is available, suddenly the Big Brother scenario becomes bigger, scarier, and more invisible. While I would love to have a recording of all the fleeting moments of my life, I can live without it if having it means Big Brother can watch me get ready for class every day.

**The featured image is owned and copyrighted by Minecraftpsyco under the Creative Commons Attribution-Share Alike 4.0 International license.