Ethics aren’t easy

A professional’s discussion on protecting users

When I conducted my first usability test, it was easy to predict what the participants would struggle with and what they would like about the website.  While some results were surprising, the general impressions of the website were exactly as I had imagined.

User Experience, the magazine of the User Experience Professionals Association, published an article about a usability test where the researchers knew what results to expect, and knew the pain the test would cause the participants.  The test was to see whether people with disabilities could use a government kiosk without pain, but the process was causing even the testers pain.

“Asking a person who tires easily or experiences pain when performing manual tasks to click a button a few thousand times would cause, at the very least, significant discomfort and likely lasting physical pain. When we were trying out the product to collect expert timing and button press data, our hands hurt! Projects like this raise important questions about research ethics.

We asked colleagues with disabilities to give us their feedback on using the product. Based on their responses, it was obvious that running a usability evaluation was not going to be reasonable.”

Although testing usability for people with disabilities seems like the ethical thing to do, this test would only have hurt the participants, and would therefore have been unethical.

Instead of conducting an unethical test, the tester should tell the client that the product has issues that must be addressed first.  In the commercial world this could mean losing a client, but valuing ethics and safety first is a necessary responsibility.

**Featured image provided by Internet Archive Book Images, no known copyright restrictions


Talk about your timeline

How social media sites can change what we think

I remember when Facebook changed from the chronological timeline to an algorithm that was supposed to show users more of the information they wanted to see.  At first users could choose which timeline to use, but then all users had to use the new system.  Then Twitter started mixing chronological order with a “while you were away” section of tweets it thought users would like.  Eventually Twitter and even Instagram converted to completely tailored timelines.

Although the idea is to show users the information most important to them by analyzing which tweets they interact with most, this leaves room for the social media site to influence the information we consume.  Over time, that has the potential to change individuals’ thoughts and to shape society.

Millions of people use social media sites, and tailoring timelines requires decisions and rankings of importance.  Companies use algorithms, series of rules designed to rank what matters most, to make those decisions.

“These are all human choices. Sometimes they’re made in the design of the algorithm, sometimes around it. The result we see, a changing list of topics, is not the output of ‘an algorithm’ by itself, but rather of an effort that combined human activity and computational analysis, together, to produce it,” wrote Tarleton Gillespie in a NiemanLab article.
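Gillespie’s point can be made concrete with a toy sketch. The ranking below is purely hypothetical (no platform publishes its real code), but notice that every weight and the decay formula are human choices:

```python
from datetime import datetime, timedelta

# Hypothetical weights -- each number here is a human editorial choice.
WEIGHTS = {"likes": 1.0, "replies": 2.0, "shares": 3.0}

def score(post, now):
    """Weighted engagement, decayed by the post's age in hours."""
    engagement = sum(WEIGHTS[k] * post[k] for k in WEIGHTS)
    age_hours = (now - post["posted"]).total_seconds() / 3600
    return engagement / (1 + age_hours)

def build_feed(posts, now):
    """Return posts ordered by score instead of chronology."""
    return sorted(posts, key=lambda p: score(p, now), reverse=True)

now = datetime(2016, 11, 1, 12, 0)
posts = [
    {"id": "old_popular", "likes": 50, "replies": 5, "shares": 10,
     "posted": now - timedelta(hours=20)},
    {"id": "new_modest", "likes": 8, "replies": 2, "shares": 1,
     "posted": now - timedelta(hours=1)},
]
feed = build_feed(posts, now)
# new_modest outranks old_popular despite having far less engagement,
# purely because of how the (human-chosen) decay is defined.
```

Change one weight, or the decay formula, and the same posts come out in a different order; that is the human influence hidden inside “the algorithm.”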

This gives social media companies a lot of room to push agendas, whether for money or for a cause.  For example, Facebook generally leans liberal, and one of their algorithm curators recently said that “his fellow curators often overlooked or suppressed conservative topics.” As a left-leaning registered independent, I generally don’t mind having few conservative posts on my timeline, but this can give false perceptions of reality. My Facebook and Twitter feeds left me under the impression that Donald Trump could never win the U.S. election, but election night proved otherwise.

Social media sites have as much power as, or more than, the massive media corporations, yet they don’t face the same regulations on influence.  Media manipulation is what makes a lot of money right now, and Forbes explained what this means:

“When the news is decided not by what is important but by what readers are clicking; when the cycle is so fast that the news cannot be anything else but consistently and regularly incomplete; when dubious scandals scuttle election bids or knock billions from the market caps of publicly traded companies; when the news frequently covers itself in stories about ‘how the story unfolded’—media manipulation is the status quo.”

In the case of the 2016 election, this manipulation led me and many others to believe that the Democratic Party had it in the bag.  As a big Bernie Sanders supporter, I personally wish Twitter could have decided the election, because that’s what my feed was full of, but I understand that’s not what the country (or at least the DNC) wanted.  Transparency and accuracy in trends are a necessary responsibility for social media companies with a conscience.

**Featured image is Public Domain


Keep Droning On

The abilities of robots are improving every day. From video drones flown by civilians to war machines that can engage and attack humans automatically, there are ethical questions behind any machine that can do what a human can’t do alone.

An article in The Atlantic explained that robots are used for national security to complete dull, dirty, and/or dangerous jobs. Whether conducting surveillance or disassembling bombs, robots always act with “dispassion.”  Even in the heat of war, a robot cannot become fatigued, hungry, angry, or distracted, and will perform the same regardless of conditions that would be incredibly stressful to humans.

However, robots are rarely fully autonomous, and can generally at least be overridden by humans.  But if a robot has information a human can’t see, such as night vision, who should make the decision?  If a robot can identify that a civilian is in danger, but a human operator authorizes an attack anyway, should the robot be able to veto the command, or should it be required to follow the human instruction?

Ethically, the robot would be doing right by saving a human life, but giving a robot the ability to override human commands may be the slippery slope that movies like I, Robot warn of. If robots aren’t coded to always obey human operators, the potential for them to act uncontrollably is far greater.

While automation encourages dispassion and therefore strong ethics, those ethics are still programmed by the robot’s creator, and can be influenced by the programmer or whoever funds the project. Two robots with the same function could make different decisions if one is programmed by a computer scientist in Japan and the other is built by the U.S. Navy. Both may make ethical decisions in their home countries, but if they were used in a foreign place, their ethics may not match that society’s.

This is currently happening in the United States as Native Americans fight to protect their lands and water from an oil pipeline.  A standoff between police and protesters at Standing Rock has been going on for over six months.  Protesters have been tear-gassed, attacked, and shot with rubber bullets, so some are using video drones to document what they believe to be unfair treatment.

The drones should not raise any ethical issues because they are simply recording the actions of police, yet the drones have been pelted with rocks and even shot at by police officers. Although the surveillance simply encourages transparency from the government and poses no threat to officers, these public officials seem to believe their privacy is necessary.

If they were private residents on their own property, I would completely understand shooting down the drones, but as on-duty government employees, it only seems as if they have something unethical to hide.

**Featured image provided by Tomwsulcer under CC0 1.0 Universal Public Domain

Online Sex Life

Do the same rules still apply?

With social media apps like Snapchat, realistic video games, and virtual reality worlds, many couples and even strangers are exploring ways to express their sexuality over the internet.

Whether the individuals are sharing provocative texts, images, or videos, or engaging in virtual sexual acts, there is a new set of dangers and rules that must be considered.

Online sexual acts may seem safer or more innocent, but they can cause lasting issues.

In 2013, California became the first state to pass a law against revenge porn, the posting of naked or otherwise explicit pictures or videos of another person without their permission.

A non-profit, End Revenge Porn, has been a major driving force behind legislative changes in many states.  Its founder, Holly Jacobs, was a victim of revenge porn herself, and has set out to prevent it from happening to others.  After she ended a three-year relationship, Jacobs’ Facebook profile picture was changed to a nude photo of her.  According to the Miami New Times, hundreds of explicit photos and videos of her were then posted across the internet.

When other students at her university received a video titled “Masturbation 201 by Professor Holli Thometz,” her name at the time, she filed an injunction against her ex-boyfriend.  It was dismissed, and in another instance she was denied an investigation because she had agreed to take the pictures and videos.

Although she gave no one permission to post the images, the law gave her no ground to seek justice. Far worse than a mere copyright issue, Jacobs says revenge porn “ruined” her life, and tighter laws must clearly be put in place to prevent this kind of emotional damage.

Emotional damage from online sexual acts is considered by many to be as serious as the damage possible from physical rape.

Not all sexual encounters on the internet are consensual, and sexual harassment is sadly common online. In the U.S., subjecting children to sexual images, texts, or suggestions is illegal, but subjecting adults to the same acts is not treated the same way.

Such harassment clearly deserves punishment, and many consider unwilling online sexual acts “virtual rape,” believing the repercussions should match those of physical rape.

An article in Wired elegantly captured my opinion on the matter:

“But I have a hard time calling it “rape,” or believing it’s a matter for the police. No matter how disturbed you are by a brutal sexual attack online, you cannot equate it to shivering in a hospital with an assailant’s sweat or other excretions still damp on your body.

That’s not to say I dismiss the trauma a person suffers after being raped online. Virtual rape is not just a prank, one the target needs to get over or expect as part of a role-playing world.”

Online sexual attacks are completely unacceptable, but despite the realism the internet can now provide, these attacks cannot match the devastation of a physical rape. While laws must adapt to protect internet users, lawmakers must be careful when comparing traumatic events.

**Featured image licensed by Ranveig under CC Attribution-Share Alike 2.0 Generic.


Data collection can save lives

but people are worried it could ruin lives too.

Companies often use algorithms to optimize users’ experiences on their websites.  Sites like Facebook collect user data to show users the information they are most likely interested in.  Facebook can also tailor advertisements to target audiences, and can therefore receive more advertising money.  In this seemingly win-win situation for Facebook and the user, who is losing?

According to a report from the French think-tank Forum d’Avignon, even the consumers benefitting from this technology “are more and more suspicious about this massive capture of (their) data.”

Data capture does not just record which sites you visit frequently; it builds an image of who you are.

“The depth (and the intimacy) of personal data collected without our necessarily being aware of it enables third parties to understand our identity, our private and cultural past, present and future lives, and to sell them.”
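As a rough illustration of how raw visits become an “image” of a person, here is a toy aggregation. The site-to-category mapping is invented for the example; real profiling systems use far richer signals than this sketch:

```python
from collections import Counter

# Invented mapping from sites to interest categories, for illustration only.
SITE_CATEGORIES = {
    "espn.com": "sports",
    "pitchfork.com": "music",
    "nytimes.com": "news",
}

def build_profile(visit_log):
    """Collapse a raw browsing log into an interest profile."""
    profile = Counter()
    for site in visit_log:
        category = SITE_CATEGORIES.get(site)
        if category is not None:
            profile[category] += 1
    return profile

visits = ["espn.com", "espn.com", "nytimes.com", "pitchfork.com"]
profile = build_profile(visits)
print(profile.most_common(1))  # [('sports', 2)]
```

The individual visits are mundane, but the aggregate, a ranked list of your interests, is exactly the kind of “identity” the Forum d’Avignon report says third parties can understand and sell.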
Data collection is still evolving, because the better businesses understand their audience, the more that understanding is worth to advertisers. Most social media sites even let users promote posts.  When the Appalachian State University Men’s Ultimate team promoted itself at the beginning of the year, it could select the age range and location of the people it wanted to reach, but for big businesses the targeting options are much more detailed, and therefore worth more.

With companies profiting off of human identity, and other groups able to push their products or ideas to select audiences, many ethical questions arise for consumers, businesses, and governments.
In the first presidential debate of the 2016 U.S. election, Hillary Clinton called for an “intelligence surge” to protect citizens from homegrown terrorists. But homegrown terrorists are citizens too, so this intelligence surge would mean the government tracking more and more information about ordinary citizens.

According to a New York Times article, “The United States already collects and shares more intelligence than ever.” Some of citizens’ information rights are still protected, though.

“And the F.B.I. is not allowed to conduct open-ended investigations without evidence of criminal wrongdoing. Nor is it allowed to collect intelligence solely related to people’s views. Admiring Osama bin Laden or the Islamic State or expressing hatred for the United States is not a crime.”

Without these protections, the U.S. would not be defending the independence and democracy it was founded on.  There must be a balance of safety and freedom.

At Forum d’Avignon in Europe, 500 participants came to the conclusion that we must build a society that is “aided – not driven – by data.” To do this, they believe data collection must balance “research, economic and social development, and the protection of personal data.”

This idea recognizes that not all personal data can be protected, or else we wouldn’t be a society, but just a group of completely independent people.  Our individual stories and information are part of what makes us human, and sharing them with others is part of the human experience.  By balancing research, development, and protection, society can interact safely and fairly in the online world.

Data collection can not only streamline our online experience, but can also help people in the real world.  After the devastating earthquake in Japan in 2011, Facebook developed a feature to track who is safe during disasters.  Now families and friends of those in an affected area can quickly see if their loved ones are accounted for.
Data collection can even save lives.  Computer scientists and social workers are collaborating at the University of Southern California to create an algorithm that can “identify the best person in a given homeless community to spread important HIV prevention information among youth, based on a mapped-out network of friendships,” according to an article on Mashable.

Sixty percent more effective at spreading information than word of mouth, this data-collection algorithm will help deliver basic health education, such as “the importance of wearing condoms” and where and how to get tested for HIV.
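The core idea can be sketched in a few lines. The real research models influence spread far more carefully; this simplified stand-in just picks the person with the most direct friendships (degree centrality), using an invented network:

```python
# Invented friendship network, for illustration only: each person maps
# to the set of people they are friends with.
friendships = {
    "ana": {"ben", "cal", "dee"},
    "ben": {"ana", "cal"},
    "cal": {"ana", "ben", "dee", "eli"},
    "dee": {"ana", "cal"},
    "eli": {"cal"},
}

def best_spreader(graph):
    """Pick the person best placed to pass information along:
    here, simply whoever has the most direct friendships."""
    return max(graph, key=lambda person: len(graph[person]))

print(best_spreader(friendships))  # cal, with 4 friendships
```

Giving the prevention information to “cal” reaches every other person in one hop; starting with “eli” would reach only one. Mapping who knows whom is what turns raw data into lives saved.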
Education is key to preventing HIV from spreading among homeless people, and maybe this technology can also be used to teach homeless people skills to get off the streets.  Watch the TEDx Talk below to learn more about educating the homeless.

**Featured Photo by Michael Maggs, Wikimedia Commons. 

A hacktivist group exposed 32 million cheaters,

but is that really that wrong?

A dating website called Ashley Madison seems to think monogamy is dead, and so do its nearly 40 million users.

Made apparent by its slogan, “Life is short. Have an affair,” this isn’t a standard dating site; it is a social network specifically for married people seeking an affair.

While some people do participate in open relationships or polyamory, this site wasn’t designed for those people, but specifically for cheating.

A hacking group called “Impact Team” took particular issue with the cheating site, and stole years of customer records, including names, accounts, credit card information, addresses, and even “sexual fantasies.”

They released a statement demanding that Ashley Madison be shut down, or they would release all of the information.

However, the website stayed up, and even offered users deletion of their files for an additional $19 per person.  The Impact Team said that the accounts that paid still didn’t have their information deleted, and after a month of threats, the hackers released all of the information in a single 9.7 GB data dump on the dark web.

Thirty-two million accounts were exposed to the web. That’s a lot of marriages, jobs, and credit cards ruined, but the hackers showed no remorse. “Too bad for those men, they’re cheating dirtbags and deserve no such discretion,” the hackers wrote. “Too bad for ALM, you promised secrecy but didn’t deliver.”

Sure, the hackers embarrassed a lot of people, but didn’t those people know the risk they were taking when they started cheating, especially through an affair website?

If you knew your friend’s significant other was cheating on them, of course you would tell them.  And that’s exactly what these hackers did: they exposed 32 million cheating people.  The hackers didn’t ruin the marriages; the cheaters did.

And if someone exposed was in an open relationship, their significant other wouldn’t care, so it isn’t an issue.

Despite the company offering $500,000 for information on the hackers, I believe the Impact Team should qualify as hacktivists.

The Impact Team had a clear cause, outlined what they would do with the information, and even gave the company a month to respond.  Instead, the company attempted to make more money from the situation, and failed to protect its users.  Therefore, the Impact Team released the information, including an introduction explaining that not all users were guilty.

Screenshot from Kim Zetter, published on WIRED

The hacktivist team may have exposed a lot of information, but the damage was done by the Ashley Madison website and the cheaters who partook.


**featured image owned and copyrighted by Xlc under CC BY-SA 4.0

Does “No Mean No” in Virtual Reality?

Virtual reality is becoming more and more prominent,

and with that comes a new virtual world. I can even walk into Appalachian State University’s library and test out some great VR headsets for free.

As the technology improves, the VR world is appearing more realistic each day. But in order to truly trick our brains into believing in this virtual world, we will need other senses to be simulated. While it is relatively easy to add sound, touch would add an incredibly realistic element.

Imagine being able to feel the grass in a field against your legs, the wind in the air, or the hand of the person you’re walking with. Although these would all be artificial sensations, in combination with the sight of the virtual world, they would be incredibly convincing to our brains.

Perceptual psychology has taught us that our brains can fill in missing sensory information by using patterns.  However, this pattern-matching has flaws, and they are flaws we can take advantage of.  To add the sense of touch to virtual reality, we wouldn’t need to reproduce the exact sensations, but instead add sensations that are consistently inaccurate.  If the sensations all follow a similar pattern, with no extreme outliers, our brains will fill in the rest of the information for us.

Essentially, if the entire virtual world is incorrect to the same degree, our brains will perceive it all as correct, truly taking us into the virtual world.

Issues arise with this, however, when the person you were holding hands with in the virtual field is actually presidential candidate Donald Trump, who decides to “grab a pu**y” and virtually rapes you.

When our brains are tricked into believing this is a real world, there must be rules and laws in that world to protect users, especially when people aren’t who they are in real life.  We have seen the issues with child predators and other criminals hiding behind the anonymity of the internet.

Now, imagine a free world where these people could disguise themselves as anyone they wanted to be in any situation.  While this technology could be incredible for the good people in the world, it also opens up an entire new world of problems.

Zoltan Istvan, a Transhumanist U.S. presidential candidate, summed this up perfectly for the Australian publication Vertigo:

“We’re approaching an age when we’re going to be rewriting a huge amount of the rules of what it means to either harm somebody, or hurt somebody, or even scare them or bother them. Clearly the controls, the security systems and the anti-hacking software will have to be much better.”

I wish we could all explore the virtual world safely without rules and regulation, but as the technology becomes more realistic, that’s simply not possible.



**featured image owned and copyrighted by Marina Noordegraaf under CC BY-NC-SA 2.0.

Taylor Swift wormed her way around Apple

Although 45% of Americans pirate their music,

these same people still buy as many DVDs, CDs, and subscription services as those who do not pirate music, according to a study by Columbia University.

A lot of that just comes down to saving a few dollars here and there.  As the “broke college kid” and music lover that I am, I am constantly downloading music illegally. However, I always try to support my favorite artists by buying their albums and EPs when they’re released.

I’ve purchased albums by Drake, Future, G-Eazy, Logic, and Blackbear, and even started my three-month free trial of Apple Music to support Chance the Rapper.  And although I listened (and still do) to “Coloring Book” religiously, it may not have been helping Chance make any money.

When Apple Music launched in 2015 as a major competitor to Spotify, artists were already upset with the payouts from streaming services.  Even Taylor Swift had removed her music from Spotify.  So when Apple launched its streaming service, it was more than ready to scoop up America’s sweetheart and gain practically exclusive streaming rights to her music.

But Apple tried to pull a fast one, planning to keep the royalties from any music streamed during users’ free three-month trials.

This sounds a lot like a grocery store running a buy-one-get-one-free sale and only paying Kellogg’s for half the cereal it bought.  Just because a company runs a sale or promotion doesn’t mean the producer of the product shouldn’t be paid.

While those three months might not make a big difference for elite musicians like Taylor Swift, they could make a world of difference for smaller artists, especially since many people sign up for the free trial when a favorite artist releases music.

Imagine a small band finally signs a deal with Apple to stream its music, and all of its loyal fans sign up for the three-month trial to support it, but don’t keep paying after the trial ends.  Regardless of the success of the album, even if it entered the charts, the band may have made essentially no money from the deal.  While it’s still great promotion, money greases the wheels in the music industry, and without it, artists simply can’t compete.

So thank you, Taylor Swift, for fighting back. While we know your 250-million-dollar net worth doesn’t really need any more padding, you were the voice the industry needed against Apple. The wealthiest company in corporate history simply didn’t need to listen to small artists.

Despite record companies from across the world protesting the free-trial terms, Apple wasn’t backing down.  A quarter of the world’s global music market, tens of millions of songs, and Adele were being withheld from the service, yet Apple was too big to persuade.  Thankfully, Taylor Swift was a bite out of the apple too big to ignore, and Apple decided to pay all artists what they deserved.

It’s too bad, though; maybe Taylor would have written a break-up song about Apple.


Yes or No? The Dangers of Social Media

Millennials like myself may have been the last generation to experience a childhood without smartphones and social media, but we have certainly been sucked into their endless information now.  Like many others, I am active on the big three sites: Facebook, Twitter, and Instagram.

These sites’ short text posts, memes, and videos let people absorb information quicker than ever before.  Within a few minutes I can read news articles, laugh at viral videos, check up on my friends, and read a meme about Donald Trump.

But is this information overload actually sticking with me?  I often find myself turning off my phone and wondering what I was looking at for the last 15 minutes.

Social media “expert” Jim Steyer told the Los Angeles Times that “In a world where everyone is addicted to cellphones, there’s less reflection.” Although people are accessing information at rates never seen before, they aren’t actually analyzing or absorbing it.

This leads to generalizations and allows users to easily be influenced to one opinion or another without knowing the facts.

Social media’s quick content encourages simple “yes” or “no” opinions, which has helped polarize politics even further.  Instead of presenting the full context, social media gives users a short video, meme, or unsupported statistic and expects them to form an opinion immediately by liking, sharing, or commenting on the post.

Trump has used this lack of reflection to gain popularity in the 2016 election. His sporadic, opinion-based tweets have received viral attention.  Despite nearly always lacking factual support, the tweets force people to form an opinion: either “Make America Great Again” or “#NeverTrump.”  Either way, the reactions cause the posts to trend, gaining more exposure and prompting more quick opinions.

Like a reality TV show, this election has been all about popularity instead of the issues, and as Steyer put it, “Trump understands reality TV.” His celebrity persona gained him attention across social media and broadcast media, and whether the attention was good or bad, it allowed Trump to gather supporters.

Also, following a politician, cause, or biased news source can influence opinions simply by the nature of social media, because all of the information we process through social media is molded by the scope of the accounts we follow.  For example, users following left-leaning accounts will never see a story about Donald Trump in a positive light, while those following Trump and his supporters will only see pro-Trump propaganda.

Consequently, followers can come to believe that everyone agrees with their opinions.  This apparent consensus then adds credibility to the cause, when in reality the rest of the internet, and the country, holds a wide range of opinions.  That false credibility can be especially dangerous when a campaign is built on opinion instead of fact.

Although potentially dangerous for our country, Trump has been able to build credibility for himself among supporters without a political background or even an endorsement from any living President.

Regardless of the cause, social media’s ability to rally people together is why a Forbes article called social media “the ideal vehicle to deliver messages asking for support.”

Mild Cyberterrorism

Cyberterrorism: “The politically motivated use of computers and information technology to cause severe disruption or widespread fear.”

As the world shifts further and further into the technological revolution, society has become increasingly vulnerable to attacks via the internet.  Information can be stolen, communication can be disrupted, and fear, injury, or even death can be spread.  Politically motivated, these attacks can force the hand of governments, businesses, or individuals.

But what about politically motivated cyber attacks that don’t cause danger or significant monetary losses?  Hacktivism obstructs normal computer activity to peacefully inspire social change.  Similar to a physical sit-in, hacktivism may temporarily change the information on a website, or shut it down, to spread a message.

Hacktivists, however, are rarely treated like peaceful protestors; instead they can be prosecuted for their online activism. In 2010, PayPal, Visa, and Mastercard refused to process donations to WikiLeaks, so the group Anonymous organized over 6,000 people to overload the companies’ servers using a DDoS attack.  Sixteen members of Anonymous were then arrested and charged with conspiracy and “intentional damage to a protected computer.”

A DDoS attack simply sends a website so many requests that it is overwhelmed and temporarily cannot function, so how can it be treated differently than a physical sit-in?  Both effectively prevent the use of their target, whether a restaurant or a website; both can cause small financial losses; and neither causes permanent damage.  Both are performed in public spaces, and both are peaceful forms of protest that we should encourage here in the land of the free.
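A toy queue model (just arithmetic, not attack code) shows why a flood of requests shuts the door the same way a sit-in fills the seats; all the numbers here are invented for illustration:

```python
def backlog_after(arrivals_per_sec, capacity_per_sec, seconds):
    """Model a server that can answer a fixed number of requests per
    second; anything beyond that capacity piles up in a backlog."""
    backlog = 0
    for _ in range(seconds):
        backlog += arrivals_per_sec                 # new requests arrive
        backlog -= min(backlog, capacity_per_sec)   # server answers what it can
    return backlog

# Normal traffic: the server keeps up, so the queue stays empty.
print(backlog_after(100, 150, 60))   # 0
# Flood: 6,000 excess requests per second leave 360,000 requests
# waiting after one minute, so ordinary visitors can't get through.
print(backlog_after(6100, 100, 60))  # 360000
```

Nothing is broken or deleted in this model; the site is simply occupied, which is why hacktivists compare the tactic to a sit-in.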

The Computer Fraud and Abuse Act (CFAA) was designed to prosecute hackers in proportion to the crime committed, and while many of its punishments seem excessive, hacktivists do sometimes push the line between activism and crime.

In 2011, Anonymous and LulzSec hacked the Stratfor Global Intelligence databases and published the credit card information, addresses, and passwords of its top clients.  They then used the information to donate small amounts of money to different charities.  Their Robin Hood scheme, however, landed Jeremy Hammond in prison for 10 years.  While this was far short of the life sentence he was threatened with, it is still one of the harshest punishments a hacker has received in American history.

Hammond still believed his work was activism, though, telling the Associated Press in 2014 that “From the start, I always wanted to target government websites, but also police and corporations that profit off government contracts.”

Whether the clients of Stratfor deserved to be robbed or not, Hammond understood he was breaking the law, and even pleaded guilty to his charges.  While the U.S. deals with internal issues like this, it also has to worry about international attacks. In 2011, the Pentagon decided that “computer sabotage coming from another country can constitute an act of war.”

While it is important that America can respond to such attacks, what if the attack came not from another country, but from an Israeli teenager in his bedroom? Back in 1999, the BBC reported that a 14-year-old boy named Nir Zigdon created a virus that destroyed an Iraqi government website because “it contained lies about the United States and Israel and propaganda against Jews.”

While this shows a true activist spirit, many questions must be asked about the punishments the U.S. would enforce if, say, a foreign teenager hacked an American website because a future President posted anti-Muslim propaganda.  I guess we will have to wait and find out.



** Featured image created and owned by Snnysrma under the Creative Commons Attribution-Share Alike 4.0 International license.