If you write code for a living, there’s a chance that at some point in your career, someone will ask you to code something a little deceitful – if not outright unethical.
This happened to me back in the year 2000. And it’s something I’ll never be able to forget.
I wrote my first line of code at 6 years old. I’m no prodigy though. I had a lot of help from my dad at the time. But I was hooked. I loved it.
By the time I was 15, I was working part-time for my dad’s consulting firm. I built websites and coded small components for business apps on weekends and in the summer.
I was woefully underpaid. But as my dad still likes to point out, I got free room and board, and some pretty valuable work experience.
Later, I managed to help fund a part of my education through a few freelance coding gigs. I built a couple of early e-commerce sites for some local small businesses.
By age 21, I managed to land a full-time coding job with an interactive marketing firm in Toronto, Canada.
The firm had been founded by a medical doctor and many of its clients were large pharmaceutical companies.
In Canada, there are strict limits on how pharmaceutical companies can advertise prescription drugs directly to consumers.
As a result, these companies would create websites presenting general information about whatever symptoms their drugs were meant to address. Then, if visitors could prove they had a prescription, they were given access to a patient portal with more specific information about the drug.
Google is experiencing a “moral and ethical” crisis. That’s the view of hundreds of employees at the tech company, who are protesting the development of a censored search engine for internet users in China.
About 1,400 Google employees — out of the more than 88,000 — signed a letter to company executives this week, seeking more details and transparency about the project and demanding employee input in decisions about what kind of work Google takes on. They also expressed concern that the company is violating its own ethical principles.
“Currently we do not have the information required to make ethically-informed decisions about our work, our projects, and our employment,” they wrote in the letter, obtained by the Intercept and the New York Times.
The existence of the censored search tool — dubbed Dragonfly — was revealed earlier this month by the Intercept, sparking outcry within the company’s ranks and drawing harsh criticism from human rights groups across the world. Internal documents leaked to journalists described how the app-based search platform could block internet users in China from seeing web pages that discuss human rights, peaceful protests, democracy and other topics blacklisted by China’s authoritarian government.
Only a small group of Google engineers is reportedly developing the platform for Beijing, and information about the project has been so heavily guarded that only a few hundred Google employees even knew about it. Google has declined to comment publicly on Dragonfly, but CEO Sundar Pichai defended the project Thursday during a weekly staff meeting, saying that the project for China is merely in the “exploratory” stage.
The internal backlash represents mounting concern among employees that Google has “lost its moral compass” in the corporate pursuit of enriching shareholders. But it also suggests that the people who build Google’s technology have more power to shape corporate decisions than even shareholders have. In April, thousands of Google employees protested the company’s military contract with the Pentagon — known as Project Maven — which developed technology to analyze drone video footage that could potentially identify human targets.
About a dozen engineers ended up resigning over what they viewed as an unethical use of artificial intelligence, prompting Google to let the contract expire in June, and leading executives to promise that they would never use AI technology to harm others.
The fact that Google employees succeeded in forcing one of the most powerful companies in the world to put ethics before shareholder value is a remarkable feat in corporate America, and signals why workers need an official voice in strategic decisions. Whether or not Google ultimately drops its plan to help China censor information will be a test of how far that power extends.
Following employee protests at Google and Microsoft over government contracts, workers at Amazon are circulating an internal letter to CEO Jeff Bezos, asking him to stop selling the company’s Rekognition facial recognition software to law enforcement and to boot the data-mining firm Palantir from its cloud services.
Amazon employees objected to the Trump administration’s “zero-tolerance” policy at the U.S. border, which has resulted in thousands of children being separated from their parents.
“Along with much of the world we watched in horror recently as U.S. authorities tore children away from their parents,” the letter, distributed on a mailing list called ‘we-won’t-build-it,’ states. “In the face of this immoral U.S. policy, and the U.S.’s increasingly inhumane treatment of refugees and immigrants beyond this specific policy, we are deeply concerned that Amazon is implicated, providing infrastructure and services that enable ICE and DHS.”
“While Mr. Bezos remains silent, Amazon employees are standing up and joining shareholders, civil rights groups, and concerned consumers to call out Amazon’s face surveillance technology for what it is: a unique threat to civil rights and especially to the immigrants and people of color under attack by this administration,” said Nicole Ozer, technology and civil liberties director at the ACLU of California. “We stand in support of these employees’ call on Mr. Bezos to do the right thing. Amazon must stop providing dangerous face surveillance to the government.”
Earlier this week, several Amazon shareholders called on the company to stop selling Rekognition to the police. That backlash has now spread among employees as well.
“Our company should not be in the surveillance business; we should not be in the policing business; we should not be in the business of supporting those who monitor and oppress marginalized populations,” the employee letter states.
Google is committing to not using artificial intelligence for weapons or surveillance after employees protested the company’s involvement in Project Maven, a Pentagon pilot program that uses artificial intelligence to analyze drone footage. However, Google says it will continue to work with the United States military on cybersecurity, search and rescue, and other non-offensive projects.
Google CEO Sundar Pichai announced the change in a set of AI principles released today. The principles are intended to govern Google’s use of artificial intelligence and are a response to employee pressure on the company to create guidelines for its use of AI.
Employees at the company have spent months protesting Google’s involvement in Project Maven, sending a letter to Pichai demanding that Google terminate its contract with the Department of Defense. Several employees even resigned in protest, concerned that Google was aiding the development of autonomous weapons systems.
“How AI is developed and used will have a significant impact on society for many years to come,” Pichai wrote. “These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.”
The AI principles represent a reversal for Google, which initially defended its involvement in Project Maven by noting that the project relied on open-source software that was not being used for explicitly offensive purposes. A Google spokesperson did not immediately respond to a request for comment on the new ethical guidelines.
The principles were met with mixed reactions among Google employees. Despite Google’s commitment not to use AI to build weapons, employees questioned whether the principles would explicitly prohibit Google from pursuing a government contract like Maven in the future.
SAN FRANCISCO — In an open letter posted to Microsoft’s internal message board on Tuesday, more than 100 employees protested the software maker’s work with Immigration and Customs Enforcement and asked the company to stop working with the agency, which has been separating migrant parents and their children at the border with Mexico.
“We believe that Microsoft must take an ethical stand, and put children and families above profits,” said the letter, which was addressed to the chief executive, Satya Nadella. The letter pointed to a $19.4 million contract that Microsoft has with ICE for processing data and artificial intelligence capabilities.
Calling the separation of families “inhumane,” the employees added: “As the people who build the technologies that Microsoft profits from, we refuse to be complicit. We are part of a growing movement, comprised of many across the industry who recognize the grave responsibility that those creating powerful technology have to ensure what they build is used for good, and not for harm.”
The letter is part of a wave of tech workers mobilizing this week against the Trump administration’s new “zero tolerance” policy that refers for criminal prosecution all immigrants apprehended crossing the border without authorization. The policy has resulted in about 2,000 children being separated from their migrant parents, raising a bipartisan outcry.
At Silicon Valley companies including Google, Apple and Facebook, employees have in recent days circulated internal emails asking for donations to nonprofit groups that support immigrants. Many have shared information about protests in San Francisco and Washington. And some of the workers have spoken to their managers about the issue or called on internal message boards for their chief executives to respond.