Notes on Artificial Intelligence | www.splicetoday.com


In the third year of a virus that isn’t slowing down, we’re approaching unknown territory. The United States is in full-throttle pandemic mode, swerving left and right to dodge variants. Our partially vaccinated nation paddles along in a sea of misinformation. One thing worth checking for is a trustworthy news source; the old name for misinformation was propaganda. It’s vital to take time to reflect, because many of us are living in different realities.


Should a layperson be concerned with Big Tech’s issues? The bizarre mix of anxiety-ridden paranoia, hyperbole, wishful thinking, and irrational hubris is all part of the artificial intelligence (AI) playbook.


If you like the Netflix series Black Mirror, it’s easy to assume we’re already living in an episode. Then again, many of us are missing our favorite television shows; we’re busy standing in line waiting to get tested. For the millions of people who use social media and other platforms regularly, security threats pose a significant risk. Everything’s fair game.


I’ve often wondered: does AI have a sense of humor? Probably not; I haven’t seen any HBO stand-up specials. It’s difficult for a machine to recognize irony and sarcasm, especially when it comes to being funny. A machine lacks human empathy, and an AI’s idea of a joke comes off more on the scary side.


Remember how a vinyl LP skips? I saw an AI program running on a Mac make a lame attempt at comedy; it kept repeating the same words “artificial intelligence” across the screen hundreds of times in 10-point green type. As HAL 9000 put it: “I know I’ve made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal.” —2001: A Space Odyssey.


AI makes mistakes, just like people. Amazon’s Alexa recently caused a mom to freak out when her 10-year-old daughter requested a physical challenge. The digital assistant’s suggestion was a viral TikTok challenge: touch a penny to the exposed prongs of a phone charger plugged halfway into an electrical socket. Posting videos like this online normalizes dangerous behavior, which isn’t surprising in an era of America’s Funniest Videos and Tide Pod eating.


As technology progresses, legislation lags behind. Risks lurk around every corner. A few concerning areas: automation, deepfakes, monetary manipulation, data breaches and autonomous weaponry.


Working in AI must be daunting, but it’s profitable. According to Forbes, the surveillance industry is expected to grow to $7 billion by 2024 in the U.S. alone. Technical shortfalls persist. Take, for instance, facial recognition technology. To be fair, several crimes have been solved and lives saved, but it’s not above misuse and scrutiny. The systems are used for passenger screening, law enforcement and employment decisions, yet they’re not always accurate. There are plenty of privacy and racial discrimination concerns when it comes to your face.


What about homophobic, racist and sexist implications? Ex-Google employee and well-respected researcher Timnit Gebru has spoken publicly about the inequality. The Ethiopian-born, Stanford-educated researcher was hired by Google in 2018 to help build an Ethical AI team, then “resigned” in 2020. As an advocate for diversity, one of her points is that technology builds bias into its programs.


In December 2021, Gebru launched the Distributed Artificial Intelligence Research Institute (DAIR), an independent, community-rooted institute set up to counter Big Tech’s pervasive influence on AI research.


Privacy? Have you ever wondered whether your wearable fitness tracker is a medical device or a surveillance tool? Exercise data can be recorded. There are currently no clear regulations applied to wearable technology. Considering the health industry’s high number of data breaches, perhaps a little corporate hygiene is needed here.


What if you were out shopping and someone stuffed a $29 Apple AirTag in your bag? When you return home, your iPhone pings a distressing alert that reads “AirTag Detected Near You.” Is this happening because a creep followed you down the produce aisle at the supermarket? By the way, he’s parked out front. Consider what could occur if your Social Security number, financial data and DNA information obtained through Ancestry were all made available to cyber criminals. Their goal: create an avatar of you, possibly nude, that can be harassed in the Wild West metaverse.


Facebook suddenly gets a name change. What is Meta Mark thinking? Is this the result of meticulous planning or a desperate attempt to save face? Last October, Zuckerberg attempted to smear Facebook whistleblower Frances Haugen. Calls went out instantly to company operatives in Washington, DC, who whipped up a storm like a prairie chicken’s love dance. They planted articles in the media that wreaked havoc on the political system. The frenzy served as a diversionary tactic, drawing attention away from the true issue: Big Tech is just too powerful, and they never apologize.


Students, the less-marginalized unemployed and professionals across the country are bewildered. Silicon Valley executives have testified before Congressional committees about questionable business practices and content moderation. Their responses are full of sweet talk and jargon-riddled phrases; the weary replies are punctuated by the same reassurances. All sides seem to agree new laws are needed. What happens if corporations don’t follow the rules? How will this affect our communities? There’s no need to be terrified; it’s planned to work this way.


As consumers in a capitalist society, we’re also involved. One thing that gives me hope is that politicians are considering new antitrust legislation to curb Big Tech’s monopolies. The need is urgent: bills must be passed before public policy can catch up. Meanwhile, the rip-off cash grabs and profits from corporate thievery continue. Given today’s hostile social and political climate, fueled by violent rhetoric, it’s not an easy task.


For the sake of growth, the necessary changes won’t be immediate or painless. Venturing into the unknown presents its own set of perils and problems. Artificial intelligence will continue to rebrand and develop over time. Its significant impact on our evolution will, I hope, provide useful insight. But if the U.S. continues to get poor results, we may have to declare ourselves the “United States of Emergency.”


