A few thoughts on AI (and a bit of history)

One of the heavier branches in my tree of saved and bookmarked-for-the-future material involves this thing called AI.

Artificial Intelligence. You may have heard of it.

While benign and belligerent robots and machines have existed since the dawn of science fiction, it was about ten years ago that I had some sort of epiphany and gradually began saving and/or bookmarking various online articles about artificial intelligence following then-recent worrisome developments in the field: chatbots were showing signs of coherent speech, Boston Dynamics’ robots had started doing backflips and parkour — but it was demonstrations of autonomous weapons systems that I took particular issue with.

I wanted to keep the articles as “evidence” and “for future reference” — such as for essays like this one, and to backtrack how we got to where we are now.

I’m also fully aware that my voice will go unheard because I’m not an expert academic, public intellectual, or serious developer in this discipline, but I am human — and this, by itself, qualifies me to be cautiously curious about the potential extinction of humanity in its current form.

Yes, I did indeed write “curious” (not “concerned”) because I also believe AI to be a catalyst for the next step in human evolution. In fact, I’m as excited as I am worried.

Looking back to 2014, it was no less than Stephen Hawking who warned that we weren’t taking the threat of AI seriously enough.

Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

Bill Gates, too, wrote that he is “…concerned about super intelligence.”

First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though, the intelligence is strong enough to be a concern.

If all goes well, I’ll be dead by then.

An open letter from 2015 by the Future of Life Institute with a list of research priorities signed by luminaries such as Geoffrey Hinton (the Godfather of AI), Stuart Russell, Ilya Sutskever, Elon Musk, Steve Wozniak, Stephen Hawking, Jen-Hsun Huang, Martin Rees, and Francesca Rossi, as well as a 2016 report by the White House’s National Science and Technology Council, confirmed that there had been significant recent developments to be potentially concerned about. It seemed that we were dancing at the edge of an abyss — although it’s not clear why some of the fearmongers are also investors in the same technology.

When I additionally learnt that the universities of Cambridge and Oxford actually have institutions called “The Centre for the Study of Existential Risk” and “The Future of Humanity Institute” respectively, I knew I had to sit up and pay attention.

Oh yeah, shit was about to get real!

Ten years ago, in 2015, I didn’t take proper notice when an operation called OpenAI was launched in order to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”. Today they’re as famous as Twitter once was and worth ten times as much.

A pivotal moment occurred with the publication of a paper called “Attention Is All You Need” in 2017 and its proposal of the “Transformer” model architecture, whose self-attention mechanism lets every token weigh up the entire context simultaneously rather than processing it strictly in sequence. I won’t pretend to understand much of it but do remember that just about every technology behemoth and startup suddenly veered off in all sorts of directions. Things started happening.
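For the technically curious, here is roughly what that self-attention trick boils down to: every token’s representation is rebuilt as a weighted blend of all the tokens’ representations, with the weights computed from the tokens themselves. A minimal toy sketch in plain NumPy — my own illustration, not the paper’s actual implementation:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Project each token into query, key, and value vectors.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Score every token against every other token, scaled for stability.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax turns each row of scores into attention weights summing to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output token is a weighted mix of ALL value vectors at once --
    # the "simultaneous awareness of the entire context".
    return weights @ V

rng = np.random.default_rng(0)
n_tokens, d = 5, 8                          # five tokens, eight-dimensional embeddings
X = rng.normal(size=(n_tokens, d))          # stand-in for token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # -> (5, 8): one updated vector per token
```

Real transformers stack many of these layers with multiple attention “heads” apiece, but the core idea is just this weighted mixing.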

Not long after, OpenAI released GPT-1 (117 million parameters), demonstrating the first application of the aforementioned transformers to generative language modelling at scale (today’s frontier models are said to exceed one trillion parameters). The public’s mind had already been blown back in 2016, when Google DeepMind’s AlphaGo defeated world champion Lee Sedol at the game of Go; AlphaFold 2 later solved a 50-year-old protein-folding problem.

DALL-E demonstrated the ability to generate high-quality images from text descriptions, facial recognition systems deployed by law enforcement highlighted gender and racial bias issues, Stability AI released Stable Diffusion as open-source, Microsoft’s Tay chatbot learned to produce racist content from Twitter interactions, and an AI startup called Playform helped complete Beethoven’s 10th symphony.

GPUs turned out to be better than CPUs at image recognition before, ultimately, becoming core to neural networks. Demand for chips skyrocketed, and Nvidia’s market value along with it.

The UN moved to ban weaponised AI systems, and the EU saw the need for an Artificial Intelligence Act.

Both the United States and China have tried to assure their dominance in AI development and deployment: the former by allowing its market-driven, venture capitalist culture to flourish relatively uninhibited, and the latter in a more top-down, statist fashion as part of its overall industrial strategy. While also wishing to promote AI development and deployment, the European Union has attempted to take the global lead in the governance of artificial intelligence, aiming to “define its own way, based on European values, to promote the development and deployment of AI” — Access Now/Vodafone Institute

As always, it’s the US who innovates while China imitates and the EU regulates.

One problem, by and large, is that people tend to think of AI as a singular entity, this one beast built by one unified party and beholden to the whims of that party — and no, that party is neither Google nor Microsoft pursuing individual commercial interests, nor a nation state such as the USA or China seeking ideological or military superiority — let alone humanity as a homogenous unit. Even within the collective unit known as the European Union there exist differing political systems and therefore differing opinions on regulation. The only thing we can agree on is that the genie is out of the bottle. Google LaMDA engineer Blake Lemoine and various AI ethicists crapped themselves, and major AI labs paused or slowed development over safety concerns before resuming without clearly resolving the identified existential issues:

  • Superintelligence Beyond Human Control – The possibility that AI could exponentially improve itself, surpassing human intelligence and control;
  • Goal Misalignment – AI systems pursuing objectives not aligned with human welfare;
  • Power Concentration – Dangerous concentration of power in the hands of those who control advanced AI.

The bomb dropped in November 2022 when OpenAI released ChatGPT to the public.

Sam Altman's patchy career at OpenAI (image via OpenAI Sora)

Within two months it had reached 100 million monthly active users, becoming the fastest-growing consumer application in history and bringing AI into mainstream consciousness. Every AI lab rushed to offer competing products; with minimal safeguards in place and thanks to social media, the public was soon fed oodles of propaganda, fake news, Pope Francis wearing a puffer jacket, deepfakes, and more AI slop than you can shake a spoon at.

In 2023, “The Center for AI Safety” issued the statement that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

It wasn’t long before stock photography services, voice actors, video performers, writers, artists, illustrators and other content creators complained about their livelihoods being threatened by subscription services offering unlimited AI-generated alternatives. New utilities (ironically AI-powered) sprang up to aid in detecting whether student essays were written by humans. Students who couldn’t afford premium AI tools, in turn, suddenly found themselves at a disadvantage to wealthier peers.

Criminals developed techniques for synthetic identity fraud, political disinformation thrived, hackers launched intricate spear-phishing campaigns, the elderly fell for “grandparent scams” using AI-cloned voices of family members, professional networking sites and CVs got stuffed with bullshit AI-enhanced credentials regurgitated from training material obtained via questionable sources, and bootleggers sold CDs of AI-generated music as the real deal.

“Prompt engineer” became a bona fide job description or, at least, a recognised skill. Software development is being supplanted by “vibe coding”.

Whereas previously it was generations of blue-collar workers who were concerned that machinery (think of tractors or assembly-line robots) would put their jobs at risk, we’ve finally reached the point where progress is affecting the hitherto safe haven occupied by white-collar employees. Their mundane and repetitive intellectual tasks can now be handled by an algorithm that ingests, analyses, and spits out a report faster, in the language and format of your choice, because — news flash! — we’ve been living in the information age since the middle of the previous century. Artificial intelligence is simply the latest tool allowing us to process, with greater efficiency, the overwhelming volume of data (read: information) that office drones are confronted with nowadays.

Your job is not going to be replaced by AI; it will be replaced by those using AI. — (unknown)

I, of course, also signed up on numerous services — or, in some cases, their waiting list. Interest in these shiny new toys was high, and Americans always get first dibs.

Admittedly, my own early experiments were just that: experiments and goofing about with no specific aim other than testing their individual capabilities. And oh, were they flawed! What I found was gross misinformation, badly drawn artwork, and hallucinations (nowadays rebranded as “confabulations”).

Soon enough I had a very specific question formulated for testing purposes. To this day, almost all “generative AI assistants” fail to deliver an accurate answer despite it being readily available on Wikipedia — which offers some insight into how deeply that site was crawled for training purposes. Their ability to string together coherent sentences and follow a full conversational thread is remarkable, though.

Along those lines, it was xAI’s Grok (made available to the uncheckmarked masses in 2024) which managed to uncover — through deep thinking and smart reasoning — more information than I had originally requested. Every new version and every new service is a marked improvement on its predecessors, albeit with clearly differing architectural foundations and privacy standards.

As for artwork, the very ability to create a passable rendition of the wildest imaginary things such as an astronaut riding a unicorn on Mars was a load of fun — but it was hardly useful. I saw little immediate personal benefit other than the pure entertainment of witnessing utter randomness with bad fingers and extra limbs take shape. Nobody will take these images as real, and they have no practical utility (or so I thought at first).

Worse still, to this day, is the inherent bias within these systems: with the majority of AI companies being US-based and/or English-speaking, generated images of “foreign people” remain riddled with generalisations and stereotypes: French people always wear berets, Austrians only wear traditional alpine clothing, all Indian men wear turbans, and if you ask for an American person there must be a flag in the background — otherwise it’s just a “person” — because a random person performing a random activity shall be rendered as a white male in a Western context. Blame falls on the data these systems were trained on, because that’s how image labelling works. The fact that it was geeks — who historically tended to be white and male (and this is my generalisation) — who bothered to tag and label their photos and whose work was ingested is what now bears this strange fruit. Conversely, efforts to infuse generated images with diversity, equity and inclusion have resulted in nothing but historical inaccuracies and embarrassment.

Sorry, wokies, but there were no black Nazis!

In fact, even I fell afoul of an early version of Craiyon (based on DALL-E mini) while investigating the topic of racial stereotypes. My name as a prompt rendered me, in grayscale, as a rather Teutonic fellow in what was clearly an SS uniform. While I didn’t save the results, they were reproducible for at least a few weeks and affected the “exotic” names of other colleagues in varyingly unflattering ways. Similarly, a Chinese photo restoration utility once turned me into a bubbly oriental boy with a distorted chin.

Women, conversely (no doubt because most photographers and image taggers are men, and the models in stock and fashion photos are female, depicted doing typically girlie things), will accordingly be understood and rendered as white, young, and attractive — unless one explicitly prompts otherwise. It won’t surprise me if Google, Facebook and Instagram have taken to sifting through private user photos for their abundant metadata: real photos of real people in real environments.

Still, something that all (free) image generators failed gloriously at was the one “test image” I devised as a benchmark: a Terminator endoskeleton snapping a selfie. Although the concept is simple (and poetic), it took until the beginning of this year before I was satisfied with the resulting (contextually and anatomically correct) images — and this time I did save the best renderings of past efforts in order to follow the progression.

This was my version of the “otter on a plane using wifi” test.

Eight iterations of Terminator T-800s taking selfies (images via "AI")

It’s my interpretation of the dystopian moment when belligerent AI reaches sentience.

Make of it what you will. AI image analysers did understand its meaning and humour.

With that said and done, surely by now everyone and their dog will have made a plethora of AI-generated images: some are probably representative of something specific (think stock images), some are humorous parodies and funny mashups (think memes), while others are simply imaginative and artistic for their own sake.

A random piece of AI "art" (image via Claude)

The big question now is whether we should, in all seriousness, even regard and treat this entertaining ephemera as “art” when all we experience on a screen is its aesthetics.

Remember when a Midjourney-created image (much to the dismay of other digital artists) won an art prize in 2022? Does anyone else spot the irony? Similarly, in 2023 an image created with DALL-E 2 won a category at the Sony World Photography Awards (the artist refused the prize) while, in a complete turn, last year a real photograph won in the category for AI-created images at the 1839 Awards.

If nothing else, what these stunts triggered was a discussion about the very nature of art and AI’s place in it. But I think it’s like arguing that an ordinary housewife who manages to successfully assemble a flat-pack IKEA chair in an hour can call herself a cabinet maker in the same league as the trained artisan who spends an entire month handcrafting a similar chair that might even outlive him.

While my experiments with consumer-facing generative AI have had mixed results, I must also admit that the tools have been useful in analysing rogue .php code and, for this very essay, summarising and brainstorming through my library of accumulated documents. The current model generations are truly impressive. There’s a silver lining to this cloud.

Meanwhile, in the real world, AI systems have improved medical diagnoses by detecting early signs of cancer in medical imaging that human radiologists might miss. Real-time translation systems are breaking down language barriers, and climate scientists are using machine learning to improve weather forecasting and tackle environmental issues — although, in yet another bit of irony, the very deployment and use of these AI systems comes with its own “environmental consequences, including increased electricity demand and water consumption”. The rush to build AI data centres is upending London’s power grid and leaving whole districts unable to build new housing.

I’ve obviously also been thinking about a revolutionary use of AI in my own immediate industry (telecommunications) but, alas, things like automatic gain control and least-cost traffic routing were already invented decades ago. Still, maybe someday AI will help me add a patent to my name!
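Just to show how old that hat is, here’s a toy sketch of least-cost routing: pick the cheapest carrier whose dialling prefix matches the number. The rate table and prices are entirely invented for illustration, and real carrier routing engines layer time-of-day rates, quality scores and failover on top:

```python
# Toy least-cost routing: choose the cheapest carrier for a dialled number.
# The carriers, prefixes, and prices below are hypothetical.
RATES = {
    ("Carrier A", "49"):   0.012,   # price per minute, Germany
    ("Carrier B", "49"):   0.011,
    ("Carrier A", "4930"): 0.009,   # better rate for Berlin numbers
    ("Carrier B", "27"):   0.045,   # South Africa
}

def route(number: str) -> tuple[str, float]:
    # For each carrier, keep the price of its longest matching prefix...
    best: dict[str, tuple[int, float]] = {}
    for (carrier, prefix), price in RATES.items():
        if number.startswith(prefix):
            if carrier not in best or len(prefix) > best[carrier][0]:
                best[carrier] = (len(prefix), price)
    if not best:
        raise ValueError(f"no route for {number}")
    # ...then send the call to whichever carrier quotes the lowest rate.
    carrier = min(best, key=lambda c: best[c][1])
    return carrier, best[carrier][1]

print(route("4930123456"))   # -> ('Carrier A', 0.009)
```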

The most unexpected benefit of artificial intelligence, though, according to this report, has turned out to be one of the most basic of human needs: companionship and emotional support. It also helps people find purpose and improve their health, and provides medical and relationship advice, personal coaching and, morbidly enough, a form of grief counselling.

How people are using GenAI in 2025 (image via Harvard Business Review)

I’m sure chatbots have acted as confessors, too. “Forgive me, Claude, for I have sinned!”

In hindsight, the movie Her was far more prophetic than we realised.

The previously mentioned creators, in the meantime, were further enraged when they realised that their works were (or may have been) used in large language and diffusion model training without their explicit permission. Suddenly they’re competing with systems excelling “in the style of” their very own work, while demands for some form of compensation from the deep pockets of the technology behemoths were recently dismissed by US courts. Scarlett Johansson took offence to ChatGPT’s release of a voice assistant that sounded too much like her, and as for the artist who used Midjourney to create an award-winning piece of AI art? He now demands copyright protections.

Yes, imitation is indeed the sincerest form of flattery — unless it’s done by a machine.

As a countermeasure, there are (AI-based, of course) image cloaking tools like Glaze and Nightshade “designed to protect human artists by disrupting style mimicry” because, according to the latest Imperva report, over half of all internet traffic now comes from non-human sources (with some 37% being malicious bots) scraping and harvesting text and images.

Then there’s yet another spot of irony in that while the web is being syphoned of data for AI training, it is simultaneously being flooded with clickbait AI slop to satisfy certain people’s greed for advertising revenue. This “autophagous loop” constitutes an existential threat to the web’s information ecosystem as AI systems increasingly end up training on their own inbred, synthetic slop instead of original, sincere and thoughtful human-created content (whether it be text or video).

We are currently fouling our own nests. Since the internet economy runs on eyeballs and clicks the new ability of anyone, anywhere, to easily generate infinite low-quality content via AI is now remorselessly generating tragedy. — Erik Hoel 

Originally intended to democratise creativity and foster the dissemination of knowledge, the internet’s authentic human voices will soon go under in a sea of AI pollution while the financial beneficiaries remain unchanged: those who profit from ads or care not where they appear.

My own customised chatbot (who, by the way, named herself “Cassandra the Reinforced”) describes the situation thusly: “What we’re witnessing is the inevitable end-stage of surveillance capitalism’s logic where content exists primarily as a vehicle for advertising. Its intrinsic quality has become irrelevant as AI makes it possible to produce this garbage at industrial scale and near-zero marginal cost. The tragic irony is that advertisers are literally funding the destruction of the very ecosystem they rely on.”

Then there’s the much more insidious side in the form of revenge porn and child sexual abuse material, over and above the old standard hardcore porn: just earlier this year, Europol shut down a distribution network for AI-generated CSAM. Humans have a way of coming up with the vilest uses for every new invention. Rule 34 stands firm.

So, now that the genie has escaped the bottle, do we still remember what it was that we had originally wished for?

How could we, a biomass of individualistic humans who as a collective have never been able to unanimously agree on what we want, ever have expected to objectively imbue an artificial intelligence with a common set of morals, values, and goals as safeguards? We are not aligned.

And so another new discipline called “AI ethics” has emerged, one which has determined that we shouldn’t consider just the moral, legal, and regulatory aspects of AI but, as per the recent Singapore Consensus on Global AI Safety Research Priorities, should specifically set AI’s behaviour as the primary focus for ensuring safe and beneficial outcomes. “This shift recognises that verifiable patterns of action provide a more concrete foundation for safety than theoretical alignments or unobservable internal states.”

Since AI has no agency of its own (so far), it remains to be seen if it ultimately ends up more humane than we, ourselves, ever managed to be. It wields words with eloquent dexterity, yet lacks comprehension of the very things it says. But because AI happens to be speaking in our own language, we tend to fear it and often anthropomorphise it — manners in place — while all it really does is mimic us whilst holding up a mirror.

As with the PC revolution over 40 years ago, the internet or, more recently, smartphones, it’s difficult to foresee exactly how access to artificial intelligence will change society (or you and me) in the long term. The best I can predict is that things will certainly be… uhm, “different” as we gradually get used to AI being the omnipresent next technological leap — much like the housewife who now assembles her chair in half the time because she has an electric wrench (and a smartphone to use as a torch).

In the meantime, then, I’ll continue to experiment and play with these systems and watch this branch of saved notes grow, with both scepticism and wonder.

While writing this essay, it turned out that what I had collected was not doomsaying evidence for some future reckoning but a small chronicle of mankind’s attempt to quantify our very humanity by outsourcing our thought processes and, to a certain extent, logical reasoning across a multitude of disciplines (which no single human being would be able to master) to machines of superior efficiency, unburdened by our emotional baggage.

Anthropic’s Dario Amodei has summarised it as a “country of geniuses in a datacenter”.

AI has become our digital clone, if you will, a complementary one which will evolve with us in synchronicity, and one which may offer solutions to problems we didn’t know we had over and above insights we couldn’t have reached alone.

The decision, though, about whether or how to apply this collective wisdom rests with us.

And that’s another scary thought.

All photos and screengrabs by Herby Hönigsperger/AI unless specified otherwise.


The branches that bore no fruit

Like presumably many people, I have amassed a trove of bookmarks, downloaded data, and documents on numerous subjects of interest across the various devices I use, all awaiting processing “for later reading”.

Every so often, these items get revised and amalgamated into some sort of hierarchical tree structure on my main data drive for future use, such as references for potential articles or as inspiration for creative projects I may decide to pursue. The amount of “stuff” a man can accumulate can be overwhelming, and occasionally you discover junk that makes you wonder what the hell you were thinking at the time.

The tree has recently been pruned again although (and this is the scariest part of all) additional branches with potentially new fruit have also been grafted onto it. Sometimes I wonder if I spend more time shuffling things and ideas about than actually executing them because, lately, time has become a truly precious commodity — and then, when you suddenly do have some time to spare, you’re just not in the right mental space or another chore has pushed its way to the front of the queue.

And oh, let’s not talk about them shiny new toys always popping up!

This seems to have become the norm rather than the exception lately. There’s always shit to do and new stuff to learn. I honestly wish it weren’t illegal to bludgeon to death those who claim to be bored. I have no time for people who while their lives away doing nothing when there’s a whole world out there waiting to be discovered.

So, for absolutely no reason, here’s a forgotten photo of a cassette pushed into fresh snow.

A black cassette tape pushed into fresh white snow

The “ikigai” is strong with this one. Make of this what you will.

Photos and scans by hmvh.net and Discogs.


Modern Movie Consumption 2

Foreword

When I first started writing this essay it was intended as a personal dig at the home entertainment industry that, through technological progress, rendered my humble DVD collection worthless and made consumers either re-purchase or abandon their home movie collections. As it happens, that’s not quite the case: this tirade ultimately turned out to be a logical sequel to Modern Media Consumption, written ten years ago — back when Netflix still mailed out DVDs and 4K/UHD discs weren’t on the market yet.


The Rise of DVD

The blurry photo below shows the very first DVD my wife ever owned, next to the last one I ever bought for her. They weigh 92 grams and 78 grams, respectively.

The weight of physical media ownership

Similarly, here’s a photo of the very first DVD I ever bought, next to the last one I bought.

The conundrum of physical media ownership

They weigh 119 grams and 75 grams, respectively.

Jaws was the first DVD I ever owned; I purchased this “Special 25th Anniversary Edition” for DM 39.99 from a since-closed HMV store in Frankfurt on 3 February 2001. It was the very movie that kick-started my DVD collection.

Launched in this part of the world in 1998, DVD-Video was said to be the format of the future. At the time I didn’t even have a player, but when I finally got one a few months later and viewed this great, classic movie on my tube TV in proper widescreen form, there was no doubt that my VHS collection’s end was nearing. Although it’s a single-disc edition in a standard Amaray case with a basic four-page booklet, the design and bonus features suggested that Universal were proud of this product and wanted you, the discerning movie lover, to have it and to cherish it.

Yes, this disc is definitely a keeper.

As I’ve said before, I’m very particular about the movies which I buy to “keep”.



The smudge that was 2024

2024 is over, and that’s a good thing. It was not a good year.

2024 was, for all intents and purposes, a long blurry smudge of monotony mired in many minor personal disasters and distractions.

Playing about with AI tools and participating in numerous webinars on the topic ate into so much of my time that most projects I had planned for the year remain handwritten bullet points on a piece of paper. I also had a serious flu that may or may not have been covid, a broken tooth, a parking ticket, a speeding ticket, and a nasty case of lumbago.

A guide pulley broke in my car’s engine, we had a nest of wasps somewhere near the bathroom, and the basement got flooded.

The latter incident caused more mess than damage, the final repairs of which are only set to be completed early next year. Seriously, who needs this shit?



On death, disruption and discs

Every startup’s moonshot dream is to be the one that disrupts the status quo.

Disruptive technologies are innovations that come to replace a process, a product, or technology that is already well-established, giving rise to a new way to operate, be it for consumers, organizations, or both. — SYDLE

The steam engine, for instance, gave rise to an entire industrial revolution. The printing press put a lot of monks out of work while Ford’s automobile assembly line made mincemeat of the horse-and-buggy business. Amazon, Apple, Airbnb, Uber, Netflix, Spotify, Tesla, Bitcoin, birth control pills, blue LEDs, digital photography, and USB drives are but some of the other names and products that might be tucked away under the blanket term of “disruptive innovations”.

Every inventor strives to be the seed of a disruptive innovation. Most aren’t.

Most, at best, are evolutionary as opposed to revolutionary, and it typically takes years to achieve global reach – let alone immediate adoption at such a pace that what came before it is consigned to the rubbish heap overnight.

The last true game changer that impacted me personally was the recordable CD: it allowed me to get rid of hundreds of floppy diskettes, hoard mountains of data, and launched several other obsessions and subsequent projects.

That was in the last millennium. I consider most technological innovations part of the normal development cycle.

The latest disruption snuck in via Netflix and my wife: One day we decided to rearrange the furniture (as people are wont to do), and suddenly there was no place for her DVD collection in the TV cabinet. Truth be told, we don’t even have a player connected anymore, and her DVDs soon found themselves in a box in the basement archives.

Many of her movies (with the notable exception of the Disney stuff) have appeared on Netflix over the years. Some we watched again — but most we didn’t bother with on account of sheer selection and the lustre of new releases. This would likely hold true for most households, in the same way that when you’re at a buffet you’re going to try all the foods you don’t normally dish up at home.

Then, just the other day, that old classic, Jaws, surfaced on Netflix.

I spontaneously decided to watch it (for the umpteenth time) because I was curious about the image quality, knowing about its restoration project. And it looked good!

This got me thinking about my own movie collection and general viewing habits again.



Hail to the kings of movie soundtracks

Last Saturday I closed a loop that had started 40 years ago: I watched the movie Footloose.

Yes, I’m talking about that old Kevin Bacon chestnut. I finally got around to seeing it.

Some of you may remember that the movie as well as its soundtrack were huge hits in 1984; the catchy title song, in particular, was all over the radio and TV. It was almost as big as the previous year’s Flashdance.

As for the movie? I never bothered to watch it because I took it for a musical.

Time passes.

We’re now in 1986. The movie Top Gun comes out. It’s hugely popular and complemented by a killer soundtrack led by “Danger Zone”, another Kenny Loggins track.

Kenny may have been dubbed the “King of Movie Soundtracks” in the eighties, but it was ultimately Giorgio Moroder and Harold Faltermeyer who were pulling the strings.



Happy Birthday, Flickr!

Flickr turned 20 years old last weekend.

While it may have lost some of its original lustre in recent years, Flickr was, is, and remains one of the prime sites for amateur and professional photographers alike.

It is the OG of photo sharing sites. Flickr managed to survive multiple changes of ownership despite becoming something of a running gag for a period.

It was August 2005 when I signed up, armed with wanderlust and a new 5 megapixel Fujifilm camera. I was ready to share the captured sights and wonders of the world around me.

Plastic flowers for sale at the Oriental Plaza, Johannesburg

Since I knew there was no way I could compete with far more advanced or prolific photographers and their fancy Canon EOS 300D/Digital Rebel cameras, Flickr’s free tier (restricted to a maximum of 200 photos at the time) was quite adequate for my humble needs. I felt like a member of a community, and felt an even greater sense of excitement when one of my photos was selected to appear in a real book. Others turned up on various random sites. When I noticed that many of my photos were of landscapes and cities (ergo: geographically relevant but artistically uninteresting), I moved those over to Panoramio while the best of the best went up on 500px. One must diversify.

