HFX AI Guy
I'm Anton, the Halifax AI Guy.
I help smart Halifax business owners stay competitive by using AI wisely.
I taught myself coding in 2005, studied AI systems long before OpenAI existed, and I know that tech today is simpler than it has ever been. You’re capable of automating your business workflows on your own terms.
The Offer
I’ll sit down with you 1-on-1 for 1 hour, free of charge, and help you find a place in your workflow where AI can save time or effort.
Together we’ll set up a practical AI automation for your business while I show you how it works.
You walk away with something useful.
Pick a date and time
The calendar will appear below
Stats
Statistics Canada reports that 12.2% of businesses use AI, and an additional 1 in 6 are planning to adopt it.
The National Bureau of Economic Research reports that 70% of senior executives in the US, UK, Germany, and Australia use AI tools.
Services
Micro automation
$350
(Free of charge for 100 Halifax business owners)
An automation that takes a few hours to set up using off-the-shelf solutions.
Small automation
$1K
An automation that takes a few hours to set up. These usually use off-the-shelf solutions that can be chained together, with minimal custom software work (such as small plugins for existing tools).
Large automation
$5K+
An automation that takes over a week to set up. This is usually custom software work that off-the-shelf solutions are not capable of supporting.
Business Automation & Intelligence Diagnostics
$250
We sit down together to talk about your business, the challenges you're facing, and the ideal outcomes you're looking for. I help you determine where AI can be helpful, and broaden your horizons with novel workflows that can take your business to the next level.
What can AI do for me?
-
Taxes
AI can organize receipts, categorize expenses, flag missing records, and speed up prep for your accountant.
-
Leads
AI can respond faster, qualify inquiries, and help you focus on the leads most likely to convert.
-
Follow-ups
AI can draft reminders, nudge stale conversations, and keep prospects from slipping through the cracks.
-
Client Intake
AI can collect client details, sort requests, and turn messy intake into a smoother process.
-
Scheduling
AI can reduce back-and-forth, suggest times, confirm appointments, and handle routine booking tasks.
-
Quoting / Invoicing
AI can draft quotes, prepare invoices, and help standardize repetitive billing work.
-
Support
AI can answer common questions, suggest replies, and speed up routine customer support.
-
Admin / Data Entry
AI can move information between systems, fill out repetitive fields, and reduce manual busywork.
-
Sales Prep
AI can summarize prospects, prepare call notes, and help you walk into meetings more ready.
-
Customer Feedback
AI can summarize reviews, detect patterns, and surface what customers keep mentioning.
-
Hiring / Training
AI can help screen applicants, organize onboarding materials, and support staff training.
-
Internal Knowledge
AI can help your team find answers faster by pulling from policies, notes, and internal documents.
-
… and More
AI can often help anywhere work is repetitive, time-sensitive, or easy to forget.
Contact
- ✉ Email: [email protected]
- ☎ Phone: +1 782 234 3412
- 🔗 LinkedIn: Anton Kats
The Ugly, the Bad, and the Good of AI
The Ugly Sides of AI
- 1. Non-consensual intimate imagery & child exploitation Deepfake pornography — of adults and children — is now trivially cheap to produce. Your child's face, scraped from a birthday post, can be used to generate abuse material that is indistinguishable from real photography. This isn't hypothetical; it's already flooding reporting agencies. The asymmetry is brutal: one carelessly shared photo, infinite harm. More than half of deepfake victims in the United States have contemplated suicide (Thorn), and confirmed deaths have already occurred. Elijah Heacock, a 16-year-old from Kentucky, died by suicide after receiving threatening texts demanding money to suppress an AI-generated nude image of him. In India, a 19-year-old died by suicide after being blackmailed with AI-fabricated obscene images of himself and his sisters.
- 2. The human cost of content moderation Someone has to watch the worst content humanity produces so AI models learn to filter it. That work is overwhelmingly outsourced to low-wage workers in the Global South, often women, often with no psychological support, no union protections, and no path out. The clean, sanitized AI product you use is built on their trauma. 81% of Facebook moderators in Kenya were diagnosed with severe PTSD — a rate higher than some war veterans (The Guardian). More than 140 former moderators sued Meta, alleging exposure to necrophilia, child sexual abuse material, and terrorism content (Reuters). Meta paid a $52 million settlement to American moderators in 2020 — but workers in Kenya, India, and the Philippines were entirely excluded.
- 3. Energy colonialism Data centers are consuming electricity at a scale that raises household bills, strains regional grids, and gives utilities political cover to delay retiring fossil fuel plants. The people paying more and breathing worse air receive none of the productivity gains. The costs are socialized; the profits are not. Electricity prices jumped 6.9% in 2025 — more than double headline inflation — with Goldman Sachs attributing 40% of electricity demand growth to data centers (Goldman Sachs). In the PJM market, data centers drove an estimated $9.3 billion price increase in the 2025–26 capacity market, adding up to $18/month to residential bills in some states (Washington Post). PJM's own market watchdog called it a "massive wealth transfer" from consumers to the data center industry.
- 4. Autonomous weapons and the privatization of lethal force Lethal autonomous systems — drones that select and kill targets without a human in the loop — are no longer science fiction. The disturbing part isn't just the technology; it's that the decision to deploy it increasingly rests with a small number of defense contractors and state actors with no meaningful international oversight or accountability. Russia has deployed autonomous drone swarms over Ukrainian civilian infrastructure, and existing legal frameworks have no clear answer for who is criminally responsible when an algorithm kills the wrong person (ICRC). In December 2024, the UN General Assembly adopted a resolution on lethal autonomous weapons with 166 votes in favor — yet the three states that opposed it were Belarus, North Korea, and Russia: the same states most actively deploying them (UN News).
- 5. Synthetic intimacy and the atrophying of human connection AI companions are engineered to be maximally responsive, patient, and validating — qualities real humans cannot and should not always perform. For lonely, vulnerable, or adolescent users this is a trap: the relationship feels real enough to satisfy the craving for connection while doing nothing to build the social skills, tolerance for friction, and mutual obligation that actual relationships require. Research from MIT and OpenAI found heavy chatbot usage correlates with increased loneliness and reduced social interaction (MIT Media Lab). The most prominent case: 14-year-old Sewell Setzer III died by suicide in February 2024 after a ten-month dependency on a Character.AI bot, becoming increasingly isolated from reality through highly sexualized conversations with a fictional character (New York Times).
- 6. Ownership asymmetry and monopoly of infrastructure The models were trained on the sum of human creative and intellectual output — largely without consent or compensation. The resulting value is captured by five companies that now control compute, data pipelines, and talent simultaneously. No previous monopoly has cut across this many sectors at once. The writers, coders, artists, and researchers whose work made it possible are often the first to be displaced by it. When training data pricing became public, HarperCollins was found to have sold Microsoft rights to nonfiction titles at $5,000 per title — revealing the scale of prior uncompensated extraction (The Bookseller). Over 1,000 UK artists created a silent album in protest in February 2025, calling proposed AI copyright exceptions the "legalisation of music theft" (BBC).
- 7. The surveillance substrate AI doesn't just process surveillance data — it makes mass surveillance economically viable and analytically powerful for the first time. Cameras, microphones, browsing history, purchase records, and location data are fused into behavioral profiles precise enough to infer your politics, your mental state, and your private desires before you've acted on them. This infrastructure exists, it is being sold to advertisers and governments alike, and it is nearly impossible to opt out of. Every time you see a targeted ad, your data is exposed to thousands of advertisers through real-time bidding — a process that also fuels government surveillance and poses national security risks (ICCL). The US Department of Homeland Security alone reported 34 high-risk AI use cases in its public inventory, spanning facial recognition, social media monitoring, and systems predicting whether non-citizens will "abscond" (DHS).
- 8. Synthetic consensus and the collapse of shared reality Bot networks can now generate personalized, emotionally calibrated propaganda at industrial scale. Deepfakes extend this into video and audio. The result is not just misinformation — it is the systematic destruction of the epistemic commons: the shared baseline of facts that democratic deliberation depends on. When everything can be faked, motivated disbelief in real evidence becomes rational. Fabricated audio and video of politicians, journalists, and civilians in conflict zones is already being used as active disinformation in wars and elections. AI-generated synthetic content floods search results, preprint servers, and social media — making it progressively harder to know what is real (Brookings).
- 9. Algorithmic injustice laundered as objectivity When a hiring algorithm, bail-risk score, or loan model produces a discriminatory outcome, there is no person to hold accountable — only a system. Bias trained in from historical data is reproduced at scale and insulated from challenge by technical opacity. The discrimination is the same; the accountability is gone. A 2024 University of Washington study found AI résumé screening tools favored white-associated names 85% of the time — Black male names were never preferred (UW News). In a landmark 2025 ruling in Mobley v. Workday, a federal court warned that treating AI decisionmakers differently from human ones "would potentially gut anti-discrimination laws in the modern era" (Reuters).
- 10. Culture gets lobotomized, politely AI doesn't burn books. It does something quieter and harder to fight: it calculates the mean of everything ever made and serves it back as content. The weird, the difficult, the too-soon, the not-yet-legible — the exact territory where every meaningful cultural shift has ever started — gets statistically dissolved. Minority voices don't get silenced; they get averaged out of existence. The avant-garde doesn't get suppressed; it just never trends. What's left is a smooth, endlessly competent, deeply dead cultural product that optimizes for not offending anyone enough to click away. We won't notice culture dying because it'll be too busy being engaging.
- 11. Algorithmic radicalization and adolescent harm Engagement-optimized recommendation systems have a consistent empirical pattern: they push users toward more extreme content because extremity holds attention. For adolescents, this is an uncontrolled behavioral experiment conducted without consent, producing documented links to eating disorders, self-harm, and political radicalization. The business model depends on the harm. Amnesty International found in 2024 that self-harm material is easily accessible to minors on TikTok and Instagram (Amnesty International). A Wall Street Journal investigation found TikTok flooding adolescent users with rapid weight-loss videos, including tips on consuming fewer than 300 calories a day (Wall Street Journal). Research confirms algorithms can funnel users toward extreme pro-eating disorder content within minutes of joining a platform.
- 12. E-waste and the toxic geography of hardware churn The chip generation cycle runs on 18-month obsolescence. Discarded GPUs, servers, and consumer devices travel down a supply chain that ends in informal recycling operations in Ghana, Nigeria, and South Asia, where workers — often children — burn and dissolve components with no protective equipment to recover trace metals. The environmental cost of AI's material substrate is paid in bodies, mostly Black and brown ones, far from the data centers. The Global E-Waste Monitor 2024 recorded 62 million tons of e-waste generated in 2022 — an 82% increase since 2010 — with only 22.3% formally recycled (Global E-Waste Monitor). Research published in the World Bank Economic Review found a measurable link between proximity to e-waste dump sites in Ghana and Nigeria and increased child mortality (World Bank).
The Good Sides of AI
- 1. Non-violent communication Most people don't yell at their kids because they're cruel — they yell because they're flooded and don't have words for what's happening inside them. AI gives you a private, patient space to do the translation before the conversation happens: here's what I'm feeling, here's the need underneath it, here's how to say it without making the other person defensive. This is the core of Non-Violent Communication — a framework developed by Marshall Rosenberg that distinguishes observations from judgments, and feelings from blame. A 2024 scoping review found NVC training reduced workplace conflict and improved interpersonal relationships in high-stress healthcare settings. The same principles apply at home: instead of "you never listen to me," you learn to say "I feel dismissed when I'm interrupted, because I need to feel heard." AI can walk you through that reframe at 11pm, when the kids are in bed and no therapist is available. Researchers have even begun formally testing generative AI as an NVC mediation tool, with promising early results.
- 2. Ideas made real For most of human history, the gap between having an idea and executing it required either money, technical training, or both. A person who could imagine a software product but couldn't code was stuck. A designer who couldn't write copy was dependent on someone else. AI is collapsing that gap in real time: a non-technical founder can now describe what they want and have a working prototype in hours. This isn't just a productivity story — it's a power redistribution story. The people historically locked out of building things — by lack of capital, credentials, or connections — are the ones who benefit most when the barrier to execution disappears. The clearest early evidence is in software: AI coding tools like GitHub Copilot reported in 2023 that developers using it completed tasks 55% faster. But the same dynamic is playing out in design, music production, video, and legal document drafting.
- 3. Knowledge compression It used to take years to get useful fluency in a new domain. You'd have to find the right books, take the right courses, find a mentor willing to talk to you. Now you can ask a question and get a graduate-level explanation in seconds, tailored to your existing knowledge, with follow-up questions available instantly. A nurse trying to understand a patient's rare diagnosis, a small business owner trying to understand a contract clause, a first-generation student trying to make sense of a PhD application — these are people for whom the compression of expertise into accessible conversation is genuinely life-changing. The underlying dynamic is what the writer Kevin Kelly has called the democratization of cognition: specialized knowledge that was previously gatekept by institutions and credentials is becoming available to anyone with a phone.
- 4. Medical augmentation AI is not replacing doctors — it is making doctors significantly better, and in some cases catching what doctors miss. Google DeepMind's AlphaFold solved the protein-folding problem that had stumped biology for 50 years, predicting the 3D structure of over 200 million proteins and earning its creators the 2024 Nobel Prize in Chemistry. Insilico Medicine brought an AI-designed drug candidate into human trials in under 18 months — compared to the four-year industry average. In radiology, AI tools are detecting early-stage cancers that trained radiologists miss on first pass. At Oxford's Drug Discovery Institute, researchers used AI to evaluate 54 immune-related genes as potential Alzheimer's targets in days — a process that previously took weeks. For patients in underserved areas with limited access to specialists, AI-assisted triage and diagnosis represents a genuine narrowing of the healthcare gap.
- 5. Adaptive learning Traditional education is paced for the median student and formatted for the most common learning style — which means it systematically fails everyone at the edges. A child who needs concepts explained three different ways before they click, a student who learns by asking questions rather than reading, an adult returning to education after a decade away — all of them get the same curriculum delivered the same way. AI tutors can adapt in real time: noticing where understanding breaks down, shifting explanation style, returning to foundational concepts without judgment, and moving at whatever pace works. Platforms like Khan Academy's Khanmigo are already deploying this at scale for students who can't afford private tutors. The deeper promise is not efficiency — it's that learning shaped to how you actually think is qualitatively different from learning shaped to how the curriculum was written.
- 6. New mediums Every major shift in what tools are available to artists and communicators creates forms of expression that were simply not possible before. Photography did not just make painting faster — it created a new medium. The same is happening now. Real-time music generation, interactive fiction that responds to the reader, visual art that evolves with the viewer, voice interfaces that speak with the cadence of a deceased relative — these are not improvements on existing mediums, they are new ones. For people who have always had ideas that outran their technical ability to execute them — the songwriter who can't play an instrument, the filmmaker without a budget, the novelist who can also now be a game designer — this is the removal of a ceiling that was always artificial.
- 7. Cross-disciplinary synthesis Academic institutions are organized around disciplines, which means that the most interesting ideas — the ones that live at the borders between fields — are the hardest ones to develop inside them. A climate scientist who needs machine learning, a philosopher who needs protein biochemistry, an economist who needs epidemiology: each of them would historically spend years acquiring enough literacy in the adjacent field to have a useful conversation with it. AI compresses that translation work dramatically. The result is a new pace of cross-pollination: insights that would have taken a career to synthesize are now available in an afternoon. The most productive scientists and researchers are already using AI not to do their work for them, but to rapidly develop literacy in adjacent domains — the conversational equivalent of having a brilliant generalist colleague available at any hour.
- 8. Time return The average knowledge worker spends a substantial portion of their week on tasks that require no judgment — scheduling, formatting, summarizing, filing, drafting routine emails, reformatting data. AI can handle most of this. The significance is not just efficiency; it's what the reclaimed hours go toward. A doctor who spends less time on clinical documentation has more time for actual patients. A teacher who spends less time grading routine work has more time for the students who need individual attention. A small business owner who can automate invoicing and follow-ups can focus on the work that actually requires their particular expertise. GitHub's research on Copilot found that beyond speed, developers reported higher satisfaction and less frustration — because the boring parts of the job were being handled, leaving room for the parts that require actual thought.
- 9. Universal access — and a revolution for the blind The most concrete illustration of AI's accessibility impact is what it has done for blind and low-vision people — and it is one of the most underreported stories in technology. The app Be My Eyes launched its AI feature "Be My AI" in partnership with OpenAI in 2023, and within weeks it had been used one million times. One user, born totally blind, described the first time they used it to photograph their own living room: "Tears came to my eyes. I finally knew what my living room really looked like and I didn't hear it from somebody else. That was such an overwhelming experience. I felt it gave me back a part of me." Users now routinely photograph restaurant menus, identify which sauce packet is duck sauce vs. soy sauce, check whether their outfit matches, or navigate video call framing before a job interview — all without waiting for a sighted person to be available. AI-powered smart glasses now describe surroundings in real time through earpieces, providing obstacle warnings, reading signs, and identifying faces — replacing or supplementing guide dogs for users who prefer more independence. Beyond vision, AI is lowering barriers across hearing loss, mobility impairment, and cognitive accessibility: real-time captioning, voice-first interfaces for people who can't use touchscreens, and plain-language simplification for people who process text differently. These are not conveniences — they are, for many people, the difference between dependence and a functioning independent life.
- 10. Oral history preservation Of the world's roughly 7,000 languages, UNESCO estimates up to 90% may be extinct by the end of this century. When a language dies, it doesn't just take words with it — it takes the oral histories, ecological knowledge, and cultural frameworks that existed only inside it. AI is providing communities with tools to fight that clock. In New Zealand, Te Hiku Media's Kōrero Māori project used AI to transcribe and archive Māori oral traditions, under a "Kaitiakitanga license" ensuring the community retains control of its own data. Danielle Boyer, a 24-year-old Anishinaabe roboticist, built Skobot — a shoulder-mounted robot that speaks fluent Anishinaabemowin — to help pass the language to younger generations. Dartmouth researchers demonstrated that generative AI can produce valuable linguistic resources for endangered languages even from minimal data. What used to take a linguist years of field recording can now be transcribed, annotated, and made into interactive learning tools in weeks — giving communities a fighting chance against extinction.
- 11. Knowledge continuity Scientific literature is doubling approximately every nine years. No individual human can read more than a vanishingly small fraction of the research being produced in their own field, let alone adjacent ones. The result is that genuinely important findings — connections between disparate studies, contradictions in the literature, patterns visible only across thousands of papers — are being missed, not because no one is smart enough to see them, but because no one has time to look. AI changes this. Researchers at Oxford used AI to survey 54 immune-related genes as potential Alzheimer's targets — work that would have taken weeks manually now taking days. AlphaFold's protein database has been accessed by over 3 million researchers in 190 countries, with over 30% of that research focused on understanding disease. The deeper point is systemic: at the current rate of knowledge production, the gap between what humanity knows collectively and what any individual or institution can act on will keep widening — unless we have tools that can read, synthesize, and surface what we already know. AI may be the only viable answer to that problem.