Open for Work

Harmonising life through systematic thinking.

Systematic approaches to complex problems through AI, automation, and thoughtful analysis.

01

WHAT I DO

See All

The Problem

Your team already pays for tools with AI capabilities they're not using. ClickUp Brain, Notion AI, Microsoft 365 Copilot—these features remain dormant while your people spend hours on tasks these tools could assist with.

Our Approach

  1. Audit current subscriptions for dormant AI capabilities
  2. Configure features and create team-specific prompts
  3. Monitor adoption and refine based on what works

Works Well When

  • You already subscribe to platforms with AI features
  • Your team performs repetitive administrative tasks
  • You have at least one person willing to champion adoption

Not Appropriate When

  • You need custom AI capabilities not covered by existing platforms
  • Your team strongly resists technology changes

Honest Assessment

Vendor claims tend toward the optimistic. Independent research supports meaningful but more modest gains. Actual results depend heavily on task type and adoption discipline.

Engagement Details

  • Duration: 8 weeks
  • Your Involvement: Varies
  • Prerequisites: None

The Problem

Your customer service team spends time on tasks AI could handle: drafting initial responses, summarising conversation history, categorising requests. Each repetitive email drains capacity that could go toward complex problems.

Our Approach

  1. Map communication workflows and identify pattern-based messages
  2. Configure AI features with human review gates (see the sketch below)
  3. Monitor quality metrics and adjust AI behaviour
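
As a rough illustration of step 2, here is a minimal triage sketch in Python. The categories, keywords, and queue names are invented for the example; a real build would use your helpdesk platform's own routing and AI features.

```python
# Hypothetical triage: pattern-based messages go to an AI-draft queue that a
# human reviews before sending; everything else goes straight to a person.
PATTERNS = {
    "refund_request": ["refund", "money back", "return"],
    "shipping_status": ["where is my order", "tracking", "delivery"],
}

def triage(message: str) -> str:
    """Return the queue an incoming message should land in."""
    text = message.lower()
    for category, keywords in PATTERNS.items():
        if any(kw in text for kw in keywords):
            # Pattern-based: AI drafts a reply, a human approves before send.
            return f"ai_draft_review:{category}"
    return "human_only"  # unique or complex: no AI draft

print(triage("Where is my order? The tracking hasn't updated."))
# -> ai_draft_review:shipping_status
```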

Works Well When

  • You use Front, Zendesk, Intercom, or Freshdesk
  • You handle 50+ customer interactions daily
  • Many interactions follow predictable patterns

Not Appropriate When

  • Your communications are mostly complex, unique situations
  • You handle highly sensitive matters (legal, medical, crisis)

Honest Assessment

Customer service AI has stronger evidence than most enterprise AI categories. Benefits concentrate in high-volume, pattern-based communications. Complex or emotionally sensitive interactions still require human handling.

Engagement Details

  • Duration: 8 weeks
  • Your Involvement: Varies
  • Prerequisites: None

The Problem

Your analysts spend more time finding problems than solving them. By the time someone notices a metric drift, the damage is done. Manual report building means insights arrive too late to matter.

Our Approach

  1. Connect to your data warehouse or BI platform
  2. Configure statistical anomaly detection with business-relevant thresholds (see the sketch below)
  3. Build automated alert and reporting pipelines
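
To make step 2 concrete, here is a minimal anomaly-detection sketch using a rolling mean and z-score. The window, threshold, and example series are placeholders; in practice both get tuned per metric with you.

```python
from statistics import mean, stdev

def anomalies(series, window=30, threshold=3.0):
    """Yield (index, value, z) for points that deviate from the rolling baseline."""
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline: z-score undefined
        z = (series[i] - mu) / sigma
        if abs(z) >= threshold:
            yield i, series[i], round(z, 2)

daily_signups = [100, 102, 98, 101, 99] * 8 + [160]  # final point is a spike
for idx, value, z in anomalies(daily_signups):
    print(f"day {idx}: {value} (z={z})")  # only the spike fires
```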

Works Well When

  • You have 12+ months of clean historical data
  • You use BigQuery, Power BI, Looker, or Tableau
  • Your analysts spend significant time on routine monitoring

Not Appropriate When

  • Your data is fragmented across disconnected systems
  • You lack baseline metrics to detect anomalies against

Honest Assessment

Anomaly detection is well-established. The challenge is tuning thresholds to avoid alert fatigue. Early-stage companies often lack the data history needed for meaningful baselines.

Engagement Details

  • Duration: 8 weeks
  • Your Involvement: Varies
  • Prerequisites: None

The Problem

Your team's knowledge is scattered across Notion, Google Drive, Confluence, and Slack. People spend 20% of their time searching for information. Institutional knowledge lives in individuals, not systems.

Our Approach

  1. Assess content landscape and quality
  2. Implement AI-powered retrieval with source attribution (see the sketch below)
  3. Establish governance for content maintenance
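
The sketch below shows the shape source attribution takes: every retrieved passage carries the document it came from. A production system would use embeddings and a vector store; the naive word-overlap scoring here only keeps the example self-contained, and the documents are invented.

```python
DOCS = {  # hypothetical inventory: source -> passage
    "notion/onboarding.md": "New starters get laptop access on day one via the IT portal.",
    "confluence/expenses": "Expenses under 50 GBP are auto-approved; submit within 30 days.",
    "drive/security-policy.pdf": "Customer data must never leave the EU production region.",
}

def retrieve(query: str, k: int = 2):
    """Rank passages by word overlap and return them with their sources."""
    q = set(query.lower().split())
    ranked = sorted(
        DOCS.items(),
        key=lambda item: len(q & set(item[1].lower().split())),
        reverse=True,
    )
    return [{"source": src, "passage": text} for src, text in ranked[:k]]

for hit in retrieve("how do I submit expenses"):
    print(f"[{hit['source']}] {hit['passage']}")  # answer plus citation
```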

Works Well When

  • You have significant documentation across multiple platforms
  • Knowledge retrieval is a frequent pain point
  • You can commit to content governance

Not Appropriate When

  • Your documentation is minimal or outdated
  • You lack capacity for ongoing content maintenance

Honest Assessment

Knowledge management AI requires clean, maintained content to work well. Most failures come from poor content hygiene, not technology. We will decline this engagement if you can't commit to governance.

Engagement Details

  • Duration: 12 weeks
  • Your Involvement: Varies
  • Prerequisites: None

The Problem

Manual bid management doesn't scale. By the time you adjust bids based on yesterday's performance, the market has moved. Meanwhile, you're leaving money on the table with suboptimal keyword allocation.

Our Approach

  1. Audit current campaign structure and performance
  2. Implement AI bid strategies with human oversight gates (see the sketch below)
  3. Build monitoring dashboards with automated alerts
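
One example of an oversight gate from step 2, sketched in Python: automated bid changes apply only within a per-day cap, and anything larger is held for human approval. The 20% cap is an assumed policy, and the function stands in for a real ads-platform API.

```python
MAX_DAILY_CHANGE = 0.20  # assumed policy: 20% move per keyword per day

def gate_bid_change(current_bid: float, proposed_bid: float):
    """Return (bid_to_apply, needs_human_review)."""
    change = (proposed_bid - current_bid) / current_bid
    if abs(change) <= MAX_DAILY_CHANGE:
        return proposed_bid, False
    # Clamp to the cap; a human must approve the rest of the move.
    direction = 1 if change > 0 else -1
    return round(current_bid * (1 + direction * MAX_DAILY_CHANGE), 2), True

print(gate_bid_change(1.00, 1.10))  # (1.1, False): within cap, auto-applied
print(gate_bid_change(1.00, 1.80))  # (1.2, True): capped, queued for review
```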

Works Well When

  • You spend £5k+ monthly on advertising
  • You use Google Ads, Meta Ads, or Amazon Advertising
  • You have conversion tracking in place

Not Appropriate When

  • Your ad spend is too low to justify automation investment
  • You lack clear conversion tracking

Honest Assessment

AI bid management works well at scale. Google's own data shows a 26% efficiency advantage for automated bidding. But automation amplifies both good and bad decisions—governance matters.

Engagement Details

  • Duration: 10 weeks
  • Your Involvement: Varies
  • Prerequisites: None

The Problem

Content production is a bottleneck. Your team has ideas but limited capacity to execute. First drafts take hours. Editing is inconsistent. Publishing cadence suffers.

Our Approach

  1. Establish content workflows with AI assistance points
  2. Build prompt templates for your brand voice (see the sketch below)
  3. Create quality review processes
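
A minimal sketch of what a brand-voice prompt template looks like in practice. The company name and voice rules are invented placeholders; real templates are derived from your brand guidelines.

```python
BRAND_VOICE = """You are drafting content for Acme Ltd (placeholder).
Voice rules:
- Plain British English, short sentences, no hype words.
- Address the reader as "you"; avoid passive voice.
- Every claim needs a number or a named source."""

def draft_prompt(task: str, notes: str) -> str:
    """Assemble a reusable drafting prompt around the fixed voice block."""
    return f"{BRAND_VOICE}\n\nTask: {task}\n\nSource notes:\n{notes}\n\nDraft:"

print(draft_prompt(
    task="Write a 100-word LinkedIn post announcing our Q3 report",
    notes="Revenue up 12% YoY; churn down to 2.1%; 3 new enterprise logos.",
))
```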

Works Well When

  • You need higher content volume
  • You have clear brand guidelines to work from
  • You can commit human review time

Not Appropriate When

  • You expect AI to replace human content judgment
  • Your content requires deep expertise AI can't replicate

Honest Assessment

AI-assisted content can increase volume 2-3x but quality requires human review. Pure AI content often underperforms in engagement and SEO. Use AI for drafts, not final output.

Engagement Details

  • Duration: 8 weeks
  • Your Involvement: Varies
  • Prerequisites: None

The Problem

Your expertise is valuable but difficult to scale. New hires take months to reach competency. Institutional knowledge leaves when people do. Clients can't access your expertise at 3am.

Our Approach

  1. Knowledge extraction and documentation
  2. Build custom Claude Projects or GPT assistants (see the sketch below)
  3. Implement quality assurance and feedback loops
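
The end state of step 2 can be as simple as extracted expertise embedded as a system prompt behind an API. Here is a hedged sketch using the official Anthropic Python SDK (pip install anthropic); the distilled rules, example domain, and model id are placeholders to substitute with your own extraction output and a current model.

```python
import anthropic

EXTRACTED_EXPERTISE = """You advise on UK import duty classification (example domain).
Distilled decision rules:
1. Always ask for the commodity code before quoting a rate.
2. For multi-material products, classify by the material giving essential
   character, and say which rule you applied.
3. Flag anything involving excise goods for human review."""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
reply = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # substitute a current model
    max_tokens=500,
    system=EXTRACTED_EXPERTISE,
    messages=[{"role": "user", "content": "What duty applies to a wool/nylon coat?"}],
)
print(reply.content[0].text)
```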

Works Well When

  • You have codifiable expertise worth preserving
  • You can invest 8-10 hours/week in knowledge extraction
  • You understand this creates a tool, not a replacement

Not Appropriate When

  • Your expertise is tacit and hard to articulate
  • You expect the AI to fully replicate human judgment

Honest Assessment

Domain-specific AI works when expertise is codifiable. The knowledge extraction phase is the hard part—most of the effort is human, not technical. Set expectations accordingly.

Engagement Details

  • Duration: 12 weeks
  • Your Involvement: 8-10 hours/week
  • Prerequisites: None

The Problem

AI is powerful but isolated. Your AI assistant can't access your CRM, update your project management tool, or query your database. Each integration is custom work.

Our Approach

  1. Assess integration requirements and security constraints
  2. Implement MCP servers for priority systems (see the sketch below)
  3. Build orchestration layer for multi-tool workflows
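
A minimal MCP server sketch using the official Python SDK's FastMCP helper (pip install "mcp[cli]"). The CRM lookup is a stub standing in for a real integration; the server name, tool, and data are illustrative only.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-bridge")  # hypothetical server name

@mcp.tool()
def lookup_customer(email: str) -> dict:
    """Return basic CRM fields for a customer (stubbed for this sketch)."""
    fake_crm = {"jo@example.com": {"plan": "pro", "renewal": "2026-03-01"}}
    return fake_crm.get(email, {"error": "not found"})

if __name__ == "__main__":
    mcp.run()  # serves over stdio so an MCP-capable client can call the tool
```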

Works Well When

  • You need AI to interact with multiple systems
  • You have technical capacity to maintain integrations
  • Your security requirements allow system connections

Not Appropriate When

  • You only need AI for conversation, not action
  • Your compliance requirements prohibit system integration

Honest Assessment

MCP is becoming the standard for AI-to-tool connectivity—IBM, Anthropic, and the Linux Foundation are aligned behind it. 2026 is the year this moves from experimental to production-ready.

Engagement Details

  • Duration: 10 weeks
  • Your Involvement: Varies
  • Prerequisites: None

The Problem

Some workflows require AI that can plan, execute, and adapt—not just respond to prompts. Sequential, rule-based automation isn't intelligent enough. You need systems that can reason.

Our Approach

  1. Identify workflows suited to agentic approaches
  2. Design multi-agent architectures with oversight gates (see the sketch below)
  3. Build monitoring and intervention capabilities
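
The skeleton of an oversight gate from step 2, in Python: the agent proposes actions, but anything outside a low-risk allow-list needs explicit human approval before it runs. The action names and allow-list are assumptions; a real system would plan with a model and execute real tool calls.

```python
AUTO_APPROVED = {"read_file", "search_docs"}  # assumed low-risk actions

def human_approves(action: str, args: dict) -> bool:
    return input(f"Allow {action}({args})? [y/N] ").strip().lower() == "y"

def run_agent(plan: list[tuple[str, dict]]):
    for action, args in plan:
        if action not in AUTO_APPROVED and not human_approves(action, args):
            print(f"blocked: {action}")
            continue
        print(f"executing: {action}({args})")  # stand-in for the real tool call

run_agent([
    ("search_docs", {"query": "Q3 churn"}),       # auto-approved
    ("send_email", {"to": "board@example.com"}),  # requires approval
])
```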

Works Well When

  • You have workflows requiring multi-step reasoning
  • You can accept experimental approaches
  • You have technical capacity for ongoing maintenance

Not Appropriate When

  • You need guaranteed, predictable outcomes
  • You're not comfortable with emerging technology risk

Honest Assessment

62% of organisations are experimenting with agentic AI, but only 11% are in production. This is higher-risk, higher-reward territory. We're explicit about what's experimental.

Engagement Details

  • Duration: 16 weeks
  • Your Involvement: Varies
  • Prerequisites: None

The Problem

AI adoption is happening with or without governance. People are using ChatGPT with company data. No one knows what's being automated. You need guardrails before something goes wrong.

Our Approach

  1. Assess current AI usage and risks
  2. Develop policies for acceptable use, data handling, and oversight
  3. Build monitoring and compliance mechanisms

Works Well When

  • You have regulatory or client requirements for AI governance
  • AI adoption is accelerating without clear guidelines
  • You want to enable adoption safely, not restrict it

Not Appropriate When

  • You're a small team where informal governance suffices
  • You want bureaucracy rather than enablement

Honest Assessment

Governance done wrong becomes an obstacle. Done right, it enables faster, safer adoption. 60% of organisations say responsible AI boosts ROI. The goal is control that accelerates, not restricts.

Engagement Details

  • Duration: 12 weeks
  • Your Involvement: Varies
  • Prerequisites: None

The Problem

You're automating existing workflows instead of redesigning them. The process that made sense with 10 people doesn't make sense with AI. You're digitising inefficiency.

Our Approach

  1. 20 hours of workflow observation before any proposals
  2. First-principles redesign with AI capabilities in mind
  3. Change management support for workflow transition

Works Well When

  • You have a strategic workflow worth redesigning
  • Leadership is open to fundamental change
  • You can commit to proper observation before solutions

Not Appropriate When

  • You want quick automation wins
  • You're unwilling to question existing processes

Honest Assessment

This is the hardest and most valuable service. Most organisations underestimate the change management required. We require 20 hours of observation before proposing any redesign—non-negotiable.

Engagement Details

  • Duration: 20 weeks
  • Your Involvement: Significant
  • Prerequisites: None

The Problem

You have multiple potential AI use cases competing for attention. Vendors pitch solutions. Teams request tools. Without systematic evaluation, you risk spreading resources thin or betting on the wrong use case.

Our Approach

  1. Inventory potential use cases
  2. Apply consistent evaluation criteria (see the sketch below)
  3. Prioritise and recommend with evidence-based rationale
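
As a sketch of step 2, a weighted scoring pass over candidate use cases. The criteria, weights, and scores are invented for illustration; the real rubric is built with you.

```python
CRITERIA = {"value": 0.4, "feasibility": 0.3, "data_readiness": 0.2, "risk": 0.1}

USE_CASES = {  # hypothetical candidates, scored 1-5 per criterion
    "support triage":  {"value": 4, "feasibility": 5, "data_readiness": 4, "risk": 4},
    "demand forecast": {"value": 5, "feasibility": 2, "data_readiness": 2, "risk": 3},
    "contract review": {"value": 3, "feasibility": 3, "data_readiness": 5, "risk": 2},
}

def score(scores: dict) -> float:
    return sum(weight * scores[name] for name, weight in CRITERIA.items())

for case, s in sorted(USE_CASES.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{score(s):.1f}  {case}")
```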

Works Well When

  • You have multiple potential AI use cases
  • You want objective assessment before committing resources
  • You're willing to deprioritise based on findings

Not Appropriate When

  • You've already committed to a specific direction
  • You want validation for a decision already made

Honest Assessment

Evaluation is cheap compared to wrong implementation. This service takes 3 weeks but can prevent 3 months of wasted effort. The output is a decision framework, not a guarantee of success.

Engagement Details

  • Duration: 3 weeks
  • Your Involvement: Varies
  • Prerequisites: None

The Problem

An estimated 78% of AI project failures stem from poor human-AI communication, not technology limitations. Your team has access to powerful AI tools but lacks the structured approach to use them effectively.

Individual experimentation produces inconsistent results. Without shared vocabulary and methods, teams can't build on each other's discoveries. Shadow AI usage creates security risks. The gap isn't access to AI—it's capability to use it well.

Our Approach

  1. Foundations — Mental models for human-AI collaboration. Understanding context windows, token economics, and why prompt structure matters. The shift from "prompt engineering" to "context engineering."

  2. Patterns — Practical techniques: zero-shot vs. few-shot prompting, chain-of-thought reasoning, role-based approaches, output formatting. Hands-on practice with real work tasks, not generic exercises (see the sketch after this list).

  3. Application — Role-specific workflows: operations (meeting summaries, documentation), marketing (content drafts, campaign ideation), analysis (data interpretation, report generation). Custom prompt templates for your recurring tasks.
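
To make the pattern vocabulary concrete, here is the same classification task expressed zero-shot, few-shot, and with chain-of-thought prompting. The ticket text and categories are illustrative.

```python
ZERO_SHOT = "Classify this support ticket as billing, technical, or other:\n{ticket}"

FEW_SHOT = """Classify support tickets as billing, technical, or other.

Ticket: "I was charged twice this month." -> billing
Ticket: "The export button crashes the app." -> technical

Ticket: "{ticket}" ->"""

CHAIN_OF_THOUGHT = """Classify this support ticket as billing, technical, or other.
First list the key phrases, note what each suggests, then give a one-word answer.

Ticket: {ticket}"""

ticket = "My invoice still shows the old price after the upgrade."
print(FEW_SHOT.format(ticket=ticket))  # few-shot usually beats zero-shot here
```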

Formats

  • 00A: AI Fluency Workshop (4 hours). Half-day intensive for 4–12 participants. Immediate competency and shared vocabulary.
  • 00B: AI Enablement Program (4–6 weeks). 4 × 90-min sessions + async practice + office hours. Embedded capability.
  • 00C: Individual Consultation (as needed). 1:1 sessions customised to role and challenges. Personal AI fluency and workflow optimisation.

Claims, Sources, and Evidence Quality

  • 78% of AI failures stem from human-AI communication issues. Source: ProfileTree 2025 Industry Analysis. Evidence: Moderate.
  • Certified prompt engineers command 27% higher wages. Source: LinkedIn Job Posting Analysis 2024. Evidence: Moderate.
  • 6-week programs outperform 3-day intensives for retention. Source: AI For Business Training Center 2025. Evidence: Moderate.
  • Only 14% of frontline employees receive AI training. Source: Boston Consulting Group 2024. Evidence: Strong.

Fit Assessment

Works Well When

  • Team uses AI ad-hoc without consistent methods
  • Organisation wants AI adoption but lacks internal capability
  • Prior AI projects failed due to user adoption issues
  • Budget allows skills investment before implementation
  • You're building relationship before larger engagement

Not Appropriate When

  • You need specific AI solutions, not general skills
  • Team is too small (1–2 people) for group workshop
  • Organisation has no intention of using AI beyond training
  • You expect workshop alone to transform operations

Honest Assessment

Workshop-style training has moderate evidence for skill development. Single-session workshops provide awareness and vocabulary but won't transform capability alone—real change requires sustained practice. Free resources exist (Anthropic Academy, OpenAI Academy, Coursera), so our value comes from role-specific application and hands-on work with your actual tasks, not generic curriculum. The 4–6 week program format delivers measurably better retention than intensives.

Engagement Details

  • Duration: Varies by format (see 00A–00C above)

02

RECENT WORK

See All

An interactive dashboard for tracking life balance across 10 domains.

A comprehensive AI-driven content pipeline for multi-channel publication.

03

CURRENT THINKING

See All

04

CURRENTLY

"Building the 6I-Communicator framework and refining the 'Website as View' methodology."