AI & Data

Most people who work with a mentor or coach share things that matter — and I think you should know exactly what happens to that information, without having to ask.

  • The only AI tools that come anywhere near client sessions or client data are the transcription and note-taking features in Google Meet — and those are never on by default. If you'd like to use them, we'll set that up together with your explicit permission. If you'd prefer not to, I'll take notes as best I can during and after the session.

    I want to be honest that opting out does affect what I can offer: without transcription I'll be working from memory after the session and from the notes I take in the moment, so those notes will be less complete, though session recordings remain available either way. Neither choice affects the mentoring itself, and neither is the wrong one.

    Outside of sessions, I use Claude (Anthropic) to help with the day-to-day running of the business — research, planning, structuring written materials, drafting content. All of that output is reviewed carefully before it's used. I would never publish or send anything I wouldn't write myself.

    Where you've given permission and it can be done without sharing personal or sensitive information, I may use Claude to help develop and tailor the base resources and frameworks that I then customise further for your situation. In practice this means the most sensitive tailoring work is done without AI assistance — because your data security matters more than my convenience. The parts that can be done with help free up time and keep costs lower. The parts that can't, don't get that shortcut.

    Claude has no access to any account that holds client information — no email, no invoices, no Cliniko data, nothing with your name or personal details attached. It works from sandboxed accounts that contain only general business information.

    I also use a range of other software tools to run the business, some of which have AI features built in. None of these touch client data. I've vetted all the software I use against the Australian Privacy Principles, and smart features in client-facing accounts — like Gmail — are disabled and reviewed regularly, because it's common for companies to enable them quietly. If something changes, I'll review it, follow the relevant obligations, and let you know if there's anything I think you should be aware of. Everything is documented in the Privacy Policy.

    I'm also actively looking for opportunities to use more ethical AI tools and locally run AI that keeps data within my own systems — reducing both privacy risk and my dependence on large commercial providers. That's an ongoing commitment rather than a fixed state, and this page will reflect it as things change.

  • Transcription and AI note-taking in Google Meet are off by default. If you'd like to use them, let me know before our first session and we'll set it up together.

    For resource and framework preparation: if you'd prefer I don't use AI assistance even for the non-sensitive parts of developing materials for you, just let me know. It won't change what you receive, though it will meaningfully increase preparation time.

    Neither choice affects the mentoring itself.

  • I'll be direct: AI doesn't align with my values in a lot of ways. The copyright issues are real and largely unresolved. The environmental cost is significant and frequently understated. And the pace at which companies are integrating AI — into products, into workplaces, into decisions that affect people's careers and livelihoods — often feels like it's moving faster than the consideration it deserves.

    The concern I keep coming back to is what gets lost when something that should be done with care starts to feel like it wasn't. A performance review delivered without evident thought. Feedback on something that really mattered, handled in a way that made the person wonder whether anyone had actually sat with it. Workplace decisions that carried real consequences but arrived feeling like they'd been processed rather than considered. Whether AI was actually involved in those situations often matters less than the feeling that a human wasn't fully present for them — and that feeling is worth taking seriously.

    So why do I use it at all? Because I'm also a disabled, chronically ill sole trader running a practice that genuinely needs to do more than one person can sustainably do alone. For someone managing executive dysfunction and brain fog, having a tool that helps with planning, structure, and the administrative weight of a small business isn't a shortcut — it's what makes it possible to keep going. I use it the way I use any other accommodation: carefully, on my own terms, and only where it genuinely helps without causing harm.

    I hold both of those things at once. AI is a tool I use and a technology I have real concerns about. I don't think that's a contradiction worth resolving into something tidier than it actually is.

    What I can tell you is how I use it here: with clear limits, with your data protected, and with the parts that require genuine human care done by a human. That's the line I hold.