Recently, I’ve been questioning the role and capabilities of AI in various marketing- and content-related situations. So, I asked Perplexity those questions and am sharing the answers:
AI & Departmental Silo Mentality
In a post about the still-ongoing issue of “silo mentality” betwixt and between company departments (sales, marketing, finance, etc.), the question was: “Will AI technology help hasten organizational strategic alignment or create new kinds of tactical silos?” Here are highlights of Perplexity’s answer:
- “AI can do both—and which dominates depends almost entirely on how it’s introduced and governed.”
- “Evidence across research and practice converges on a conditional answer: AI accelerates whatever alignment (or misalignment) you already have.”
- Perplexity then noted that one of the key determinants of AI success in breaking down silos is a strategy-before-AI approach: “Strategy → AI, not AI → strategy”.
The bold emphases are Perplexity’s, but they are noteworthy — especially the realization that AI is a tactical tool that builds on the strategic inputs (and biases) of its users.
AI & Brand Differentiation
The ongoing need for marketers to create differentiation and relevance for their brand raised this question: “How would you respond to this statement regarding the differentiation dilemma marketers face: ‘the biggest issue with AI-generated content isn’t accuracy. It’s sameness.’” The Perplexity reply included these nuggets:
- “I’d respond by partly agreeing—but reframing it for marketers: Samenness [Perplexity’s misspelling] is the strategic issue with AI content, but it’s not the only one—and it’s not really a technology problem so much as a marketing one.”
- “You’re right that the deeper problem with AI in marketing isn’t ‘will it hallucinate?’—we can largely manage that. The real threat is that if we use it lazily, it drags our brand voice toward the industry average and makes us invisible. Our job isn’t to stop using AI; it’s to make sure the models are trained, prompted, and edited around our distinctive POV, data, and voice so we scale originality—not sameness.”
Perplexity agrees that sameness is a strategic issue with AI content, but it puts the onus on marketers to supply the right prompts and training to avoid it. It’s odd hearing the GIGO (“garbage in, garbage out”) defense from AI. Oh, and the claim that “we can largely manage that” (i.e., hallucination) comes right after the “samenness” typo?
AI & Creating Duplicative Content
On a lighter note, our post about the “Infinite Monkey Theorem” being updated to the “Infinite AI Theorem” asked: “Can AI bots be prompted in a way that will lead to an exact replication of a play like Shakespeare’s Hamlet?” Perplexity’s answer included:
- “No, not in any legitimate or allowed way—and in practice, not reliably even if you tried to ‘force it.’”
- “In theory, if someone trained or fine‑tuned a private model directly on Hamlet with no safety filters and then prompted it cleverly, they might get very close to or even exactly reproduce the play.”
- “That’s less ‘prompt magic’ and more ‘I trained a model with the goal of memorizing Hamlet and allowed it to regurgitate it,’ which is exactly the type of misuse current providers try to prevent.”
It is good to know there are some AI guardrails, and it appears Shakespeare’s works are safe from both monkeys and AI bots — at least for now.