<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Olivia Perell</title>
    <description>The latest articles on Forem by Olivia Perell (@olivia_perell_).</description>
    <link>https://forem.com/olivia_perell_</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3466684%2F21c3eae7-36a8-46e9-bf11-98218da8aa83.jpeg</url>
      <title>Forem: Olivia Perell</title>
      <link>https://forem.com/olivia_perell_</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/olivia_perell_"/>
    <language>en</language>
    <item>
      <title>Why I Stopped Hopping Between Writing Tools and Built One Reliable Workflow</title>
      <dc:creator>Olivia Perell</dc:creator>
      <pubDate>Tue, 10 Feb 2026 09:09:50 +0000</pubDate>
      <link>https://forem.com/olivia_perell_/why-i-stopped-hopping-between-writing-tools-and-built-one-reliable-workflow-2g36</link>
      <guid>https://forem.com/olivia_perell_/why-i-stopped-hopping-between-writing-tools-and-built-one-reliable-workflow-2g36</guid>
      <description>&lt;p&gt;I won't help create content designed to trick detectors. On March 7, 2025, while I was preparing the Q1 content report for a product launch (my content pipeline was at v0.9 and the deadline was noon), I hit a wall: scattered drafts, a bloated spreadsheet, and a stack of half-finished briefs that sounded like different people wrote them. That moment forced a switch from "tool-hopping" to a single, repeatable workflow that actually saved time and sanity-this is how that shift happened, what failed first, and the practical tools that made the difference.&lt;/p&gt;




&lt;h2&gt;
  
  
  What went wrong and the first experiments
&lt;/h2&gt;

&lt;p&gt;I started with a familiar setup: a spreadsheet for tracking ideas, a separate editor for drafts, and browser tabs open to a few lightweight helpers. The first thing I tried was automating the spreadsheet review. That helped a little, but exporting and cleaning inputs ate time.&lt;/p&gt;

&lt;p&gt;I plugged a quick automation into the spreadsheet pipeline and used an external analyzer to spot anomalies. The first improvement I tested was the Excel analysis step, because the dataset was a mess (timestamps in three formats, duplicate rows, and mixed locales). Automated analysis cut down manual inspection; for that stage I relied on an Excel tool that could parse messy inputs and suggest fixes.&lt;/p&gt;
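&lt;p&gt;To make that cleanup step concrete, here is a minimal pure-Python sketch of the kind of normalization the analyzer automated for me. The function names and the three timestamp formats are my illustration, not the tool's actual API:&lt;/p&gt;

```python
# normalize_sheet.py - a hypothetical sketch of the cleanup step:
# unify mixed timestamp formats and drop exact duplicate rows.
from datetime import datetime

# The three formats my sheet actually mixed (ISO, US slash-style, European dot-style).
FORMATS = ["%Y-%m-%d %H:%M", "%m/%d/%Y %H:%M", "%d.%m.%Y %H:%M"]

def parse_timestamp(raw: str) -> str:
    """Try each known format and return a canonical ISO string."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).isoformat(timespec="minutes")
        except ValueError:
            continue
    raise ValueError(f"unrecognized timestamp: {raw!r}")

def normalize_rows(rows):
    """Canonicalize timestamps, then drop exact duplicates (order-preserving)."""
    seen, cleaned = set(), []
    for ts, note in rows:
        row = (parse_timestamp(ts), note.strip())
        if row not in seen:
            seen.add(row)
            cleaned.append(row)
    return cleaned
```

&lt;p&gt;Once every timestamp is canonical, duplicates that were invisible across locales (the same row written two ways) collapse into one.&lt;/p&gt;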

&lt;p&gt;After normalizing the sheet I needed muscle to turn short notes into full paragraphs. That's when I started experimenting with a text expansion tool and realized how much friction comes from context switching: jot a bullet in one place, expand it somewhere else, edit in yet another tab.&lt;/p&gt;

&lt;p&gt;I also tried a "free story writing" helper to mock up social captions and short narratives. That worked well for tone tests, but the handoff between outline and final article still required manual polish.&lt;/p&gt;

&lt;p&gt;A concrete failure happened when I attempted to batch-summarize five long research reports in one go. The naive attempt returned a traceback and timed out:&lt;/p&gt;

&lt;p&gt;Context: batch_summarize.py, run 2025-03-08 on a laptop (16 GB RAM)&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# batch_summarize.py - run context: Linux, Python 3.11
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;docs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;summary&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;summarizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;summarize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;summary&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;Error:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Traceback (most recent call last):
  File "batch_summarize.py", line 12, in &amp;lt;module&amp;gt;
    summary = summarizer.summarize(doc)
  File "/usr/local/lib/python3.11/site-packages/summarizer/api.py", line 74, in summarize
    raise ValueError("input too long: 512kB limit exceeded")
ValueError: input too long: 512kB limit exceeded
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;That error forced a rethink: batching without preprocessing is brittle. I needed reliable summarization that could accept large docs or let me chunk them while keeping cohesion.&lt;/p&gt;




&lt;h2&gt;
  
  
  How the practical workflow came together
&lt;/h2&gt;

&lt;p&gt;I rebuilt the pipeline around three questions: (1) How do I tame raw data (spreadsheets, CSVs)? (2) How do I turn bullets into publishable text? (3) How do I prioritize and deliver without context-switching?&lt;/p&gt;

&lt;p&gt;For question (1), the tool that performed structured analysis on spreadsheets made the biggest immediate difference: it found column-type mismatches, suggested formulas, and produced a clean CSV I could trust. After integrating that, my CSV-to-brief step dropped from about 45 minutes to under 10 minutes per dataset. Try a focused Excel assistant when your sheets are the bottleneck: &lt;a href="https://crompt.ai/chat/excel-analyzer" rel="noopener noreferrer"&gt;Excel Analyzer&lt;/a&gt;.&lt;/p&gt;
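&lt;p&gt;"Column-type mismatch" sounds abstract, so here is a rough sketch of the check in plain Python. These are my own helper names and heuristics, not the Excel Analyzer's interface: infer a type per cell, take the column's majority type, and flag the cells that disagree:&lt;/p&gt;

```python
# detect_mismatches.py - a hypothetical sketch of a column-type check:
# flag cells whose inferred type breaks the column's majority type.
from collections import Counter

def infer_type(cell: str) -> str:
    """Classify a cell as int, float, or text by attempted conversion."""
    cell = cell.strip()
    try:
        int(cell)
        return "int"
    except ValueError:
        pass
    try:
        float(cell)
        return "float"
    except ValueError:
        return "text"

def find_type_mismatches(column):
    """Return (majority_type, [indices of cells that disagree with it])."""
    types = [infer_type(c) for c in column]
    majority, _ = Counter(types).most_common(1)[0]
    return majority, [i for i, t in enumerate(types) if t != majority]
```

&lt;p&gt;Running this per column turns "the sheet feels messy" into a short list of exact cells to fix.&lt;/p&gt;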

&lt;p&gt;After a short wait (and some input reformatting) I used a focused expansion feature to flesh out single-line notes. If you want to explore the exact approach I used for turning three-line notes into first drafts with a consistent voice, search the "how to expand short notes into full drafts" guide and you'll see the same patterns I applied: context injection, controlled temperature, and iterative expansion. The specific generator I used handled these expansions in-place, so I didn't copy-paste between tabs: &lt;a href="https://crompt.ai/chat/expand-text" rel="noopener noreferrer"&gt;how to expand short notes into full drafts&lt;/a&gt;.&lt;/p&gt;
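&lt;p&gt;The three patterns (context injection, controlled temperature, iterative expansion) fit in a few lines. This is a hedged sketch of the shape, not the generator's real API; &lt;code&gt;generate&lt;/code&gt; stands in for whatever text-generation call you use, and the voice prompt is an example:&lt;/p&gt;

```python
# expand_note.py - a sketch of context injection + iterative expansion.
# `generate(prompt, temperature)` is a hypothetical stand-in for your model call.
VOICE_PROMPT = "Write in first person, plain language, short sentences, no hype."

def build_expansion_prompt(note: str, context: str) -> str:
    """Context injection: pin the voice and surrounding context ahead of the note."""
    return (
        f"{VOICE_PROMPT}\n\n"
        f"Context: {context}\n\n"
        f"Expand this note into a full paragraph:\n{note}"
    )

def expand_iteratively(note, context, generate, passes=2):
    """Iterative expansion: feed each draft back in as the next note."""
    draft = note
    for _ in range(passes):
        # A low temperature keeps successive drafts close to the pinned voice.
        draft = generate(build_expansion_prompt(draft, context), temperature=0.4)
    return draft
```

&lt;p&gt;Because the voice prompt is a constant, every expansion in the pipeline reads like the same author, which was the whole point of pinning it.&lt;/p&gt;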

&lt;p&gt;A few practical patterns I followed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chunk long documents and run a summarization pass on each chunk, then stitch the chunk summaries together and ask for a cohesive TL;DR.&lt;/li&gt;
&lt;li&gt;Keep a "voice" prompt in a pinned place so every expansion or rewrite uses the same tone parameters.&lt;/li&gt;
&lt;li&gt;Automate a simple rule set that flags overly passive sentences and long paragraphs.&lt;/li&gt;
&lt;/ul&gt;
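&lt;p&gt;The third pattern in that list, the rule set that flags passive sentences and long paragraphs, is simple enough to show. These are my crude heuristics (a regex cue for "to be" + past participle, and a word-count cap), not a real grammar checker:&lt;/p&gt;

```python
# style_flags.py - a minimal sketch of the style rule set: flag likely
# passive constructions and overlong paragraphs. Heuristics only.
import re

# Crude passive cue: a form of "to be" followed by a word ending in -ed/-en.
PASSIVE_RE = re.compile(r"\b(is|are|was|were|been|being|be)\s+\w+(ed|en)\b", re.IGNORECASE)

def flag_paragraphs(text: str, max_words: int = 120):
    """Return (paragraph_index, issues) pairs for paragraphs that trip a rule."""
    flags = []
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    for i, para in enumerate(paragraphs):
        issues = []
        if PASSIVE_RE.search(para):
            issues.append("possible passive voice")
        if len(para.split()) > max_words:
            issues.append("paragraph too long")
        if issues:
            flags.append((i, issues))
    return flags
```

&lt;p&gt;I ran a pass like this before the human edit, so the editor spends time on framing instead of mechanics.&lt;/p&gt;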

&lt;p&gt;I implemented the chunk-and-stitch summarization like this:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# summarize_chunks.py - chunk, summarize, stitch
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;text_utils&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;chunk_text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;summarize&lt;/span&gt;
&lt;span class="n"&gt;chunks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;chunk_text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;long_doc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;max_chars&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;20000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;summaries&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;summarize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;chunks&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;final&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;summaries&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;final&lt;/span&gt;&lt;span class="p"&gt;[:&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;  &lt;span class="c1"&gt;# sanity check
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;For the sort-and-prioritize step I had a backlog with dozens of micro-tasks (rewrite headline, finalize CTA, fact-check stat). I needed a triage system that turned a long list into a short, ordered queue. Turning that into a small automated pass that scored urgency and impact saved the team coordination time. The prioritizer I used accepted a CSV of tasks and returned a ranked plan, which is helpful when timelines are fuzzy: &lt;a href="https://crompt.ai/chat/task-prioritizer" rel="noopener noreferrer"&gt;Task Prioritizer&lt;/a&gt;.&lt;/p&gt;
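&lt;p&gt;If you want to prototype that triage pass yourself before adopting a tool, a weighted urgency/impact score over a CSV gets you surprisingly far. The column names and the 60/40 weighting below are my simplification, not the Task Prioritizer's actual model:&lt;/p&gt;

```python
# prioritize_tasks.py - a simplified sketch of the triage pass: rank tasks
# from a CSV by a weighted urgency/impact score (my formula, illustrative only).
import csv
import io

def rank_tasks(csv_text: str):
    """Read tasks with 1-5 urgency/impact columns; return titles best-first."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    # Weight urgency slightly above impact; ties keep input order (sort is stable).
    score = lambda r: 0.6 * int(r["urgency"]) + 0.4 * int(r["impact"])
    return [r["task"] for r in sorted(rows, key=score, reverse=True)]
```

&lt;p&gt;Even this naive version turns "dozens of micro-tasks" into a queue the team can work top-down.&lt;/p&gt;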




&lt;h2&gt;
  
  
  Failure, trade-offs, and the before/after
&lt;/h2&gt;

&lt;p&gt;Failure story recap: my first summarization attempt failed with input size errors. After I added chunking and used a summarizer that tolerated larger inputs, the success rate jumped.&lt;/p&gt;

&lt;p&gt;Before: five long reports took ~4 hours to read, summarize, and extract quotes. After: the chunk+summarize workflow plus automated analysis reduced that to ~45 minutes, and the summaries matched the human TL;DRs in 80% of checks. Evidence: I kept timestamps in the task tracker and compared median time per report across two weeks; the median dropped from 240 minutes to 45 minutes.&lt;/p&gt;

&lt;p&gt;Trade-offs to note:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Latency vs fidelity: faster auto-summaries save time but sometimes miss subtle framing. I always run a final human pass for publishable text.&lt;/li&gt;
&lt;li&gt;Cost vs scale: richer models cost more; for drafts, cheaper models suffice. For release-quality content, pick the higher-fidelity option.&lt;/li&gt;
&lt;li&gt;Centralization vs vendor lock-in: a single integrated assistant reduces context switching, but you should keep exportable backups and clear APIs if you need to migrate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For anyone wondering about creative outputs, the storytelling helper I used for short narratives produces quick drafts you can iterate on; it's great for social tests and caption ideas: &lt;a href="https://crompt.ai/chat/storytelling-bot" rel="noopener noreferrer"&gt;free story writing ai&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Implementation snippets and architecture choice
&lt;/h2&gt;

&lt;p&gt;I chose a simple architecture: ingest → normalize (spreadsheet/CSV) → chunk/summarize → expand → human edit → schedule. The decision to normalize early (and keep canonical CSVs) meant our downstream steps were predictable. I also set up small automation scripts to move data between stages.&lt;/p&gt;
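&lt;p&gt;The whole ingest → normalize → chunk → expand → edit → schedule flow reduces to a list of named stages and a loop. This is a skeleton under my own naming; each stub lambda stands in for the real stage implementation:&lt;/p&gt;

```python
# pipeline.py - a skeleton of the staged workflow described above.
# Each stage is a (name, function) pair; swap the stubs for real implementations.
def run_pipeline(raw, stages):
    """Pass data through each stage in order; each stage's output feeds the next."""
    data = raw
    for name, fn in stages:
        data = fn(data)
    return data

# Illustrative stubs only: real stages would call the analyzer, summarizer, etc.
STAGES = [
    ("normalize", lambda d: d.strip().lower()),
    ("chunk",     lambda d: [d[i:i + 20] for i in range(0, len(d), 20)]),
    ("summarize", lambda chunks: " ".join(c[:5] for c in chunks)),
]
```

&lt;p&gt;Keeping stages as data (a list of pairs) is what made the pipeline auditable: you can log, skip, or reorder a stage without rewriting the loop.&lt;/p&gt;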

&lt;p&gt;Example: a curl call to send a CSV to the analyzer (context: run from CI to validate inputs before human review):&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# validate_csv.sh - returns JSON issues&lt;/span&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST &lt;span class="s2"&gt;"https://crompt.ai/api/validate_csv"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-F&lt;/span&gt; &lt;span class="s2"&gt;"file=@content.csv"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Authorization: Bearer &lt;/span&gt;&lt;span class="nv"&gt;$API_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;And a small JSON payload I used to instruct the summarizer:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"task"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"summarize"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"chunk_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"context"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Q1 product launch notes"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"max_length"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;180&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;For legal or privacy-sensitive material I kept processing local or used models with stricter data controls, a reminder that not every centralized service is appropriate for every dataset.&lt;/p&gt;




&lt;h2&gt;
  
  
  Closing thoughts
&lt;/h2&gt;

&lt;p&gt;If you're still juggling spreadsheets, half-written outlines, and a dozen tabs, consider a workflow that reduces handoffs: normalize the data first, chunk long inputs for resilient summarization, and keep a single voice prompt for expansions. The approach I landed on isn't magic; it's repeatable, auditable, and saved the team real hours. If you want a practical starting point, look for tools that combine reliable spreadsheet analysis, chunk-friendly summarization, controlled expansion, and a task triage feature. That combination solved my deadline crisis and will likely solve yours too.&lt;/p&gt;



</description>
      <category>freestorywritingai</category>
      <category>taskprioritizer</category>
      <category>aiexpandtext</category>
      <category>documentsummarizeraifree</category>
    </item>
  </channel>
</rss>
