Tip #205: Yay for lazy writers!

After years of working in the mainstream tech press, most recently at CNET, I now find myself removed from the grind of traditional journalism, working at Evernote on the platform team. I’m still writing about startups, in a column called Opportunity Notes, but since my goal with this column (and my Evernote job overall) is to actually and tangibly help entrepreneurs, and not just generate pageviews for a media company, my perspective on journalism is different. As a writer, I can relax.

Except, no, not really. I’ve been covering technology for over 20 years, and I have old-fashioned standards that transcend the company I’m working for. When I write about a product or a business, even if it’s for our corporate blog, I won’t write what I don’t believe or understand, and if the story can be made better by actually talking to someone involved with the product, then by God I’m going to make a call. I worked that way at CNET, and at Red Herring, Byte, and InfoWorld before that, and I work that way now.

You’d think that’s what all journalists and bloggers do, especially in such a competitive media environment. But they don’t. Not anymore. The drive to be first means that some writers at media-company sites post stories without doing any journalism. I know this because I now advise entrepreneurs on how to work with the media, and more than once, when I’ve given the standard advice (form a relationship, craft your pitch, be prepared to answer questions), the response I’ve gotten has been an incredulous look and a question like, “Shouldn’t I just write the story for them?”

“Oh no,” I say. “Writers hate that.”

But unfortunately, some (not all, but enough) do not. Entrepreneurs tell me that writers are asking them to send more pre-digested stories.

I’m getting this intelligence from another angle, too: I often find companies to cover by reading about them in other sources. To prepare my own stories, I call the entrepreneurs running these companies. In too many cases (two in the last two weeks), these entrepreneurs have told me that I’m the first writer who has actually contacted them before writing.

And I’m not even working for a news site anymore.

Cue the indignation. Feels good.

But let’s move beyond that. Because this is actually a great thing for you PR people!

Now all you have to do to get your story out is write it yourself and plant it in the hands of the right writer. The PR tip for today is this: Learn how to write the story you want to read about your company or product. Basically, that means writing a press release that sounds like a news story. There’s a fighting chance it’s exactly the story that will run, or at least the first story.

And hey, if you’re working with a startup or anyone building a new technology, drop me a line, too. Just a line. Not the whole story. Save that for the poor schlub who lives by the pageview and has to churn out six stories a day.


Filed under Reporting

Tip #204: It’s a press release, not a graduate thesis

While there’s something perversely beautiful about a press release that’s aimed way over the heads of the reporters who are likely to get it, please remember that the generally accepted protocol is to at least hint at what you’re talking about in plain English, so the clueless journo who receives it can figure out who might have the knowledge to decipher it and forward it along. Opaque releases get dumped.

Happy Holidays! Thought I’d update you on LexisNexis Big Data as we roll out new use cases in the upcoming year!

HPCC and Hadoop are both open source projects released under an Apache 2.0 license and free to use, with both leveraging commodity hardware and local storage interconnected through IP networks. Both allow for parallel data processing and/or querying across the architecture. This doesn’t necessarily mean that certain HPCC operations don’t use a scatter and gather model (equivalent to Map and Reduce), but HPCC was designed under a different paradigm and provides a comprehensive, consistent, and concise high-level declarative dataflow-oriented programming model.

One limitation of the strict MapReduce model is that internode communication is left to the Shuffle phase. This makes iterative algorithms that require frequent internode data exchange hard to code and slow to execute, as they need to go through multiple phases of Map, Shuffle, and Reduce, each representing a barrier operation that forces serialization of the long tails of execution. HPCC provides for direct inter-node communication at all times, and many of the high-level ECL primitives leverage it.

Another disadvantage for Hadoop is the use of Java for the entire platform, including the HDFS distributed filesystem, which adds overhead from the JVM. In contrast, HPCC and ECL are compiled into C++, which executes natively on top of the OS. This leads to more predictable latencies and faster overall execution; we have seen anywhere between 3x and 10x faster execution on HPCC compared to Hadoop on the exact same hardware.

Would love to explain more — any chance to set up a meeting or call on this?

Best,

[Professor Incomprehensible]
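
To be fair to the professor, there is a real point buried in that middle paragraph. Here is my rough translation as a toy Python sketch; the graph, the numbers, and every function name are mine for illustration, not anything from Hadoop or HPCC. The complaint is that in the strict MapReduce model, an iterative job must run a complete map, shuffle, and reduce cycle on every pass, and the shuffle is a barrier that waits on the slowest node before anything downstream can start.

# Toy, single-process illustration of the "barrier" point: an iterative
# algorithm (one PageRank step per round) must run a full
# map -> shuffle -> reduce pass per iteration, and no reducer can start
# until every mapper has finished. Illustrative only, not a real API.
from collections import defaultdict

# Tiny link graph: page -> pages it links to
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = {page: 1.0 / len(links) for page in links}

def map_phase(ranks):
    # Each "mapper" emits (target, contribution) pairs for one page.
    for page, outlinks in links.items():
        share = ranks[page] / len(outlinks)
        for target in outlinks:
            yield target, share

def shuffle_phase(pairs):
    # The shuffle groups values by key, and it acts as the barrier:
    # nothing reaches a reducer until all map output has arrived.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped, damping=0.85):
    # Each "reducer" combines the contributions for one page.
    return {page: (1 - damping) / len(links) + damping * sum(contribs)
            for page, contribs in grouped.items()}

# Ten iterations means ten full map/shuffle/reduce rounds. A dataflow
# system with direct inter-node communication could keep state resident
# instead of re-running the whole pipeline every round.
for _ in range(10):
    ranks = reduce_phase(shuffle_phase(map_phase(ranks)))

print({page: round(rank, 3) for page, rank in ranks.items()})

Thirty-odd lines of toy code to unpack three sentences of jargon, which is roughly the decoding work this release expects every recipient to do on their own.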

When I was a tech magazine editor, my general rule was to aim 10% of the stories in each issue over the heads of the majority of the audience. I wanted to give readers something to shoot for, and to show them what lay beyond the horizons of their knowledge.

But I do not think this is a good guideline for press releases.

Hat tip: Pat Houston.


Filed under Compassion, Email