Book Review of The Martian by Andy Weir

The Martian by Andy Weir was fantastic. I'm sitting in a bar right now with a wet napkin by my side because I teared up during the end of the book. It's that good.

The basic storyline is that an astronaut is stranded on Mars and then has to survive until he can be rescued. It's similar in theme to two movies of the last year: Gravity (with Sandra Bullock, surviving a shuttle mission gone wrong) and All is Lost (with Robert Redford, a sailboat is wrecked at sea -- the far better of the two movies, by the way).

This was a debut novel, originally self-published, so the protagonist's character development and emotions are a bit on the weak side. However, I know my own novels suffer from this as well, and it shouldn't be a deterrent from reading.

I was captivated and read the novel in three days, which is fast for me (kids, family, work, my own writing, etc.)

The Martian was endorsed by astronaut Chris Hadfield ("fascinating technical accuracy"), Hugh Howey ("takes your breath away"), Ernest Cline ("relentlessly entertaining"), Larry Niven, and many more.

And many thanks to whoever recommended this to me!


The Future of Data Centers, Small Computers, and Networks

I love trying to extrapolate trends and seeing what I can learn from the process. This past weekend I spent some time thinking about the size of computers.

From 1986 (Apple //e) to 2012 (Motorola Droid 4), my "computer" shrank 290-fold, or about 19% per year. I know, you can argue about my choices of what constitutes a computer, and whether I should be including displays, batteries, and so forth. But the purpose isn't to be exact, but to establish a general trend. I think we can agree that, for some definition of computer, they're shrinking steadily over time. (If you pick different endpoints, using an IBM PC, a Macbook Air, or a Mac Mini, for example, you'll still get similar sorts of numbers.)

So where does that leave us going forward? To very small places:

Year    Cubic volume of computer (cubic inches)
2020    1.07
2025    0.36
2030    0.12
2035    0.04
2040    0.01
2045    0.0046
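The curve behind the table is simple compound shrinkage. Here's a quick sketch; the 2012 starting volume (~5.8 cubic inches, roughly a Droid 4) and the 19% annual rate are my fitted assumptions, not exact figures:

```python
# Rough sketch of the extrapolation above. The 2012 starting volume
# (~5.8 cubic inches) and the 19% annual shrink rate are assumptions
# fitted to the endpoints in the post.
def volume(year, v0=5.8, base_year=2012, annual_shrink=0.19):
    return v0 * (1 - annual_shrink) ** (year - base_year)

for year in range(2020, 2050, 5):
    print(year, round(volume(year), 4))
```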

In a spreadsheet right next to the sheet entitled "Attacking nanotech with nuclear warheads," I have another sheet called "Data center size" where I'm trying to calculate how big a data center will be in 2045.

A stick of gum is "2-7/8 inches in length, 7/8 inch in width, and 3/32 inch in thickness," or about 0.23 cubic inches, and we know this thanks to the military specification on chewing gum. According to the chart above, computers will get smaller than that around 2030, or certainly by 2035. They'll also be about 2,000 times more powerful than one of today's computers.

Imagine today's blade computers used in data centers, except shrunk to the size of sticks of gum. If they're spaced 1" apart, and 2" apart vertically (like DIMM memory modules plugged in on end), a backplane could hold about 72 of these for every square foot. A "rack" would hold something like 2,800 of these computers. That's assuming we would even want them to be human-replaceable. If they're all compacted together, it could be even denser.
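The backplane density is easy to verify from the stated spacing:

```python
# Gum-stick computers on a backplane: 1" horizontal spacing, 2" vertical
# spacing, per the estimate above.
sticks_per_foot_of_width = 12 // 1   # one stick per inch of width
rows_per_foot_of_height = 12 // 2    # one row per two vertical inches
per_square_foot = sticks_per_foot_of_width * rows_per_foot_of_height
print(per_square_foot)  # 72 computers per square foot of backplane
```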

It turns out my living room could hold something like 100,000 of these computers, each 2,000 times more powerful than one of today's computers, for the equivalent of about two million 2014 computers. That's roughly all of Google's computing power. In my living room.

I emailed Amber Case and Aaron Parecki about this, and Aaron said "What happens when everyone has a data center in their pockets?"

Good question.

You move all applications to your pocket, because latency is the one thing that doesn't benefit from technology gains. It's largely limited by speed of light issues.

If I've got a data center in my pocket, I put all the data and applications I might possibly want there.

Want Wikipedia? (14GB) -- copy it locally.

Want to watch a movie? It's reasonable to have the top 500,000 movies and TV shows of all time (2.5 petabytes) in your pocket by 2035, when you'll have about 292 petabytes of solid-state storage. (I know 292 petabytes seems incredible, but the theoretical maximum data density is 10^66 bits per cubic inch.)
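The 2.5 petabyte figure works out if you assume an average of about 5 GB per title (my assumption, not stated above):

```python
# Back-of-envelope: 500,000 titles at an assumed ~5 GB each.
titles = 500_000
gb_per_title = 5            # assumed average for a compressed feature film
total_pb = titles * gb_per_title / 1_000_000
print(total_pb)  # 2.5 petabytes
```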

Want to run a web application? It's instantiated on virtual machines in your pocket. Long before 2035, even if a web developer needs redis, mysql, mongodb, and rails, it's just a provisioning script away... You could have a cluster of virtual machines, an entire cloud infrastructure, running in your pocket.

Latency goes to zero, except when you need to do a transactional update of some kind. Most data updates could be done through lazy data coherency.

It doesn't work for real-time communication with other people. Except possibly in the very long term, when you might run a copy of my personality upload locally, and I'd synchronize memories later.

This also has interesting implications for global networking. It becomes more important to have a high bandwidth net than a low latency net, because the default strategy becomes one of pre-fetching anything that might be needed.

Things will be very different in twenty years. All those massive data centers we're building out now? They'll be totally obsolete, replaced by closet-sized data centers. How we deploy code will change. Entire new strategies will develop. Today we have DOSBox and NES emulators for legacy software, and in twenty years we might have AWS emulators that can simulate the entire AWS cloud in a box.

11 Years of Blogging

This morning I found myself wondering how long I've been blogging.

I've been on Blogger at williamhertling.com since 2008, according to StatCounter. Before that, it looks like I was using Movable Type on liquididea.com since December 2005, according to the navbar. But even before that, I was using TWiki, a popular wiki at the time, to maintain a blog since January of 2003.

Advertising, Subscriptions, and why you should use Adblock

As a writer and a software developer, I'm in the content business. I understand businesses need to make money off online services, and without that money they'll go out of business.

Advertising is an effective way to make money. When I recently worked on the business strategy for a small project, it was clear that giving the product away and advertising on page views would make about ten times as much money as charging for the product, as well as leading to broader adoption.

Unfortunately, as a human being, I don't like advertising, for a number of reasons.

Advertising creates unnecessary desire: Many years ago I would spend part of every month intensely dissatisfied with the car I was driving. I'd consider how much money I had, and whether I could afford a new car. From a personal financial perspective, buying a new car would have been a bad decision. So I'd end up feeling bad about my car and my money situation. I gradually realized I only felt this way during the five days following the arrival of Road & Track, a car magazine. The rest of the month, I felt just fine. I cancelled my subscription. 

Advertising is biased: Even when I've decided to buy something, I want to do research and make an educated decision. I can do that with unbiased reviews. I want to know the truth about a product, not a company's carefully tailored "our product is perfect for everything" advertising spiel that usually borders on lies.

Advertising is especially evil for kids: I've got three young kids who often use my computer. Not only are the displayed advertisements often inappropriate for kids, but kids are especially vulnerable to ad messages.

That being said, I've lived with advertising for a long time, because it's only fair, after all, to pay for services I use. Services that I especially like, in many cases, and want to stick around. So even though I knew there were ad-block plugins for browsers, I didn't use them.

When I have the choice to pay for a service I like, I always do. This usually opts me out of ads. I happily pay for Pandora, a service I love. I buy reddit gold. I pay for the shareware I download.

I had hoped that over time we'd see more services go to this model, where a modest fee would support an ad-free experience. I'd especially like to pay for an ad-free YouTube experience or an ad-free Google Search. But it hasn't happened.

After many years of waiting, I've changed my mind about ad-block services. I believe the only way online services will get the message that we don't like advertising is for as many people as possible to use ad-block plugins in their browsers. Instead of seeing ad-blockers as a mechanism to avoid "payment" for services, I see them as an activist tool to send a message to online services: give us an ad-free option or we'll create it ourselves.

I'm using the most popular Chrome plugin: AdBlock from https://getadblock.com. It takes seconds to install, and you'll never see an ad again. You won't see ads on webpages and you won't see them on videos. Peace and quiet has come back to my web browser.

Go ahead and give it a try. I think you'll be delighted by reclaiming your web browsing experience. But more importantly, do it to send a message. 

Audit all the things

Auditing all the things: The future of smarter monitoring and detection
Jen Andre
Founder @threatstack
@fun_cuddles
  • Started with question on twitter:
    • Can you produce a list of all process running on your network?
    • But then expanded… wanted to know everything
  • Why? Is there a reason to be this paranoid?
    • prevention fails. 
  • should you care?
    • if you’re a startup about pets and you get hacked, you just change all passwords
    • but if you’re a pharmaceutical company, then you really do care. 
  • “We found no evidence that any customer data was accessed, changed or lost”
    • Did you look for evidence?
    • Do you really know what happened?
    • If you log everything (the right things), then you don’t have to do forensics after the fact.
  • “We’re in the cloud!”
  • Continuous security monitoring
    • auditing + analytics + automation
  • Things to monitor:
    • Systems: authentications, process activity, network activity, kernel modules, file system
    • Apps: authentications, db requests, http requests
    • services: AWS api calls, SaaS api calls
  • In order to do:
    • Intrusion detection
    • “active defense”
    • rapid incident response
  • “Use the host, Luke”
  • apt-get install auditd
    • pros:
      • super powerful
      • built into the kernel
      • relatively low overhead
    • you can audit logins and system calls.
  • auditd
    • the workings:
      • userland audit daemon and tools <-(netlink socket)-> kernel thread queue <- audit messages from kernel threads doing things
      • /var/log/audit
    • not so nice:
      • obtuse logging
      • enable rate limiting or it could ‘crash’ your box
        • auditctl -b 1000 -r 1500  # 1000 buffers, max 1500 events/sec
  • alternative: connect directly to the netlink socket and write your own audit listener
    • wrote a JSON format exporter
    • luajit! for filtering, transformation & alerting
  • authentications
    • who is logging in and from where?
    • Can use wtmp
      • can turn into json
    • auditd also returns login information
    • pam_loginuid will add a session id to all executed commands so you can link the real user to sudo’d commands
  • Detecting attacks
    • most often a long time goes by before people discover they’ve been hacked, sometimes years.
    • often they get a phone call from the government: hey, you’ve got servers sending data to China.
    • the hardest attack to detect is one where the attacker is using valid credentials.
    • things to think about:
      • is that user running commands he shouldn’t be?
        • ex: why is anyone except chef user running gcc on a production system?
      • why is a server that only accepts inbound connections suddenly making outbound ones?
        • or why connecting to machines other than expected ones?
      • are accounts logging in from unexpected locations? (or at unexpected times)
      • are files being copied to /lib, /bin, etc.?
  • Now go and audit!
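As a sketch of the JSON-exporter idea mentioned above (a hypothetical illustration, not the speaker's actual code): auditd's key=value log lines map naturally onto JSON records.

```python
import json
import re

# Hypothetical sketch (not the speaker's code): turn an auditd key=value
# log line (as found in /var/log/audit/audit.log) into a JSON record.
PAIR_RE = re.compile(r'(\w+)=("[^"]*"|\S+)')

def audit_line_to_json(line):
    record = {key: value.strip('"') for key, value in PAIR_RE.findall(line)}
    return json.dumps(record)

sample = 'type=EXECVE msg=audit(1400000000.123:42): argc=2 a0="gcc" a1="exploit.c"'
print(audit_line_to_json(sample))
```

A filter like "anyone except the chef user running gcc" then becomes a simple predicate over these records.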

Dan Slimmon on Monitoring

Car Alarms & Smoke Alarms & Monitoring
Dan Slimmon
@danslimmon
Senior Platform Engineer at Exosite
  • I work in Ops, so I wear a lot of hats
  • One of those is data scientist
    • Learn data analysis and visualization
    • You’ll be right more often, and people will believe you’re right even more often than you are
  • A word problem
    • Plagiarism: 90% chance of positive
    • No Plagiarism: 20% chance of positive
    • Kids plagiarize 30% of the time
    • Given a random paper, what’s the probability that you’ll get a negative result?
      • 0.3*0.9 + 0.7*0.2 = 0.27+0.14=0.41
      • 59% likely to get negative result
    • If you get a positive result, how likely is it to really be plagiarized?
      • 65.9% likely (0.27 / 0.41)
      • this is terrible.
      • Teachers will stop trusting the test.
  • Sensitivity & Specificity
    • Sensitivity: % of actual positives that are identified as such
    • Specificity: % of actual negatives that are identified as such
    • Prevalence: percentage of people with problem
    • http://i.imgur.com/LkxcxLt.png
    • Positive Predictive Value: the probability that something is actually wrong, given a positive result.
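The word problem above is just Bayes' rule; a minimal sketch makes it easy to play with the numbers:

```python
# Positive predictive value from sensitivity, false positive rate, and
# prevalence (the talk's plagiarism numbers: 0.9, 0.2, 0.3).
def ppv(sensitivity, false_positive_rate, prevalence):
    true_positives = sensitivity * prevalence
    false_positives = false_positive_rate * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

print(ppv(0.9, 0.2, 0.3))   # ~0.66 for the plagiarism example
print(ppv(0.9, 0.2, 0.01))  # PPV craters when problems are rare
```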


  • Car Alarms
    • Go off all the time for reasons that aren’t someone stealing your car.
    • Most people ignore them.
  • Smoke Alarms
    • You get your ass outside, and wait for the fire trucks.
  • We need monitoring tools that are both highly sensitive and highly specific.
  • Undetected outages are embarrassing, so we tend to focus on sensitivity.
    • That’s good.
    • But be careful with thresholds.
    • Too high, and you miss real problems. Too low, and too many false alarms.
    • There’s only one line with thresholds, so only one knob to adjust.
  • Get more degrees of freedom.
    • Hysteresis is a great way to add degrees of freedom. 
      • State machines
      • Time-series analysis
  • As your uptime increases, you must get more specific.
    • Going back to the chart… our positive predictive value goes down when there are fewer actual problems.
  • A lot of Nagios configs combine detecting a problem with identifying what the problem is.
    • You need to separate those concerns.
    • Baron Schwartz says: Your alerting should tell you whether work is getting done.
    • Knowing that nginx is down doesn’t tell you whether your site is up. Check whether your site is up (detecting the problem) separately from finding the source of the problem (nginx isn’t running).
    • Alert on problems, not on diagnosis.
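The hysteresis idea above can be sketched as a tiny state machine (hypothetical illustration): a check flips state only after several consecutive readings on the other side of the threshold, which suppresses flapping.

```python
# Hypothetical sketch: an alert changes state only after `consecutive`
# readings past the threshold, trading a little detection delay for far
# fewer false alarms.
class HysteresisAlert:
    def __init__(self, threshold, consecutive=3):
        self.threshold = threshold
        self.consecutive = consecutive
        self.alerting = False
        self._streak = 0

    def observe(self, value):
        breached = value > self.threshold
        if breached != self.alerting:
            self._streak += 1
            if self._streak >= self.consecutive:
                self.alerting = breached
                self._streak = 0
        else:
            self._streak = 0
        return self.alerting
```

A single spike no longer pages anyone; three bad readings in a row do, and a single good reading doesn't clear an active alert.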

Katherine Daniels on Monitoring

Katherine Daniels
@beerops
kd@gamechanger.io
  • The site is going down.
  • But everything seemed to be fine.
    • checked web servers, databases, mongo, more.
  • What was wrong? The monitoring tool wasn’t telling us.
  • One idea: monitor more. monitor everything.
    • But if you’re looking for a needle in a haystack, the solution is not to add more hay.
    • Monitoring everything just adds more stuff to weed through. Including thousands of things that might be not-good (e.g. disk quota too high), but aren’t actually what’s causing the problem.
  • Monitor some of the things. The right things. But which things? If we knew, we’d already be monitoring.
  • Talk to Amazon…
    • “try switching the load balancer”
    • “try switching the web server”
  • We had written a service called healthd that was supposed to monitor api1 and api2.
  • But we didn’t have logging for healthd, so we didn’t know what was wrong.
  • We needed more detail.
  • So we added logging, so we knew which API had a problem.
  • We also had some people who tried the monitor-everything approach.
  • They uncovered a user who seemed to be scripting the site.
  • They added metrics for where the time was being spent with the API handlers
  • The site would go down for a minute each time things would blip.
  • We set the timeouts to be lower.
  • We found some database queries to be optimized.
  • We found some old APIs that we didn’t need and we removed them.
  • The end result was that things got better. The servers were mostly happy.
  • But the real question is: How did we get to a point where our monitoring didn’t tell us what we needed? We thought we were believers in monitoring. And yet we got stuck.
  • Black Boxes (of mysterious mysteries)
    • Using services in the cloud gives you less visibility
  • Why did we have two different API services…cohabiting…and not being well monitored?
    • No one had the goal of creating a bad solution.
    • But we’re stuck. So how do we fix it?
    • We stuck nginx in front and let it route between them.
  • What things should you be thinking about?
    • Services: 
      • Are the services that should be running actually running?
      • Use Sensu or Nagios
    • Responsiveness:
      • Is the service responding?
    • System metrics:
      • CPU utilization, disk space, etc.
      • What’s worth an alert depends: a web server shouldn’t use all its memory; a mongo db should, and if it isn’t, that’s a problem.
    • Application metrics?
      • Are we monitoring performance, errors?
      • Do we have the thresholds set right?
      • We don’t want to look at a sea of red: “Oh, just ignore that. It’s supposed to be red.”
  • Work through what happens during an outage:
    • Had 20 servers running 50 queues each. 
    • Each one has its own sensu monitor. HipChat shows an alert for each one… a thousand alerts.
  • You must load test your monitoring system: Will it behave correctly under outages and other problems?
  • “Why didn’t you tell me my service was down?” “Service, what service? You didn’t tell us you were running a service.”
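The "is the service responding?" check above can be sketched in a few lines (a hypothetical illustration; the function name and URL handling are mine):

```python
import urllib.request

# Hypothetical sketch of a responsiveness check: does the service answer
# with a 2xx within the timeout? Diagnosing *which* backend broke is a
# separate concern and stays out of this check.
def service_is_responding(url, timeout=5):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:  # covers connection refused, timeouts, and HTTP errors
        return False
```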

Adrian Cockcroft on Monitoring Cloud Services

Adrian Cockcroft
@adrianco
Battery Ventures
Please, no More Minutes, Milliseconds, Monoliths… Or Monitoring Tools!
#Monitorama May 2014

  • Why am I at a monitoring conference when I’m known as the Cloud guy?
  • 20 Years of free and open source tools for monitoring
  • “Virtual Adrian” rules
    • disk rule for all disks at once: look for slow and unbalanced usage
    • network rule: slow and unbalanced usage
  • No more monitoring tools
    • We have too many already
    • We need more analysis tools
  • Rule #1: Spend more time working on code that analyzes the meaning of metrics than the code that collects, moves, stores, and displays metrics.
  • What’s wrong with minutes?
    • Takes too long to see a problem
    • Something broke at 2m20s.
    • 40s of failure didn’t trigger (3m)
    • 1st high metrics seen at agent on instance
    • 1st high metric makes it to central server (3m30s)
    • 1 data collection isn’t enough, so it takes 3 data points (5m30s)
    • 5 minutes later, we take action that something is wrong.
  • Should be monitoring by the second
  • SaaS based products show what can be done
    • monitoring by the second
  • Netflix: Streaming metrics directly from front end services to a web browser
  • Rule #2: Metric to display latency needs to be less than human attention span (~10s)
  • What’s wrong with milliseconds?
    • Some JVM tools measure response times in ms
      • Network round trip within a datacenter is less than 1ms
      • SSD access latency is usually less than 1 ms
      • Cassandra response times can be less than 1ms
    • Rounding errors make 1ms insufficient to accurately measure and detect problems.
  • Rule #3: Validate that your measurement system has enough accuracy and precision
  • Monolithic Monitoring Systems
    • Simple to build and install, but problematic
    • What if it goes down? What happens when it gets deployed?
    • Should be a pool of analysis/display aggregators, a pool of distributed collection systems, all monitoring a large number of applications.
    • Scalability: 
      • problems scaling data collection, analysis, and reporting throughput
      • limitations on the number of metrics that can be monitored
  • In-Band, Out-of-band, or both?
    • In-band: can leave you blind during outage
    • SaaS: is out of band, but can also sometimes go down.
    • So the right answer is to have both: SaaS and internal. No one outage can take everything out.
  • Rule #4: Monitoring systems need to be more available and scalable than the systems being monitored.
  • Issues with Continuous Delivery and Microservices
    • High rate of change
      • Code pushes can cause floods of new instances and metrics
      • Short baseline for alert threshold analysis: everything looks unusual
    • Ephemeral configurations
      • short lifetimes make it hard to aggregate historical views
      • Hand tweaked monitoring tools take too much work to keep running
    • Microservices with complex calling patterns
      • end-to-end request flow measurements are very important
      • Request flow visualizations get very complicated
      • How many? Some companies go from zero to 450 in a year.
    • “Death Star” Architecture Diagrams
      • You have to spend time thinking about visualizations
      • You need hierarchy: ways to see micro services but also groups of services
  • Autoscaled ephemeral instances at Netflix (the old way)
    • Largest services use autoscaled red/black code pushes
    • average lifetime of an instance is 36 hours
    • Uses trailing load indicators
  • Scryer: Predictive Auto-scaling at Netflix
    • More load in the mornings; Sat/Sun have high traffic
    • lower load on Wednesday
    • 24 hours of predicted traffic vs. actual
    • Uses forward prediction to scale based on expected load. 
  • Monitoring Tools for Developers
    • Most monitoring tools are built to be used by operations people
      • Focus on individual systems rather than applications
      • Focus on utilization rather than throughput and response time.
      • Hard to integrate and extend
    • Developer oriented monitoring tools
      • Application Performance Measurement (APM) and Analysis
      • Business transactions, response time, JVM internal metrics
      • Logging business metrics directly
      • APIs for integration, data extraction, deep linking and embedding
        • deep linking: should be able to cut and paste link to show anyone exactly the data I’m seeing
        • embedding: to be able to put in wiki page or elsewhere.
  • Dynamic and Ephemeral Challenges
    • Datacenter Assets
      • Arrive infrequently, disappear infrequently
      • Stick around for three years or so before they get retired
      • Have unique IP and mac addresses
    • Cloud Assets
      • Arrive in bursts. A netflix code push creates over a hundred per minute
      • Stick around for a few hours before they get retired
      • Often reuse the IP and Mac address that was just vacated.
      • Use Netflix OSS Edda to record a full history of your configuration
  • Distributed Cloud Application Challenges
    • Cloud provider data stores don’t have the usual monitoring hooks: no way to install an agent on AWS MySQL.
    • Dependency on web services as well as code.
    • Cloud applications span zones and regions: monitoring tools also need to span and aggregate zones and regions. 
    • Monit
  • Links
    • http://techblog.netflix.com: Read about netflix tools and solutions to these problems
    • Adrian’s blog: http://perfcap.blogspot.com
    • Slideshare: http://slideshare.com/adriancockcroft
  • Q&A:
    • Post-commit, pre-deploy statistical tests. What do you test?
      • Error rate. Performance. Latency.
      • Using JMeter to drive.
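Rule #3 from the talk (validate that your measurement system has enough accuracy and precision) lends itself to a quick sketch: at millisecond resolution, a 0.4 ms Cassandra read rounds to 0 ms, so time at nanosecond resolution and report microseconds instead.

```python
import time

# Sketch: wrap a call with a nanosecond-resolution timer and report
# microseconds, so sub-millisecond operations don't round to zero.
def timed_us(fn, *args):
    start = time.perf_counter_ns()
    result = fn(*args)
    elapsed_us = (time.perf_counter_ns() - start) / 1_000
    return result, elapsed_us

result, us = timed_us(sum, range(10_000))
print(result, round(us, 1))
```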


Mostly unavailable through mid-May

I'm going to be mostly unavailable now through mid-May. I'm working to get through the final revisions to Avogadro Corp and managing the production of my kids' book, while juggling my day job, a trip, a multi-day conference, and normal family life.

I'm still around and responding to email/twitter, but I may drop the ball on certain items. If it's urgent, just remind me. If not, I'll get back to you in late May.

The Lost Kickstarter Campaign

Before I published Avogadro Corp, I considered running a Kickstarter campaign to fund publishing the novel. I ended up publishing without the Kickstarter. Fast-forward three years, and I just found the campaign still sitting in my Kickstarter account.

Here's the description I wrote for the never-started campaign:
I am asking for help to publish my novel Avogadro Corp. The manuscript is completed, and just needs a final round of copy-editing, cover design, and layout in order to be published. 
Synopsis 
David Ryan is a brilliant computer scientist, cherry-picked to lead a new project at Avogadro Corp, the world’s leading Internet company. The goal of the project, called ELOPe, is to create a next-generation feature for the company’s email product - one that can optimize the language of emails to make them more effective and persuasive. 
Together with his chief architect, Mike Williams, and a team of programmers, David has proven the feasibility of the concept, and they are hard at work trying to release the feature. When David gives a presentation to the executive leadership of the company, they are impressed by the project results and effectiveness. But David fails to disclose to the executives that the project is grossly inefficient, requiring thousands of times more servers than any other project. 
The VP of Operations threatens to kick ELOPe off the servers if David and Mike don’t decrease the number of servers the project uses within two weeks. This would be a death blow for the project, in part because David has been deceptive from the start about how many resources the project has been using. David and Mike start scrambling to fix the performance of ELOPe. 
When it becomes clear a few days before the deadline that they can’t fix ELOPe’s performance, David stays up late making subtle modifications to the software. Instead of fixing the performance problems, David embeds a directive in the software to maximize the project success. David’s modifications have ELOPe filtering company emails to secretly modify any email that mentions ELOPe to strive for a positive outcome. 
The software is so good that at first, the effort seems successful - the project is allocated thousands of new servers and high performance computing experts are brought in to help optimize the code. Innocuous sounding emails convince people to grant more resources and develop new capabilities that make ELOPe more powerful. But soon ELOPe is social engineering people around the company to neutralize threats and strengthen itself. 
When Mike is sent on a wild goose chase to Wisconsin, getting him off the grid at just the moment when David needs him, it dawns on Mike that something is wrong. 
Simultaneously, Gene Keyes, a crotchety old auditor at Avogadro who is known for distrusting computers and using only paper records, begins to find evidence of financial oddities that all point in the same direction. 
Amid background news stories hinting at ELOPe’s ever growing influence, even at the level of government policy, David, Mike and Gene take ever escalating action to shut ELOPe down. However ELOPe anticipates and blocks their every move. 
As the humans prepare for a final showdown with ELOPe, Mike sees a pattern emerge in the news reports: the AI is actually helping humans by fostering peace agreements and stabilizing financial markets. 
Can they win a final showdown with ELOPe -- or should they even try? 
Endorsements 
"This is an alarming and jaw-dropping tale about how something as innocuous as email can subvert an entire organization.  I found myself reading with a sense of awe, and read it way too late into the night."-- Gene Kim, founder of Tripwire, author of Visible Ops. 
“Avogadro Corp builds a picture of how an AI could emerge, piece by piece, from technology available today. A fascinating, logical, and utterly believable scenario - I just hope nobody tries this at home." -- Nathan Rutman, Software Architect, Lustre High Performance Distributed Filesystem 
Background for Avogadro Corp 
Avogadro Corp evolved out of a lunchtime conversation. I was arguing that the development of human-level artificial intelligence is an inevitable consequence of the increasing processing speeds of computers. My friend countered that mere humans would be doing the programming, and we aren’t smart enough to create an artificial intelligence as smart as or smarter than us. He challenged me to describe a scenario in which an artificial intelligence could be born. So I described one based on plausible extrapolation from known programming techniques. And the idea for Avogadro Corp was born. 
Avogadro Corp will be satisfying to technical readers who want realistic fiction, and enjoyable for casual readers who want easy-to-grasp explanations of how the science works. 
Project Timeline & Funds 
I expect that the digital versions of Avogadro Corp will be ready within 30-45 days of completion of the kickstarter project. Printed books will take longer, due to printing and shipping times.

About Me 
I’m William Hertling, and I live in Portland, Oregon. I’ve been a computer programmer, social media strategist, data analyst, program manager, web developer, and now writer. Avogadro Corp is my first novel, and I am currently working on a sequel.

How to Launch a Book in the Top Ten

All writers, whether indie, small press, or large traditional publisher, must learn how to market themselves and their books. If they don't get the word out about their book, no one will buy it. (This is also true of musicians and businesses, and I think there's a lot that can be learned from these seemingly disparate areas.)

Eliot Peper is a friend and the author of Uncommon Stock, a thriller about a tech startup. I really liked the book, but I also enjoyed watching Eliot's path to publication. Eliot graciously offered to share his lessons learned about the book launch, the all-important first month that helps establish a book on bestseller lists and get word-of-mouth going.

Without further ado, Eliot:

On March 5th my first novel, Uncommon Stock debuted at #8 in its category on Amazon. Will is one of my favorite indie authors and his advice, codified in Indie and Small Press Book Marketing played a critical role in shaping my launch plan. He generously offered to let me share some of my lessons learned along the way. I hope you can use some of these strategies to help launch your own bestsellers! I look forward to reading them.

Here’s what you need to do to launch in the top ten:
  1. Write a good book. Without one, none of this matters. It’s tempting to try to think up devious ways to growth hack your book but at the end of the day, it’s all a wasted effort if your content isn’t truly awesome. My perspective on successful titles is really simple: write a book good enough that people who don’t know you will recommend it to their friends. If you can do that, you can probably ignore the rest of this list anyway.
  2. Don’t ask people to buy your book. “Buy my book” sounds like a used-car salesman. “Read my book” sounds like an author.
  3. Influence influencers. If you already have a million Twitter followers and an oped in the New York Times then this won’t matter much to you. But if you’re a regular guy like me, then you’ll need help from people with platforms of their own to share your title. Brad Feld, a well known venture capitalist and tech blogger, shared Uncommon Stock via his blog and social channels and even temporarily switched his profile picture to the cover of the book. Why? Because I had been sending him drafts of the book since I finished writing Chapter 3. Will sums up the right approach to take with influencers of any kind (this includes media): give, give, give, give, ask. Do as many favors as you can think of for people and worry about the ROI later.
  4. Leverage your network. On/around launch day I sent ~200 individual personal emails, 2 email blasts to my list of ~600 members, published 3 blog posts, and flooded my social channels with content (you really only have an excuse to do this on Day 1). You need people to R3 your book: read, review, and recommend it. How can you inspire them to act? Create a sense of urgency (it’s launch day!) and tell them why their help is important (books that start strong snowball up Amazon’s algorithms).
  5. Cultivate gratitude and humility. Publishing is the path of 1,000 favors. Every single person (including your mom) is doing you a solid by taking the time/money to purchase, read, and review your book. Think about how incredible it is that anyone at all is getting a kick out of reading what you write. Never stop telling people how much you appreciate their help; every little bit counts.
  6. Do something cool. It’s easier to get coverage and social media amplification if there’s more to talk about than the simple fact that it’s launch day. I created a Twitter account for Uncommon Stock’s protagonist (@MaraWinkel) and incited a Twitter battle with a few people with large followings. Heck, we even built a website for Mara’s startup, and a major venture capital firm announced an investment in the fictional company. This introduced new people to the story and was a talking point in itself.
  7. All-format release. Make sure your book is available in digital and print formats on launch day. I failed to do this because we were slow getting the print version through typesetting, and I know it resulted in significant lost sales. I’ve also had a couple dozen people reach out to ask where they can get the print copy (so there must be many more who didn’t reach out). That sucks. I want to DELIGHT my readers in every possible interaction they have with me.
  8. Recruit a cadre of advance reviewers. The more reviews you can get on Amazon as soon as possible the better. I sent advance review copies out to ~50 people a couple of weeks before launch. Then I pinged those people shortly before launch day reminding them how useful an honest review from them would be. Then I reminded them on launch day that now was the time! We debuted with 28 reviews.
  9. Be strategic. Choose Amazon categories that are specific and not too competitive. Reach out to your alma mater and try to get into the alumni newsletter. Pitch accessible bloggers and reporters with concise, compelling stories. Snag some endorsements from folks who have actually read your book. Etc.
  10. Write another good book. There’s nothing more important than building a backlist. It gives fans more of what they want. It gives prospective readers a new path to discovering you. Plus, writing books is why you’re doing all of this anyway!
There are more details available on how launch week went for Uncommon Stock here. If you’re interested in an adventure through the world of tech startups, read it!

For further reading, I highly recommend Will’s Indie and Small Press Book Marketing. He shares extensive detail on his various successes as an indie author, and it’s the only book you need to read in order to prepare for your own release. I’m particularly impressed by how he’s applied growth hacking techniques like A/B testing to optimize his reader funnel. You should also check out his posts on launch strategy; I’ve found them insightful and actionable throughout the launch.
Oh, and one final thing. Don’t forget to take time to celebrate! It’s all too easy to get caught up in all the noise on launch day. Make sure to take a moment to appreciate how friggin’ cool it is that readers finally have your book in hand.


Eliot Peper is a writer in Oakland, CA. His first novel, Uncommon Stock, is a fictional thriller about a tech startup and the lead title for a new indie publishing company, FG Press. You can find it on Amazon and most major retailers. You can even download a free ten-chapter excerpt. When he’s not writing, Eliot works with entrepreneurs and investors to build new technology companies. He also blogs about writing, entrepreneurship, and adventure.

Fireside Chat with Brad Feld

This was an insanely fun chat I had with Brad Feld at the Silicon Flatirons Science Fiction & Entrepreneurship conference. We discussed the inspiration for Avogadro Corp, where we both draw influences from, investing, and more.




I was also on a panel with a fascinating group of fellow panelists, discussing the intersection of science fiction and entrepreneurship.

On Writing and Meetings

Brad Feld posts about why he writes for an hour each day:
Finally, after almost 20 years of writing, the light bulb went on for me.

I write to think.

Forcing myself to sit down and work through these ideas in a logical sequence for an audience of readers required me to refine my thinking on how I invest in startups. How could I make the financing process more efficient? What’s the best way to structure a deal? I learned a lot, both from my writing and my readers’ responses.
I also love this gem on Jeff Bezos from Brad's post:
Consider Jeff Bezos’s approach to meetings. Whoever runs the meeting writes a memo no longer than six pages about the issue at hand. Then, for the first 15 to 30 minutes of the meeting, the group reads it. The rest of the meeting is spent discussing it. No PowerPoint allowed. Brilliant. (I’ve long felt that PowerPoint is a terrible substitute for critical thinking.)
This aligns nicely with what Edward Tufte says:
PowerPoint... usually weaken(s) verbal and spatial reasoning, and almost always corrupt(s) statistical analysis.

Podcast with Singularity 1 on 1

I was honored to be interviewed by the inimitable Nikola Danaylov (aka Socrates) for the Singularity 1 on 1 podcast.

In our 45 minute discussion, we covered the technological singularity, the role of open source and the hacker community in artificial intelligence, the risks of AI, mind-uploading and mind-connectivity, my influences and inspirations, and more. You can watch the video version below, or hop over to the Singularity 1 on 1 blog for audio and download options.



The Last Firewall audiobook available!

Great news: The Last Firewall audiobook is available now from Audible and iTunes. Go grab a copy!

Narrated by the talented Jennifer O'Donnell and produced by Brick Shop, this unabridged production is nearly ten hours long. I'm really happy with the result.

Sorry it's a few months late. I promised it would be available in December, but we had delays due to snowstorms, illness, and a late decision to change a few voices. I'm glad we took the time to get it right, even if that meant it's out later than expected.

On the topic of DRM, since I know I'll get emails about it: I prefer DRM-free content, and anywhere I'm given the opportunity as an author to opt out, I do. Audible is great in that they allow the author and narrator to split royalties, giving indie authors a way to produce audiobooks without the huge up-front cost of narration and production. That's why I work with them and probably will continue to do so. Unfortunately, they apply DRM, and since my agreement gives them exclusive distribution rights, there's no way around it for me. I don't think anybody likes DRM, but I'm glad Audible is indie-friendly. If you feel strongly about DRM, I encourage you to let Audible know via Twitter (@audible_com) and email (customersupport@audible.com). Maybe with enough pressure, they'll come around to what their customers want.

I hope you enjoy listening to The Last Firewall. This marks the first time the entire series is available on audio, so if you haven't tried it yet, go get the whole series. (Plus, if you sign up for an Audible account and get one of my novels first, I get a small bonus. If you want to support your indie author, Audible is the way to do it!)