Hype isn’t a use case

A few months ago, a recruiter sent me a LinkedIn message with a link to a recruitment video. I’ve been seeing more of these lately, but this one was particularly impressive in how quickly it turned me off the company.

It described their idea of a tech utopia, a place where developers could use whatever technology they desired.

“Read a blog post about something new in the morning. Deploy it to prod in the afternoon,” said a developer vibrating from over-caffeination.

“Pick the tools and languages you want. Every team is using something different.”

“We’re on the cutting edge, innovating like crazy, using tech you won’t see anywhere else.”

Everything about the culture they described seemed terrible.  Sure, the people looked happy and engaged, but what the video communicated to me was “This is a portal into a chaotic hellscape of pain and suffering. God help their Ops team.”

These were kids playing with toys, not developers. Look closely, and you’ll see one or two of these folks in most dev groups. They pick their tools based on the latest blog post they’ve read, chasing technologies like a brain-damaged squirrel.

They’re either not in the on-call rotation or are OK with not having a life outside of work. The things they build are always fragile or in some way broken.

It’s a new package manager!

Corralled by a good manager, these guys (and they are always guys) can be solid members of the team who do, in fact, drive innovation and get people thinking about new ways of doing things. Left to their own devices, though, as they often are, they’ll soon have the building on fire. Granted, it may be a spectacular fire.

Sometimes, this shiny-penny chasing is actively encouraged (as at the business in the recruitment video). These are places where leadership and staff have confused tooling innovation with business innovation. They have a culture where no one ever asks “Will rewriting our entire app in Haskell give us a competitive advantage?” (The answer to this is always no.)

Nor do they ask:

  • Who else is using this?
  • Where would we get support?
  • Is it well documented?
  • Does this actually do what we think it will do?

And so on.

I’m not arguing that people shouldn’t get excited about new technologies, just that there needs to be some prudence when picking tools. It’s better to innovate in the business, in the product and the processes, than in the tooling.

For the business to succeed (and for you to have a job long-term), it’s better to use proven technologies that have had time to bake, to develop community support, and to let the crazy early adopters figure out the production problems. “Well-documented and supported” is way more important than shiny.

So is “right”. Established tools with a lot of hype that don’t fit the problem being solved are just as dangerous as the newborn ones.

If you’re on a team with or managing some tinkerers, ask questions and help guide them towards good decisions. Make them think through scenarios like “I just read about MongoDB and it’s awesome. We should use Mongo!” and arrive at “All of our data is relational. We should NOT use Mongo.”

Help them think like a carpenter. Using an old hammer doesn’t keep you from building an awesome house. In fact, having tools you know and can trust will let you take risks elsewhere and push boundaries you couldn’t reach if all your tools were in 15 different variations of “on fire”.

Play with new tech. I’d even argue that there needs to be sprint time dedicated to testing and playing with new stuff. Keep pushing forward and learning, but for what you put in production, pick the boring stuff, the tech you know you can count on.

Resources like Thoughtworks’ Tech Radar are really useful for figuring out which technologies are in the sweet spot of new enough to be relevant but established enough that you won’t be out on an island. If you’re in the enterprise space, resources like Gartner are also handy.

This is the stuff that separates kids from adults, and the successful from the burnt-out husks. Choosing technologies responsibly isn’t sexy or exciting, but it’s wise, which is discounted far too often. It also shows kindness to your teammates and co-workers – the people who are ultimately going to have to deal with the downstream effects of the decisions you make.


TIL: How to exclude specific Boto errors

Some background:

I previously wrote a Python Lambda function that copies AWS RDS snapshots to a different region. This has been working for months but recently started throwing this puzzling error:

An error occurred (DBSnapshotAlreadyExists) when calling the CopyDBSnapshot operation: Cannot create the snapshot because a snapshot with the identifier copy-snap-of-TIMESTAMP-DB_NAME already exists.

Thinking this might be due to some timestamp shenanigans, I looked at the CloudWatch Events trigger for the Lambda and saw that there were two triggers instead of the original one I had set up. Both were scheduled for the same time. I deleted the new one and waited until the next day to see if the error re-occurred, which it did.

Looking through the CloudWatch logs, even though the second trigger was gone, the Lambda was still executing twice. I’ve filed a support ticket with AWS, but in the meantime I needed to silence the false positives to keep people from getting paged.

The error handling had been done as:

except botocore.exceptions.ClientError as e:    
    raise Exception("Could not issue copy command: %s" % e)

Initially, I tried:

except botocore.exceptions.ClientError as e:
    # Doesn't work: e is an exception object, not a string
    if 'DBSnapshotAlreadyExists' in e:
        pass
    else:
        raise Exception("Could not issue copy command: %s" % e)

That didn’t work: the exception object isn’t a string, so the membership test doesn’t behave the way it reads. The reliable way is to match on the structured error code in the response:

except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == 'DBSnapshotAlreadyExists':
        pass
    else:
        raise Exception("Could not issue copy command: %s" % e)

Which works.
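As a sanity check, you can exercise the pattern without touching AWS by using a stand-in class that mimics the shape of botocore.exceptions.ClientError (the real exception exposes the same response['Error']['Code'] structure). The class and helper names below are hypothetical, for illustration only:

```python
# Stand-in for botocore.exceptions.ClientError, mimicking the
# e.response['Error']['Code'] structure the real exception exposes.
class FakeClientError(Exception):
    def __init__(self, code, message):
        super().__init__(message)
        self.response = {'Error': {'Code': code, 'Message': message}}

def handle_copy_error(e):
    # Swallow only the duplicate-snapshot error; re-raise anything
    # else wrapped the same way the original handler did.
    if e.response['Error']['Code'] == 'DBSnapshotAlreadyExists':
        return 'ignored'
    raise Exception("Could not issue copy command: %s" % e)

# The duplicate-snapshot error is silenced...
dup = FakeClientError('DBSnapshotAlreadyExists', 'snapshot already exists')
print(handle_copy_error(dup))  # ignored

# ...while any other client error still raises and pages someone.
other = FakeClientError('Throttling', 'Rate exceeded')
try:
    handle_copy_error(other)
except Exception as exc:
    print('re-raised: %s' % exc)
```

Matching on the structured code rather than substring-searching the message also means the check won’t silently break if AWS rewords the error text.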