
Drawing as a programmer

gameofworlds:

Today I want to tell my story of how drawing helps me write better code.

No more than 1.5 years ago I didn’t know how to draw anything more complex than a human-like figure made from 5 lines and 1 circle. Nor did I believe that I ever could or would. I was wrong.

[image]

If you can draw this, you can draw anything.


One day Hacker News had a nice article about books that help you improve in unusual ways, self-help books without intentionally being about self-help. The article had a nice pick of titles, but the most promising one was ‘Drawing on the Right Side of the Brain’, because the idea that drawing is actually easy hit me right away.

And when I finally got it and started to read… magic happened. This book is one of the best ‘how-to’ books ever written, and it does its job in a really special way. It doesn’t show you drawing techniques, and it doesn’t make you draw simple shapes as you would expect. It starts by proving, through simple exercises, that you can draw, and it reinforces your confidence in your drawing ability forever. And that was all I ever needed.

It really comes as a revelation.

I went through the book, finished almost every exercise, and stopped. I stopped because the book had already fulfilled my desire: the desire to know that I am not hopeless in the field of drawing. I didn’t know what to do with my newfound skill, so I switched back to what I’d done in my spare time before: coding the game this blog is about. And I didn’t draw. Until about 3 months ago.

You see, when you’re working on a video game, you naturally play and analyze a lot of other video games, just to become a better video game designer. And when you’re playing other video games, especially indie ones, your mind sometimes comes up with the notion: ‘Wow, this is really nice art, I wish I could draw like that’, and then instantly: ‘But there is nothing stopping me, because I know I can draw after that awesome book’. And after a couple of moments like this, I just couldn’t keep myself away from pencil and paper anymore.

I started sketching again. At first I did it after work hours, in my spare time, but then I noticed that, after acquiring some basic knack, I could draw simple sketches quickly, so I tried taking drawing breaks whenever I got stuck on a new coding problem. And to my surprise, my productivity rose.

Every software engineer worth a dime knows that programming is more about thinking than typing code (and if you do not agree, you should probably go do copywriting or something). When you work on a hard problem, you think, think, think, read an article on your current topic, think, maybe do some tinkering here and there, think again, get an ‘AHA’ moment, and then, only then, do the typing.

But there is a subtle problem with this approach, at least for me: I can procrastinate between those parts, because focused thinking is hard, and checking email and the Twitter feed is easy. It’s a known problem in the field, and I consider myself in a constant state of war with my slacker self, employing useful weapons which sadly do not address the core of the problem, but help me focus a lot nevertheless. And drawing is the latest weapon in my armory.

So now I take one or two drawing breaks a day, when I feel tired and in need of some mental replenishment. I draw simple sketches, copying images I like, or just doodling around. I give myself 20 minutes max, and that is more than enough in most cases. And I feel better afterwards.

[image]

2 breaks x 20 minutes = this pic and a less tired mind

I do not know why it works for me, but I think the two main reasons are:

  1. Drawing doesn’t break the workflow. Drawing is work too, just a different kind, maybe even the symmetric opposite of logical work like programming. Reading a Twitter feed, however, can break your workflow faster than a sledgehammer breaks a bull’s skull.
  2. Drawing ‘uses’ different parts of the brain than programming, and the brain kind of sorts out your previous thoughts while you draw. This is an absolutely unscientific observation, and you probably shouldn’t believe me. But I still think that’s what happens.

Recreation is not the only reason I draw, but it’s certainly a big one. And it feeds into the other reasons too. Hope you enjoyed the read!


parislemon:

Tom Gara spoke with MasterCard’s Carolyn Balfany about the new chip-based payment system known as EMV:

Part of the October 2015 deadline in our roadmap is what’s known as the ‘liability shift.’ Whenever card fraud happens, we need to determine who is liable for the costs. When the liability shift happens, what will change is that if there is an incidence of card fraud, whichever party has the lesser technology will bear the liability.

So if a merchant is still using the old system, they can still run a transaction with a swipe and a signature. But they will be liable for any fraudulent transactions if the customer has a chip card. And the same goes the other way – if the merchant has a new terminal, but the bank hasn’t issued a chip and PIN card to the customer, the bank would be liable.

The key point of a liability shift is not actually to shift liability around the market. It’s to create co-ordination in the market, so you have issuers and merchants investing in the migration at the same time. This way, we’re not shifting fraud around within the system; we’re driving fraud out of the system.

That’s an interesting way to make sure everyone is very much incentivized to update their systems. It sure seems like it will work. We’ll see. It’s ridiculous that the United States isn’t using the chips already.

Wait, America doesn’t use the chip-n-pin thing that we do in Europe?


10 Things We Forgot to Monitor

wordbitly:

There is always a set of standard metrics that are universally monitored (Disk Usage, Memory Usage, Load, Pings, etc). Beyond that, there are a lot of lessons that we’ve learned from operating our production systems that have helped shape the breadth of monitoring that we perform at bitly.

One of my favorite all-time tweets is from @DevOps_Borat

"Law of Murphy for devops: if thing can able go wrong, is mean is already wrong but you not have Nagios alert of it yet."

What follows is a small list of things we monitor at bitly that have grown out of those (sometimes painful!) experiences, and, where possible, little snippets of the stories behind them.

1 - Fork Rate

We once had a problem where IPv6 was intentionally disabled on a box via options ipv6 disable=1 and alias ipv6 off in /etc/modprobe.conf. This caused a large issue for us: each time a new curl object was created, modprobe would spawn, checking net-pf-10 to evaluate IPv6 status. This fork bombed the box, and we eventually tracked it down by noticing that the process counter in /proc/stat was increasing by several hundred a second. Normally you would only expect a fork rate of 1-10/sec on a production box with steady traffic.

check_fork_rate.sh
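
check_fork_rate.sh is bitly’s own check; as a minimal sketch of the same idea (the threshold and sampling interval below are arbitrary, not bitly’s values), you can sample the ‘processes’ counter in /proc/stat twice and compare the delta:

#!/bin/bash
# Sketch of a fork-rate check: sample the 'processes' counter in
# /proc/stat twice and alert if the per-second delta is too high.
# Threshold and interval are illustrative, not bitly's actual values.
THRESHOLD=${1:-100}   # forks per second considered critical
INTERVAL=5            # seconds between samples

first=$(awk '/^processes/ {print $2}' /proc/stat)
sleep "$INTERVAL"
second=$(awk '/^processes/ {print $2}' /proc/stat)
rate=$(( (second - first) / INTERVAL ))

if [ "$rate" -gt "$THRESHOLD" ]; then
    echo "CRITICAL: fork rate is ${rate}/sec"
    exit 2
fi
echo "OK: fork rate is ${rate}/sec"
exit 0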

2 - flow control packets

TL;DR: If your network configuration honors flow control packets and isn’t configured to disable them, they can temporarily cause dropped traffic. (If that doesn’t sound like an outage to you, you need your head checked.)

$ /usr/sbin/ethtool -S eth0 | grep flow_control
rx_flow_control_xon: 0
rx_flow_control_xoff: 0
tx_flow_control_xon: 0
tx_flow_control_xoff: 0

Note: Read this to understand how these flow control frames can cascade to switch-wide loss of connectivity if you use certain Broadcom NICs. You should also trend these metrics on your switch gear. While you’re at it, watch your dropped frames.
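
As a rough sketch of how you might watch these counters (the state-file path and the focus on only the xoff counters are assumptions, not bitly’s actual check), alert whenever they grow between runs:

#!/bin/bash
# Sketch: alert if the rx/tx flow control xoff counters have grown
# since the last run. State file location is an assumption.
IFACE=${1:-eth0}
STATE=/var/tmp/flow_control_${IFACE}.last

current=$(/usr/sbin/ethtool -S "$IFACE" | awk '/flow_control_xoff/ {sum += $2} END {print sum+0}')
previous=$(cat "$STATE" 2>/dev/null || echo "$current")
echo "$current" > "$STATE"

if [ "$current" -gt "$previous" ]; then
    echo "WARNING: flow control xoff counters grew by $((current - previous)) on $IFACE"
    exit 1
fi
echo "OK: no new flow control xoff frames on $IFACE"
exit 0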

3 - Swap In/Out Rate

It’s common to check for swap usage above a threshold, but even if you have a small quantity of memory swapped, it’s actually the rate it’s swapped in/out that can impact performance, not the quantity. This is a much more direct check for that state.

check_swap_paging_rate.sh
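
check_swap_paging_rate.sh is bitly’s check; a minimal sketch of the same idea (threshold and interval are arbitrary) reads the pswpin/pswpout counters from /proc/vmstat and compares the delta:

#!/bin/bash
# Sketch: measure pages swapped in/out per second from /proc/vmstat
# rather than total swap used. Threshold and interval are illustrative.
THRESHOLD=${1:-100}   # pages/sec in+out considered critical
INTERVAL=5

read_paging() { awk '/^pswp(in|out) / {sum += $2} END {print sum+0}' /proc/vmstat; }

first=$(read_paging)
sleep "$INTERVAL"
second=$(read_paging)
rate=$(( (second - first) / INTERVAL ))

if [ "$rate" -gt "$THRESHOLD" ]; then
    echo "CRITICAL: swapping ${rate} pages/sec"
    exit 2
fi
echo "OK: swapping ${rate} pages/sec"
exit 0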

4 - Server Boot Notification

Unexpected reboots are part of life. Do you know when they happen on your hosts? Most people don’t. We use a simple init script that triggers an ops email on system boot. This is valuable to communicate provisioning of new servers, and helps capture state change even if services handle the failure gracefully without alerting.

notify.sh
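
notify.sh is bitly’s script; as a minimal sketch of the idea (the ops address, the use of mailx and the SysV-style layout are assumptions), an init script only has to send mail on start:

#!/bin/bash
# Sketch of a boot-notification init script (SysV style). The ops
# address and use of mailx are assumptions for illustration.
# chkconfig: 2345 99 01
# description: emails ops when the host boots
case "$1" in
  start)
    echo "$(hostname) booted at $(date)" | mailx -s "boot: $(hostname)" ops@example.com
    ;;
  *)
    # nothing to do on stop/restart
    ;;
esac
exit 0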

5 - NTP Clock Offset

If not monitored, yes, one of your servers is probably off. If you’ve never thought about clock skew you might not even be running ntpd on your servers. Generally there are 3 things to check for. 1) That ntpd is running, 2) Clock skew inside your datacenter, 3) Clock skew from your master time servers to an external source.

We use check_ntp_time for this check.
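
A typical invocation of that standard Nagios plugin looks like the following; the plugin path, reference server and offset thresholds here are illustrative:

# Plugin path, reference server and offset thresholds are illustrative.
/usr/lib/nagios/plugins/check_ntp_time -H pool.ntp.org -w 0.5 -c 1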

6 - DNS Resolutions

Internal DNS - It’s a hidden part of your infrastructure that you rely on more than you realize. The things to check are: 1) local resolution from each server, 2) if you run local DNS servers in your datacenter, both resolution against them and the quantity of queries they serve, 3) availability of each upstream DNS resolver you use.

External DNS - It’s good to verify that your external domains resolve correctly against each of your published external nameservers. At bitly we also rely on several ccTLDs, and we monitor those authoritative servers directly as well (yes, it has happened that all the authoritative nameservers for a TLD were offline).
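
As a sketch of the external side (the domain below is a placeholder), you can walk the published NS records and verify that each nameserver actually answers:

#!/bin/bash
# Sketch: verify a domain resolves against each of its published
# nameservers. Domain is a placeholder; the NS list comes from dig.
DOMAIN=${1:-example.com}

for ns in $(dig +short NS "$DOMAIN"); do
    if ! dig +short @"$ns" "$DOMAIN" A | grep -q .; then
        echo "CRITICAL: $DOMAIN does not resolve against $ns"
        exit 2
    fi
done
echo "OK: $DOMAIN resolves against all published nameservers"
exit 0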

7 - SSL Expiration

It’s the thing everyone forgets about because it happens so infrequently. The fix is easy: just check it and get alerted with enough lead time to renew your SSL certificates.

define command{
    command_name    check_ssl_expire
    command_line    $USER1$/check_http --ssl -C 14 -H $ARG1$
}
define service{
    host_name               virtual
    service_description     bitly_com_ssl_expiration
    use                     generic-service
    check_command           check_ssl_expire!bitly.com
    contact_groups          email_only
    normal_check_interval   720
    retry_check_interval    10
    notification_interval   720
}
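
For a quick manual spot check outside Nagios, openssl can print the certificate’s expiry date directly; the hostname is just the same example used in the config above:

# Print the certificate's notAfter date for a quick manual check.
echo | openssl s_client -connect bitly.com:443 -servername bitly.com 2>/dev/null \
    | openssl x509 -noout -enddate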

8 - DELL OpenManage Server Administrator (OMSA)

We run bitly split across two data centers, one is a managed environment with DELL hardware, and the second is Amazon EC2. For our DELL hardware it’s important for us to monitor the outputs from OMSA. This alerts us to RAID status, failed disks (predictive or hard failures), RAM Issues, Power Supply states and more.
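
As a rough sketch only (the omreport sections and grep pattern below are illustrative, and a dedicated plugin such as check_openmanage does a far more thorough job), a check can scrape omreport output for anything that is not reporting Ok:

#!/bin/bash
# Sketch: scrape OMSA's omreport output for non-Ok hardware status.
# Sections and pattern are illustrative; not bitly's actual check.
for section in chassis "storage vdisk" "storage pdisk controller=0"; do
    if omreport $section | grep -Ei 'status.*(critical|degraded|failed)' >/dev/null; then
        echo "CRITICAL: omreport $section reports a non-Ok status"
        exit 2
    fi
done
echo "OK: OMSA reports healthy hardware"
exit 0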

9 - Connection Limits

You probably run things like memcached and mysql with connection limits, but do you monitor how close you are to those limits as you scale out application tiers?

Related to this is addressing the issue of processes running into file descriptor limits. We make a regular practice of running services with ulimit -n 65535 in our run scripts to minimize this. We also set Nginx worker_rlimit_nofile.
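
As a sketch of the file descriptor side (the warning percentage is arbitrary, and you would normally wrap this per service), you can compare the open fd count under /proc with the process’s soft limit:

#!/bin/bash
# Sketch: warn when a process nears its open-file limit by comparing
# the /proc/<pid>/fd count with the soft limit in /proc/<pid>/limits.
PID=${1:?usage: $0 <pid>}
WARN_PCT=${2:-80}

limit=$(awk '/Max open files/ {print $4}' "/proc/$PID/limits")
open=$(ls "/proc/$PID/fd" | wc -l)
pct=$(( open * 100 / limit ))

if [ "$pct" -ge "$WARN_PCT" ]; then
    echo "WARNING: pid $PID using $open of $limit file descriptors (${pct}%)"
    exit 1
fi
echo "OK: pid $PID using $open of $limit file descriptors (${pct}%)"
exit 0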

10 - Load Balancer Status

We configure our load balancers with a health check which we can easily force to fail in order to have any given server removed from rotation. We’ve found it important to have visibility into the health check state, so we monitor and alert based on the same health check. (If you use EC2 load balancers, you can monitor the ELB state from Amazon’s APIs.)
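
One common pattern (an assumption here, not necessarily how bitly wires it) is a flag file that forces the health check to fail, plus a monitor that polls the same URL the load balancer does:

#!/bin/bash
# Sketch: monitor the same health-check URL the load balancer polls.
# URL and flag-file convention are assumptions; touching the flag file
# forces the check (and the LB) to fail, pulling the host from rotation.
URL=${1:-http://localhost:8080/health}

if [ -f /var/run/healthcheck.disable ]; then
    echo "WARNING: health check deliberately disabled on this host"
    exit 1
fi

code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$URL")
if [ "$code" != "200" ]; then
    echo "CRITICAL: health check returned HTTP $code"
    exit 2
fi
echo "OK: health check returned HTTP 200"
exit 0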

Various Other things to watch

New entries written to Nginx error logs, service restarts (assuming you have something in place to auto-restart them on failure), NUMA stats, new process core dumps (great if you run any C code).

EOL

This scratches the surface of how we keep bitly stable, but if that’s an itch you like scratching, we’re hiring.