dannyman.toldme.com


Ganeti, Linux, Technical

Ganeti: Segregate VMs from Running on the Same Hardware

Link: https://dannyman.toldme.com/2017/02/16/ganeti-exclusion-tags/

We have been using this great VM management software called Ganeti. It was developed at Google, and I love it for many reasons.

It is frustrating that relatively few people know about and use Ganeti, especially in the Silicon Valley.

Recently I had an itch to scratch. At the last Ganeti Conference I heard tell that one could use tags to tell Ganeti to keep instances from running on the same node. This is another excellent feature: if you have two or more web servers, for example, you don’t want them to end up getting migrated to the same hardware.

Unfortunately, the documentation is a little obtuse, so I posted to the Ganeti mailing list and got the clues lined up.

First, you set a cluster exclusion tag, like so:

sudo gnt-cluster add-tags htools:iextags:role

This says “set up an exclusion tag called role.” (The htools:iextags: prefix tells Ganeti’s htools allocator that instance tags beginning with role: define exclusion groups.)

Then, when you create your instances, you add, for example: --tags role:prod-www

The instances created with the tag role:prod-www will be segregated onto different hardware nodes.
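You can confirm the tags took effect with the list-tags subcommands (these are standard gnt-* commands; the instance name here is hypothetical):

sudo gnt-cluster list-tags               # should include htools:iextags:role
sudo gnt-instance list-tags prod-www0    # should include role:prod-www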

I did some testing to figure this out. First, as a control, create a bunch of small test instances:

sudo gnt-instance add ... ganeti-test0
sudo gnt-instance add ... ganeti-test1
sudo gnt-instance add ... ganeti-test2
sudo gnt-instance add ... ganeti-test3
sudo gnt-instance add ... ganeti-test4

Results:

$ sudo gnt-instance list | grep ganeti-test
ganeti-test0 kvm snf-image+default ganeti06-29 running 1.0G
ganeti-test1 kvm snf-image+default ganeti06-29 running 1.0G
ganeti-test2 kvm snf-image+default ganeti06-09 running 1.0G
ganeti-test3 kvm snf-image+default ganeti06-32 running 1.0G
ganeti-test4 kvm snf-image+default ganeti06-24 running 1.0G

As expected, some overlap in service nodes: ganeti-test0 and ganeti-test1 both landed on ganeti06-29.

Next, delete the test instances, set a cluster exclusion tag for “role” and re-create the instances:

sudo gnt-cluster add-tags htools:iextags:role
sudo gnt-instance add ... --tags role:ganeti-test ganeti-test0
sudo gnt-instance add ... --tags role:ganeti-test ganeti-test1
sudo gnt-instance add ... --tags role:ganeti-test ganeti-test2
sudo gnt-instance add ... --tags role:ganeti-test ganeti-test3
sudo gnt-instance add ... --tags role:ganeti-test ganeti-test4

Results?

$ sudo gnt-instance list | grep ganeti-test
ganeti-test0 kvm snf-image+default ganeti06-29 running 1.0G
ganeti-test1 kvm snf-image+default ganeti06-09 running 1.0G
ganeti-test2 kvm snf-image+default ganeti06-32 running 1.0G
ganeti-test3 kvm snf-image+default ganeti06-24 running 1.0G
ganeti-test4 kvm snf-image+default ganeti06-23 running 1.0G

Yay! The instances are allocated to five distinct nodes!

But am I sure I understand what I am doing? Nuke the instances and try another example: 2x “www” instances and 3x “app” instances:

sudo gnt-instance add ... --tags role:prod-www ganeti-test0
sudo gnt-instance add ... --tags role:prod-www ganeti-test1
sudo gnt-instance add ... --tags role:prod-app ganeti-test2
sudo gnt-instance add ... --tags role:prod-app ganeti-test3
sudo gnt-instance add ... --tags role:prod-app ganeti-test4

What do we get?

$ sudo gnt-instance list | grep ganeti-test
ganeti-test0 kvm snf-image+default ganeti06-29 running 1.0G # prod-www
ganeti-test1 kvm snf-image+default ganeti06-09 running 1.0G # prod-www
ganeti-test2 kvm snf-image+default ganeti06-29 running 1.0G # prod-app
ganeti-test3 kvm snf-image+default ganeti06-32 running 1.0G # prod-app
ganeti-test4 kvm snf-image+default ganeti06-24 running 1.0G # prod-app

Yes! The two prod-www instances are allocated to different nodes, and when the tag changes to prod-app, Ganeti is free to go back to ganeti06-29 for the first prod-app instance. The three prod-app instances are likewise spread across distinct nodes.



Linux, Technical, Technology

Duct Tape Ops

Link: https://dannyman.toldme.com/2017/01/27/duct-tape-ops/

Threading specification, via Wikipedia

Yesterday we tried out Slack’s new thread feature, and were left scratching our heads over the utility of that. Someone mused that Slack might be running out of features to implement, and I recalled Zawinski’s Law:

Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.

I think this is a tad ironic for Slack, given that some people believe that Slack makes email obsolete and useless. Anyway, I had ended up on Jamie Zawinski’s (jwz) Wikipedia entry and there was this comment about jwz’s law:

Eric Raymond comments that while this law goes against the minimalist philosophy of Unix (a set of “small, sharp tools”), it actually addresses the real need of end users to keep together tools for interrelated tasks, even though for a coder implementation of these tools are clearly independent jobs.

This led to The Duct Tape Programmer, which I’ll excerpt:

Sometimes you’re busy banging out the code, and someone starts rattling on about how if you use multi-threaded COM apartments, your app will be 34% sparklier, and it’s not even that hard, because he’s written a bunch of templates, and all you have to do is multiply-inherit from 17 of his templates, each taking an average of 4 arguments … your eyes are swimming.

And the duct-tape programmer is not afraid to say, “multiple inheritance sucks. Stop it. Just stop.”

You see, everybody else is too afraid of looking stupid … they sheepishly go along with whatever faddish programming craziness has come down from the architecture astronauts who speak at conferences and write books and articles and are so much smarter than us that they don’t realize that the stuff that they’re promoting is too hard for us.

“At the end of the day, ship the fucking thing! It’s great to rewrite your code and make it cleaner and by the third time it’ll actually be pretty. But that’s not the point—you’re not here to write code; you’re here to ship products.”

jwz wrote a response in his blog:

To the extent that he puts me up on a pedestal for merely being practical, that’s a pretty sad indictment of the state of the industry.

In a lot of the commentary surrounding his article elsewhere, I saw all the usual chestnuts being trotted out by people misunderstanding the context of our discussions: A) the incredible time pressure we were under and B) that it was 1994. People always want to get in fights over the specifics like “what’s wrong with templates?” without realizing the historical context. Guess what, you young punks, templates didn’t work in 1994.

As an older tech worker, I have found that I am more “fad resistant” than I was in my younger days. There’s older technology that may not be pretty but I know it works, and there’s new technology that may be shiny, but immature, and will take a lot of effort to get working. As time passes, shiny technology matures and becomes more practical to use.

(I am looking forward to trying “Kubernetes in a Can”)



Technical

A Complex Cron Entry Explained

Link: https://dannyman.toldme.com/2017/01/16/crontab-backtick-returncode-conditional-command/

A crontab entry I felt worth explanation, as it illustrates a few Unix concepts:

$ crontab -l
# m h dom mon dow command
*/5 8-18 * * mon-fri grep -q open /proc/acpi/button/lid/LID0/state && mkdir -p /home/djh/Dropbox/webcam/`date +%Y%m%d` && fswebcam -q -d /dev/video0 -r 1920x1080 /home/djh/Dropbox/webcam/%Y%m%d/%H%M.jpg ; file /home/djh/Dropbox/webcam/`date +%Y%m%d/%H%M`.jpg | grep -q 1920x1080 || fswebcam -q -d /dev/video1 -r 1920x1080 /home/djh/Dropbox/webcam/%Y%m%d/%H%M.jpg

That is a huge gob of text. Let me unpack it into parts:

*/5 8-18 * * mon-fri

Run weekdays, 8AM to 6PM, every five minutes.

grep -q open /proc/acpi/button/lid/LID0/state &&
mkdir -p /home/djh/Dropbox/webcam/`date +%Y%m%d` &&
fswebcam -q -d /dev/video0 -r 1920x1080 /home/djh/Dropbox/webcam/%Y%m%d/%H%M.jpg

Check if the lid on the computer is open by looking for the word “open” in /proc/acpi/button/lid/LID0/state AND
(… if the lid is open) make a directory with today’s date AND
(… if the directory was made) take a timestamped snapshot from the web cam and put it in that folder.

Breaking it down a bit more:

grep -q means grep “quietly” … we don’t need to print that the lid is open; we only care about the return code. Here is an illustration:

$ while true; do grep open /proc/acpi/button/lid/LID0/state ; echo $? ; sleep 1; done
state: open
0
state: open
0
# Lid gets shut
1
1
# Lid gets opened
state: open
0
state: open
0

The $? is the “return code” from the grep command. In shell, zero means true and non-zero means false, which allows us to conveniently construct conditional commands. Like so:

$ while true; do grep -q open /proc/acpi/button/lid/LID0/state && echo "open lid: take a picture" || echo "shut lid: take no picture" ; sleep 1; done
open lid: take a picture
open lid: take a picture
shut lid: take no picture
shut lid: take no picture
open lid: take a picture
open lid: take a picture

There is some juju in making the directory:

mkdir -p /home/djh/Dropbox/webcam/`date +%Y%m%d`

First is the -p flag. That makes every part of the path, if needed (Dropbox .. webcam ..), but it also makes mkdir chill if the directory already exists:

$ mkdir /tmp ; echo $?
mkdir: cannot create directory ‘/tmp’: File exists
1
$ mkdir -p /tmp ; echo $?
0

Then there is the backtick substitution. The date command can format output (read man date and man strftime …) and you can use backtick substitution to stuff the output of one command into the command line of another.

$ date +%A
Monday
$ echo "Today is `date +%A`"
Today is Monday

Once again, from the top:

grep -q open /proc/acpi/button/lid/LID0/state &&
mkdir -p /home/djh/Dropbox/webcam/`date +%Y%m%d` &&
fswebcam -q -d /dev/video0 -r 1920x1080 /home/djh/Dropbox/webcam/%Y%m%d/%H%M.jpg ;
file /home/djh/Dropbox/webcam/`date +%Y%m%d/%H%M`.jpg | grep -q 1920x1080 ||
fswebcam -q -d /dev/video1 -r 1920x1080 /home/djh/Dropbox/webcam/%Y%m%d/%H%M.jpg

Here is where it gets involved. There are two cameras on this mobile workstation. One is the internal camera, which can do 720 pixels, and there is an external camera, which can do 1080. I want to use the external camera, but there is no consistency for the device name. (The external device is video0 if it is present at boot, else it is video1.)

Originally, I wanted to do like so:

fswebcam -q -d /dev/video0 -r 1920x1080 || fswebcam -q -d /dev/video1 -r 1920x1080

Unfortunately, fswebcam is a real trooper: if it cannot take a picture at 1920×1080, it will take what picture it can and output that. This is why the whole cron entry reads as:

Check if the lid on the computer is open AND
(… if the lid is open) make a directory with today’s date AND
(… if the directory was made) take a timestamped snapshot from web cam 0
Check if the timestamped snapshot is 1920×1080 ELSE
(… if the snapshot is not 1920×1080) take a timestamped snapshot from web cam 1
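For readability, here is my reading of the same logic as a plain shell script (a sketch, not what actually runs from my crontab; fswebcam expands the strftime %-codes in the filename itself, but in a script we can just call date):

#!/bin/sh
# Take a webcam snapshot only when the laptop lid is open,
# preferring whichever camera can actually deliver 1080p.
DIR=/home/djh/Dropbox/webcam/`date +%Y%m%d`
PIC=$DIR/`date +%H%M`.jpg

grep -q open /proc/acpi/button/lid/LID0/state || exit 0  # lid shut: do nothing
mkdir -p "$DIR"
fswebcam -q -d /dev/video0 -r 1920x1080 "$PIC"
# fswebcam degrades to a lower resolution rather than failing outright,
# so check the result and fall back to the other camera if it is not 1080p.
file "$PIC" | grep -q 1920x1080 || fswebcam -q -d /dev/video1 -r 1920x1080 "$PIC"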

Sample output from webcam. Happy MLK Day.

Why am I taking these snapshots? I do not really know what I might do with them until I have them. Modern algorithms could analyze “time spent at workstation” and give feedback on posture, maybe identify “mood” and correlate that over time … we’ll see.

OR not.



Technical, Technology

It Wasn’t That Bad

Link: https://dannyman.toldme.com/2016/12/29/it-wasnt-that-bad/

Friend: Dang it Apple my iPhone upgrade bricked the phone and I had to reinstall from scratch. This is a _really_ bad user experience.

Me: If you can re-install the software, the phone isn’t actually “bricked” …

Friend: I had to do a factory restore through iTunes.

Me: That’s not bricked that’s just extremely awful software.

(Someone else mentions Windows.)

Me: Never had this problem with an Android device. ;)

Friend: With Android phones you are constantly waiting on the carriers or handset makers for updates.

Me: That is why I buy my phones from Google.

Friend: Pixel looks enticing, I still like iPhone better. I am a firm believer that people stick with what they know, and you are unlikely to sway them if it works for them.

Me: Yeah just because you have to reinstall your whole phone from scratch doesn’t make it a bad experience.



FreeBSD, Linux, Python, Technical, Technology

VMs vs Containers

Link: https://dannyman.toldme.com/2016/08/24/vms-vs-containers/

I’ve been a SysAdmin for … since the last millennium. Long enough to see certain fads come and go and come again. There was a time when folks got keen on the advantages of chroot jails, but that time faded, then resurged in the form of containers! All the rage!

My own bias is that bare metal systems and VMs are what I am used to: a Unix SysAdmin knows how to manage systems! The advantages and desire for more contained environments seems to better suit certain types of programmers, and I suspect that the desire for chroot-jail-virtualenv-containers may be a reflection of programming trends.

On the one hand, you’ve got say C and Java … write, compile, deploy. You can statically link C code and put your Java all in a big jar, and then to run it on a server you’ll need say a particular kernel version, or a particular version of Java, and some light scaffolding to configure, start/stop and log. You can just write up a little README and hand that stuff off to the Ops team and they’ll figure out the mysterious stuff like chmod and the production database password. (And the load balancer config..eek!)

On the other hand, if you’re hacking away in an interpreted language: say Python or R, you’ve got a growing wad of dependencies, and eventually you’ll get to a point where you need the older version of one dependency and a bleeding-edge version of another and keeping track of those dependencies and convincing the OS to furnish them all for you … what comes in handy is if you can just wad up a giant tarball of all your stuff and run it in a little “isolated” environment. You don’t really want to get Ops involved because they may laugh at you or run in terror … instead you can just shove the whole thing in a container, run that thing in the cloud, and now without even ever having to understand esoteric stuff like chmod you are now DevOps!

(Woah: Job Security!!)

From my perspective, containers start as a way to deploy software. Nowadays there’s a bunch of scaffolding for containers to be a way to deploy and manage a service stack. I haven’t dealt with either case, and my incumbent philosophy tends to be “well, we already have these other tools” …

Container Architecture is basically just Legos mixed with Minecraft (CC: Wikipedia)

Anyway, as a Service Provider (… I know “DevOps” is meant to get away from that ugly idea that Ops is a service provider …) I figure if containers help us ship the code, we’ll get us some containers, and if we want orchestration capabilities … well, we have what we have now and we can look at bringing up other new stuff if it will serve us better.

ASIDE: One thing that has put me off containers thus far is not so much that they’re reinventing the wheel, so much that I went to a DevOps conference a few years back and it seemed every single talk was about how we have been delivered from the evil sinful ways of physical computers and VMs and the tyranny of package managers and chmod and load balancers and we have found the Good News that we can build this stuff all over in a new image and it will be called Docker or Mesos or Kubernetes but careful the API changed in the last version but have you heard we have a thing called etcd which is a special thing to manage your config files because nobody has ever figured out an effective way to … honestly I don’t know for etcd one way or another: it was just the glazed, fervent stare in the eyes of the guy who was explaining to me the virtues of etcd …

It turns out it is not just me who is a curmudgeonly contrarian: a lot of people are freaked out by the True Believers. But that needn’t keep us from deploying useful tools, and my colleague reports that Kubernetes for containers seems awfully similar to the Ganeti we are already running for VMs, so let us bootstrap some infrastructure and provide some potentially useful services to the development team, shall we?



Technical, Technology, Testimonials, WordPress

Testimonial: SSLMate

Link: https://dannyman.toldme.com/2016/07/19/testimonial-sslmate/

I recently started using sslmate to manage SSL certificates. SSL is one of those complicated things you deal with rarely, so it has historically been a pain in the neck.

But sslmate makes it all easy … you install the sslmate command and can generate, sign, and install certificates from the command line. You then check your email for a verification link when getting a cert signed … and you’re good.

The certificates auto-renew annually, assuming you click the email. I did this for an important cert yesterday. Another thing you do (sslmate walks you through all these details) is set up a cron.
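For flavor, the happy path looks roughly like this (going from memory of their documented workflow; the domain is a placeholder):

sudo sslmate buy www.example.com   # generates a key and CSR, orders the cert,
                                   # then waits for you to click the approval email
# the key and signed cert land under /etc/sslmate/, ready for your web server config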

This morning at 6:25am the cron got run on our servers … with minimal intervention (I had to click a confirmation link in an email yesterday) our web servers are now running on renewed certs … one less pain in the neck.

So … next time you have to deal with SSL I would say “go to sslmate.com and follow the instructions and you’ll be in a happy place.”



Ansible, Linux, Technical

Ansible: Copy Agent Keys to Remote Servers

Link: https://dannyman.toldme.com/2016/07/01/ansible-use-ssh-add-to-set-authorized_key/

Background: you use SSH and ssh-agent and you can get a list of keys you presently have “ready to fire” via:

djh@djh-MBP:~/devops$ ssh-add -l
4096 SHA256:JtmLhsoPoSfBsFnrIsZc6XNScJ3ofghvpYmhYGRWwsU .ssh/id_ssh (RSA)

Aaaand, you want to set up passwordless SSH for the remote hosts in your Ansible inventory. There are lots of examples that involve file lookups for blah blah blah dot pub, but why not just get a list from the agent?

A playbook:

- hosts: all
  gather_facts: no
  tasks:
    - name: Get my SSH public keys
      local_action: shell ssh-add -L
      register: ssh_keys

    - name: List my SSH public keys
      debug: msg="{{ ssh_keys.stdout }}"

    - name: Install my SSH public keys on Remote Servers
      authorized_key: user={{lookup('env', 'USER')}} key="{{item}}"
      with_items: "{{ ssh_keys.stdout_lines }}"

This is roughly based on a Stack Overflow answer.

The two tricky bits are:
1) Running a local_action to get a list of SSH keys.
2) Doing with_items over the registered stdout_lines to iterate if there are multiple keys.

A bonus tricky bit:
3) You may need to install sshpass if you do not already have key access to the remote servers. Last I knew, the brew command on Mac OS will balk at you for trying to install this.
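For reference, a first run might look like this (a sketch; the inventory and playbook names are made up, and -k/--ask-pass prompts for the SSH password, which is what needs sshpass):

ansible-playbook -i hosts.ini -k install-ssh-keys.yml
# once the keys are in place, subsequent runs can drop the -k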



Linux, News and Reaction, Technology

Ubuntu 16.04 Reactions

Link: https://dannyman.toldme.com/2016/04/21/ubuntu-16-04-reactions/

Xerus: an African ground squirrel.

Xerus: an African ground squirrel. CC: Wikipedia

I have misplaced my coffee mug. I’m glad to hear Ubuntu 16.04 LTS is out. “Codenamed ‘Xenial Xerus'” because computer people don’t already come off as a bunch of space cadets. Anyway, an under-caffeinated curmudgeon’s take:

The Linux kernel has been updated to the 4.4.6 longterm maintenance
release, with the addition of ZFS-on-Linux, a combination of a volume
manager and filesystem which enables efficient snapshots, copy-on-write
cloning, continuous integrity checking against data corruption, automatic
filesystem repair, and data compression.

Ah, ZFS! The last word in filesystems! How very exciting that after a mere decade we have stable support for it on Linux.

There’s a mention of the desktop: updates to LibreOffice and “stability improvements to Unity.” I’m not going to take that bait. No sir.

Ubuntu Server 16.04 LTS includes the Mitaka release of OpenStack, along
with the new 2.0 versions of Juju, LXD, and MAAS to save devops teams
time and headache when deploying distributed applications – whether on
private clouds, public clouds, or on developer laptops.

I honestly don’t know what these do, but my hunch is that they have their own overhead of time and headache. Fortunately, I have semi-automated network install of servers, Ganeti to manage VMs, and Ansible to automate admin stuff, so I can sit on the sidelines for now and hope that by the time I need it, Openstack is mature enough that I can reap its advantages with minimal investment.

Aside: My position on containers is the same position I have on Openstack, though I’m wondering if the containers thing may blow over before full maturity. Every few years some folks get excited about the possibility of reinventing their incumbent systems management paradigms with jails, burn a bunch of time blowing their own minds, then get frustrated with the limitations and go back to the old ways. We’ll see.

Anyway, Ubuntu keeps delivering:

Ubuntu 16.04 LTS introduces a new application format, the ‘snap’, which
can be installed alongside traditional deb packages. These two packaging
formats live quite comfortably next to one another and enable Ubuntu to
maintain its existing processes for development and updates.

YES YES YES YES YES YES YES OH snap OH MY LERD YES IF THERE IS ONE THING WE DESPERATELY NEED IT IS YET ANOTHER WAY TO MANAGE PACKAGES I AM TOTALLY SURE THESE TWO PACKAGING FORMATS WILL LIVE QUITE COMFORTABLY TOGETHER next to the CPANs and the CRANs and the PIPs and the … don’t even ask how the R packages work …

Further research reveals that they’ve replaced Python 2 with Python 3. No mention of that in the email announcement. I’m totally sure this will not yield any weird problems.



Technical

Divine the “Changelog” From Git

Link: https://dannyman.toldme.com/2016/04/12/divine-the-changelog-from-git/

I am having a tricky time with Ganeti, and the mailing list is not proving helpful. One factor is that I have two different versions in play. How does one divine the differences between these versions?

Git to the rescue! Along these lines:

git clone git://git.ganeti.org/ganeti.git # Clone the repo ...
cd ganeti
git branch -a                             # See what branches we have
git ls-remote --tags                      # See what tags we have
git checkout tags/v2.12.4                 # Check out the "old" branch/tag
git diff tags/v2.12.6                     # Diff "old" vs "new" branch/tag
# OH WAIT, IT IS EVEN EASIER THAN THIS! (Thanks, candlerb!)
# You don't have to check out a branch, just do this:
git diff v2.12.4 v2.12.6                  # Diff "old" vs "new"

And now I see the “diff” between 2.12.4 and 2.12.6, and the changes seem relevant to my issue.
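If you want something closer to an actual changelog than a raw diff, the commit messages between the two tags work nicely too (git log is standard; NEWS is the release notes file Ganeti happens to keep):

git log --oneline v2.12.4..v2.12.6   # commit subjects between the two tags
git diff v2.12.4 v2.12.6 -- NEWS     # or diff just the release notes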



Technical, Technology

Technology Journey Back in Time

Link: https://dannyman.toldme.com/2016/03/24/technology-journey-back-in-time/

It started when Tom Limoncelli shared a link to teens reacting to Windows 95.

In my mind, what is most unfortunate about that setup is that they did not get to experience Dial Up Networking via a modem. I think they would have been truly blown away. Alas, the Internet contains wonders, like this guy getting a 50 year old modem to work:

What could be more amazing than that?  How about this guy, with a 50 year old modem and a teletype, browsing the first web site via the first web browser, by means of a punch tape bookmark?

You’re welcome, nerds!



Technical

Load Balancer Config: when “Not Authorized” means “Yes”

Link: https://dannyman.toldme.com/2016/02/25/http-auth-ssl-load-balancer-401/

I wanted to share a clever load balancer config strategy I accidentally discovered. The use case is you want to make a web service available to clients on the Internet. Two things you’ll need are:

1) an authentication mechanism
2) encrypted transport (HTTPS)

You can wrap authentication around an arbitrary web app with HTTP auth. Easy and done.
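For example, creating the password file that nginx’s auth_basic reads in the config below (htpasswd comes with apache2-utils on Debian/Ubuntu; the username is made up):

sudo apt-get install apache2-utils          # provides htpasswd
sudo htpasswd -c /etc/nginx/.htpasswd djh   # -c creates the file; omit it to add more users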

For encrypted transport of web traffic, I now love sslmate. It is the greatest thing since sliced bread. Why?

1) Inexpensive SSL certs.
2) You order / install the certs from a command line.
3) They feed you the conf you probably need for your software.
4) Then you can put the auto-renew in cron.

So, for example, an nginx set up to answer on port 443, handle the SSL connection, do http auth, then proxy over to the actual service, running on port 12345:

server {
    listen 443;
    server_name example.com;
    location / {
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   Host      $http_host;
        proxy_pass         http://127.0.0.1:12345;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        auth_basic "Restricted";                    #For Basic Auth
        auth_basic_user_file /etc/nginx/.htpasswd;  #For Basic Auth
    }
    # Sample config from https://sslmate.com/help/buy
    ssl on;
    ssl_certificate_key /etc/sslmate/example.com.key;
    ssl_certificate /etc/sslmate/example.com.chained.crt;
    # Recommended security settings from https://wiki.mozilla.org/Security/Server_Side_TLS
    # &c.
}

The clever load balancer config? The health check is to hit the server(s) in the pool, request / via HTTPS, and expect a 401 response. The load balancer doesn’t know the application password, so if you don’t let it in, you must be doing something right. If someone mucks with the server configuration and disables HTTP AUTH, then the load-balancer will get 200 on its health checks, regard success as an error, and “fail safe” by taking the server out of the pool, thus preventing people from accessing the site without a password.
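You can fake the health check from a shell to convince yourself it works (standard curl flags; the backend hostname is made up):

$ curl -sk -o /dev/null -w '%{http_code}\n' https://backend0.example.com/
401
# 401 means auth is intact and the node stays in the pool;
# a 200 here would mean someone turned off HTTP auth.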

Tell the load balancer that success is not an acceptable outcome



Linux, Technical

Tech Tip: Self-Documenting Config Files

Link: https://dannyman.toldme.com/2016/02/09/tech-tip-self-documenting-config-files/

One of my personal “best practices” is to leave myself and my colleagues hints as to how to get the job done. Plenty of folks may be aware that they need to edit /etc/exports to add a client to an NFS server. I would guess that the filename and convention are decades old, but who among us, even the full-time Unix guy, recalls that you then need to reload the nfs-kernel-server process?

For example:

0-11:04 djh@fs0 ~$ head -7 /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# ***** HINT: After you edit this file, do: *****
#       sudo service nfs-kernel-server reload
# ***** HINT: run the command on the previous line! *****
#
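The hint encodes the whole workflow, which goes something like this (exportfs -v is just a handy way to confirm the reload took):

sudo vi /etc/exports                    # add the new client
sudo service nfs-kernel-server reload   # the step everyone forgets
sudo exportfs -v                        # confirm the new export is live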



Linux, Technical, Technology

Windows 10

Link: https://dannyman.toldme.com/2016/01/02/windows-10/

The other day I figured to browse Best Buy. I spied a 15″ Toshiba laptop, the kind that can pivot the screen 180 degrees into a tablet. With a full sized keyboard. And a 4k screen. And 12GB of RAM. For $1,000. The catch? A non-SSD 1TB hard drive and stock graphics. And … Windows 10.

But it appealed to me because I’ve been thinking I want a computer I can use on the couch. My home workstation is very nice, a desktop with a 4k screen, but it is very much a workstation. Especially because of the 4k screen it is poorly suited to sitting back and browsing … so, I went home, thought on it over dinner, then drove back to the store and bought a toy. (Oh boy! Oh boy!!)

Every few years I flirt with Microsoft stuff — trying to prove that despite the fact I’m a Unix guy I still have an open mind. I usually throw up my hands in exasperation after a few weeks. The only time I ever sort of appreciated Microsoft was around the Windows XP days; it was a pretty decent OS for managing folders full of pictures. A lot nicer than OS X, anyway.

This time, out of the gate, Windows 10 was a dog. The non-SSD hard drive slowed things down a great deal. Once I got up and running though, it isn’t bad. It took a little getting used to the sluggishness — a combination of my adapting to the trackpad mouse thing and I swear that under load the Windows UI is less responsive than what I’m used to. The 4k stuff works reasonably well … a lot of apps are just transparently pixel-doubled, which isn’t always pretty but it beats squinting. I can flip the thing around into a landscape tablet — which is kind of nice, though, given its size, a bit awkward — for reading. I can tap the screen or pinch around to zoom text. The UI, so far, is back to the good old Windows-and-Icons stuff old-timers like me are used to.

Mind you, I haven’t tried anything as nutty as setting up OpenVPN to auto-launch on user login. Trying to make that happen for one of my users at work on Windows 8 left me twitchy for weeks afterward.

Anyway, a little bit of time will tell … I have until January 15 to make a return. The use case is web browsing, maybe some gaming, and sorting photos which are synced via Dropbox. This will likely do the trick. As a little bonus, McAfee anti-virus is paid up for the first year!

I did try Ubuntu, though. Despite UEFI and all the secure boot crud, Ubuntu 15.10 managed the install like it was nothing, re-sizing the hard drive and all. No driver issues … touchscreen even worked. Nice! Normally, I hate Unity, but it is okay for a casual computing environment. Unlike Windows 10, though, I can’t three-finger-swipe-up to show all the windows. Windows+W will do that but really … and I couldn’t figure out how to get “middle mouse button” working on the track pad. For me, probably 70% of why I like Unix as an interface is the ease of copy-paste.

But things got really dark when I tried to try KDE and XFCE. Installing either kubuntu-desktop or xubuntu-desktop actually made the computer unusable. The first had a weird package conflict that caused X to just not display at all. I had to boot into safe mode and manually remove the kubuntu dependencies. The XFCE attempt was slightly less traumatic: it just broke all the window managers in weird ways until I again figured out how to manually remove the dependencies.

It is just as easy to pull up a Terminal on Windows 10 or Ubuntu … you hit Start and type “term” but Windows 10 doesn’t come with an SSH client, which is all I really ask. From what I can tell, my old friend PuTTY is still the State of the Art. It is like the 1990s never died.

Ah, and out of the gate, Windows 10 allows you multiple desktops. Looks similar to Mac. I haven’t really played with it but it is a heartening sign.

And the Toshiba is nice. If I return it I think I’ll look for something with a matte screen and maybe actual buttons around the track pad so that if I do Unix it up, I can middle-click. Oh, and maybe an SSD and nicer graphics … but you can always upgrade the hard drive after the fact. I prefer matte screens, and being a touch screen means this thing hoovers up fingerprints faster than you can say chamois.

Maybe I’ll try FreeBSD on the Linux partition. See how a very old friend fares on this new toy. :)



FreeBSD, Linux, Technical

Variability of Hard Drive Speed

Link: https://dannyman.toldme.com/2015/11/11/variability-of-hard-drive-speed/

Munin gives me this beautiful graph:

Read speed starts around 140MB/s then trails off linearly toward 100MB/s over the four hours it takes to read the entire disk.

This is the result of:

$ for s in a b c d; do echo ; echo sd${s} ; sudo dd if=/dev/sd${s} of=/dev/null; done                                                

sda
3907029168+0 records in
3907029168+0 records out
2000398934016 bytes (2.0 TB) copied, 17940.3 s, 112 MB/s

sdb
3907029168+0 records in
3907029168+0 records out
2000398934016 bytes (2.0 TB) copied, 15457.9 s, 129 MB/s

sdc
3907029168+0 records in
3907029168+0 records out
2000398934016 bytes (2.0 TB) copied, 15119.4 s, 132 MB/s

sdd
3907029168+0 records in
3907029168+0 records out
2000398934016 bytes (2.0 TB) copied, 16689.7 s, 120 MB/s

The back story is that I had a system with two bad disks, which seems a little weird. I replaced the disks and I am trying to kick some tires before I put the system back into service. The loop above says “read each disk in turn, in its entirety.” Prior to replacing the disks, a loop like the above would cause the bad disks, sdb and sdc, to abort the read before completing.

The disks in this system are 2TB 7200RPM SATA drives. sda and sdd are Western Digital, while sdb and sdc are HGST. This is in no way intended as a benchmark, but what I appreciate is the consistent pattern across the disks: throughput starts high and then gradually drops over time. What is going on? Well, these are platters of magnetic discs spinning at a constant speed. When you read a track on the outer part of the platter, you get more data than when you read from closer to the center.
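A rough back-of-the-envelope check: at constant RPM and roughly constant linear bit density, sequential throughput is proportional to track radius. Assuming typical-ish radii for a 3.5″ platter (guesses, not specs for these particular drives):

$ echo 'scale=2; 46/21' | bc   # outermost data track ~46mm, innermost ~21mm
2.19

So up to a 2:1 falloff from the first tracks to the last is plausible, and the observed drop from ~140 to ~100 MB/s sits comfortably within that envelope.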

I appreciate the clean visual illustration of this principle. On the compute cluster, I have noticed that we are more likely to hit performance issues when storage capacity gets tight. I had some old knowledge from FreeBSD that at a certain threshold, the OS optimizes disk writes for storage versus speed. I don’t know if Linux / ext4 operates that way. It is reassuring to understand that, due to the physical properties, traditional hard drives slow down as they fill up.



About Me, Technical, Technology

IT Strategy: Lose the Religion

Link: https://dannyman.toldme.com/2015/10/29/it-strategy-lose-the-religion/

High Scalability asks “What Ideas in IT Must Die?” My own response . . .

I have been loath to embrace containers, especially since I attended a conference that was supposed to be about DevOps but was 90% about all the various projects around Docker and the like. I worked enough with Jails in the past two decades to feel exasperation at the fervent religious belief in the advantages of reinventing an old wheel.

I attended a presentation about Kubernetes yesterday. Kubernetes is an orchestration tool for containers that sounds like a skin condition, but I try to keep an open mind. “Watch how fast I can re-allocate and scale my compute resources!” Well, I can do that more slowly but conveniently enough with my VM and config management tools . . .

. . . but I do see potential utility in that containers could offer a simpler deployment process for my devs.

There was an undercurrent there that Kubernetes is the Great New Religion that Will Unify All the Things. I used to embrace ideas like that, then I got really turned off by thinking like that, and now I know enough to see through the True Beliefs. I could deploy Kubernetes as an offering of my IT “Service Catalog” as a complementary option versus the bare metal, Hadoop clusters, VM, and other services I have to offer. It is not a Winner Take All play, but an option that could improve productivity for some of our application deployment needs.

At the end of the day, as an IT Guy, I need to be a good aggregator, offering my users a range of solutions and helping them adopt more useful tools for their needs. My metrics for success are whether or not my solutions work for my users, whether they further the mission of my enterprise, and whether they are cost-effective, in terms of time and money.

