Network Graphs: Part I

Alright, a giant Twitter rant about a monitoring system with a shitastic installer turned into a brief discussion with @neojima about graphing systems, so this little bitty is inspired by him.

Over the years I have mentioned the work I did on network data graphs while I worked at $weDropPackets. The brief backstory, for the few of you who don’t know: I used to work for an ISP, and I was the everything guy. When I started, we were using MRTG. Now, MRTG is great for what it does, but it has some shortcomings. For starters, it can only poll and display two data points. This seems to have led to the industry standard for all network traffic graphs: data in, and data out. It also cannot display negative numbers. That limitation doesn’t matter for traffic data, but plenty of other data points can be negative. MRTG, out of the box, also doesn’t scale very well at all. The base config and mode of operation regenerate the images every single time new data is ingested. This gets very disk I/O intensive very fast. There is a trick to do away with this, but given that I was faced with incoming staff members and graphs that provided very little actual, useful data for the untrained eye, I decided to start mucking about with some alternatives.

What led me to settle on writing my own code that used RRDTool as the storage was a hideously ugly little program called drraw. While ugly, seriously ugly, it is a tremendously useful program. For one thing, it is stupid simple to set up. But more importantly, you throw RRD files at it, and it exposes, via a powerful yet intuitive web interface, all of the possible ways that RRDTool can manipulate and display that data. It then shows you how to get your web based efforts into the RRDTool command line, which I then translated into perl’s RRDTool::OO. I wasn’t about to take my perl display code and fill it with system(“rrdtool foo bar baz….”); I have my dignity. In short, it is an amazing prototyping tool.

So: MRTG had shortcomings, and we were onboarding new people who were less technically proficient. A better analysis system for tier-1 support would be great, right? The owner never considered the possibility of doing something different, nor did I ever ask him. I just did it, and it turned out great enough that, as of at least mid 2019, they still use my code and ideas.

For the uninitiated, we were a WISP, that is Wireless Internet Service Provider in this context, the acronym has been overloaded a few times now. We used primarily Cambium’s Canopy product line for our last mile delivery to homes and businesses. The classic way almost all WISPs poll and display data, if they do it at all, is to track Received Signal Strength Indicator (RSSI), and Signal to Noise Ratio (SNR). RSSI is reported by the radio in decibels (dBm), which, due to the power levels involved, is a negative number. SNR is a positive number. So, we take a bit of customer data (this is from my 2014 presentation slide deck on this subject), and attempt to shoehorn these two values into something useful, here is the result. The green is the RSSI, blue is SNR.

So here we have a one week view of RSSI and SNR. Since MRTG doesn’t do negative numbers, we had to manipulate the data as it was collected, so the displayed data is actually offset by 86, -86 dBm being a level low enough that basically no SM will hold a session to the AP. So, zero on this graph is actually -86 dBm, but only for the green stuff. Confused yet? Incoming CSRs sure were. Because if you sign into the CPE, it will report an actual negative number, this one is probably around -68 dBm on average. The blue SNR is displayed as an offset from zero, actually zero this time. The higher the line, the more noise you have. So, that is fairly intuitive. Polling this is a pain in the butt, because you have to settle on your data values at poll time, not interpretation time, meaning the MRTG poller had to be shunted with a little perl script that manipulated the data prior to storage.
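The shunt itself was tiny. Here it is hypothetically in Python rather than the original perl (the function and constant names are mine, not from the production script):

```python
# MRTG can only store positive numbers, so shift RSSI up by 86 before
# storage; -86 dBm is a level low enough that basically no SM will
# hold a session to the AP.
FLOOR_DBM = -86

def offset_rssi(rssi_dbm: int) -> int:
    """Map a real, negative RSSI reading onto MRTG's positive scale."""
    # -68 dBm is stored as 18; the graph's zero line means -86 dBm.
    return max(0, rssi_dbm - FLOOR_DBM)
```

So a healthy -68 dBm customer graphs at 18, and anything at or below the floor pins to zero, which is exactly the confusing offset the CSRs had to keep in their heads.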

Now take a look at the chop on the 21st. What does that mean? The signal dropped, but did the customer lose service? MRTG also has a fine habit of smudging data points together when a gap exists in the actual RRD file, so, short of checking the actual database for gaps, we don’t know if it ever failed to poll the unit. Now take a look at some lesser chop on the 22nd, 24th, and 25th. That doesn’t look so bad, does it? Probably just a little bad weather, but nothing service affecting.

Good lord we were wrong. When I developed a better tool, the first thing we did was start finding issues we never knew existed.

First, I wanted to store more than two data points; RRDTool can do this, while MRTG, despite being based on RRDTool, cannot. Second, I wanted to store the actual values reported, not pre-manipulate them into something I could shoehorn some level of utility out of later. Since I also wanted to write cronjobs that analyze CPE data in bulk for large scale reporting, it would be far easier if the stored data were consistently sane. Third, I wanted to display one hell of a lot more data in a single graph, without overloading the viewer.

Drraw prototyping had given me the knowledge and inspiration I needed. For starters, one incredibly underused feature of RRDTool is TIC marks. Not all data has to be displayed as a line graph; you can also simply display a single point of data, based on a conditional. Oh yeah, did you know RRDTool has a fully featured Reverse Polish Notation (RPN) calculator, and supports conditionals? Most people do not, but it is insanely powerful. RRDTool also supports labels and averages, and can generate text almost anywhere you want. The feature list goes on and on.

So, what can I do with this information? Let’s find out. I have an idea! Wouldn’t it be great if I could show the CSR whether the radio ever dropped its session with the access point, in a way that lined up with some signal events? Canopy SMs report their session uptime in seconds; if they drop association, it returns to zero. With this in mind, we now start polling the same customer for a third data item, its SNMP reported session uptime. If the value is ever less than the polling interval, we draw a TIC mark at the top of the graph.
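In raw RRDTool terms, the RPN conditional looks something like this. This is a sketch, not the production graph definition; the filename, DS names, and colors are made up, and I am assuming a 600 second step with a MIN consolidation function stored for the uptime (the actual RRDTool keyword for a TIC mark is TICK):

```
rrdtool graph session.png \
  DEF:uptime=customer.rrd:session_uptime:MIN \
  CDEF:dropped=uptime,600,LT \
  TICK:dropped#FF0000:1.0:"session restart" \
  DEF:rssi=customer.rrd:rssi:AVERAGE \
  LINE2:rssi#00CC00:"RSSI (dBm)"
```

The CDEF is pure RPN: push uptime, push 600, and LT leaves a 1 wherever the session uptime dipped below one polling interval. TICK then draws a mark for every non-zero value.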

But wait! What if the customer rebooted their CPE? That would still show up as a session restart; can we show if that happens? We sure can. The CPEs also report their device uptime in seconds. If it ever goes backwards, we can safely assume the device restarted for some reason. So, if the radio drops too much signal and loses its connection, we will have an indicator, and if that loss of session is caused by the device losing power, we will have another indicator. This simple idea, plus a whole host of additional data, looks like this.

I am sorry, I do not have an example of a CPE losing power, it would show up as a purple TIC mark at the bottom of the graph.

Wow! Look at all that information! This customer’s service isn’t just losing some power, it is dropping off the network completely! Every single one of those red marks at the top indicates that the radio had a session uptime of less than 600 seconds, our polling interval. I now know that the events on the 24th were also service affecting, but not those on the 25th. They looked about the same, but now we can see plain as day evidence to the contrary. I also have some nice averages, min and max values, as well as the most recent. The graph itself is now completely portable. With MRTG, I had to surround the image with text showing the device name, image creation time, and a legend. This is all now contained in the image, so it can be saved elsewhere, dropped into a ticketing system, or even sent to the client, with an explanation about how we plan to resolve the issue.
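Both indicators come down to two comparisons per poll. Here is a sketch in Python; the real poller was perl, and these names are invented:

```python
POLL_INTERVAL = 600  # seconds between SNMP polls

def tic_events(prev_device_uptime, device_uptime, session_uptime):
    """Classify one poll into the TIC-mark events described above."""
    events = []
    # Session uptime below one polling interval: the SM re-registered
    # with the AP since we last looked -> red TIC at the top.
    if session_uptime < POLL_INTERVAL:
        events.append("session-restart")
    # Device uptime went backwards: the CPE itself rebooted, likely a
    # power event -> purple TIC at the bottom.
    if prev_device_uptime is not None and device_uptime < prev_device_uptime:
        events.append("device-reboot")
    return events
```

A session restart without a device reboot points at RF trouble; both together point at the customer’s power strip.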

This pretty much proved my point that there was a better way to view this data, and we migrated off of MRTG completely once we had a few months of trend data collected. We eventually built a massive ZFS file server to get this data to disk as fast as we could collect it. I wrote a custom SNMP poller that could query over 10,000 devices for nine or more data points, in well under two minutes. Honestly, that part was super easy; getting all that data to disk took some doing, as this was all spinning rust. It was more like ~20 seconds to collect (threaded code is awesome, yo), and another 70 to finish committing it to disk.

In the months that followed, Cambium released a new kind of OFDM based Subscriber Module that was dual polarity. Sadly, I do not have an example of those graphs, but we went from graphing just RSSI and SNR to graphing each value for both vertical and horizontal polarities. To keep the graphs easy to see, we printed the absolute values for the new data sources at the bottom like we had before. But for the visual side of things, we made the existing lines a bit thicker, and represented the new polarity as thin lines over top of them in a color that stood out well. In a perfect deployment, they lined up very well. In a less than stellar installation, it was very easy to see when a single polarity was doing something wonky.

Bonus Image!

Early Canopy 450 SM firmware did not expose the horizontal polarity RSSI via SNMP, just the SNR, but you get the idea.

So, I looked around, and found another image for you that I just love. Same concept as before: choppy service, even choppier than just the gaps in RSSI would indicate. I mean, just look at all that red! What makes this one interesting is that I remember it quite well (it was a friend’s house). The drastic change, where both RSSI and SNR got worse but stability improved, perfectly illustrates my point: more information is better.

Stay tuned for Part II of this post, where I will show off traffic analysis.

Junos: Quick and Dirty Starter

Note: This is not done, nor is it spell/grammar checked, publishing anyways to give my friend some starting material.

Thanks to the kindness of a former coworker of mine, my buddy daemoneye has been graced with some decommissioned Juniper gear. This is a guide to get him started; if you are not him but benefit from it anyway, great.

I’ll take this moment to state that the new WordPress UI is awful.

Modes of Operation

Junos has two, well maybe three, modes of operation: command, config, and UNIX shell. If you log in as any non-root user, you will be dropped into command mode. Logging in as root drops you directly into the UNIX shell. To gain access to configuration mode, you type configure or edit; either works. Typing exit from configuration mode will return you to command mode, unless you have navigated down into the config hierarchy with the edit command, more on that in a bit. To enter command mode from a UNIX shell, use the cli command. Here are the basic transitions.
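Roughly, as a transcript (the hostname is made up; % is the shell prompt, > is command mode, # is configuration mode):

```
root@ex3200% cli            <- shell to command mode
root@ex3200> configure      <- command mode to configuration mode
Entering configuration mode

[edit]
root@ex3200# exit           <- configuration mode back to command mode
Exiting configuration mode

root@ex3200> exit           <- command mode back to the shell
root@ex3200%
```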

Now is as good a time as any to say that you can run any command mode operation from configuration mode by prefacing it with run.

Configuration Hierarchy

While rummaging around in the configuration, you may get tired of typing the full set statements all the time; you can zero in on a particular section of the config with the edit command. This is very much like using cd to navigate a filesystem. Let’s create a VLAN trunk port with a few VLANs assigned to it. First, let’s create VLAN 5.
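Something like the following; the interface, descriptions, and address are made-up examples:

```
[edit]
user@ex3200# set interfaces fe-0/0/3 description "trunk port example"
user@ex3200# set interfaces fe-0/0/3 vlan-tagging
user@ex3200# set interfaces fe-0/0/3 unit 5 description "customer vlan 5"
user@ex3200# set interfaces fe-0/0/3 unit 5 vlan-id 5
user@ex3200# set interfaces fe-0/0/3 unit 5 family inet address 192.0.2.1/24
```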

Yai, that was an awful lot to type. Adding an additional VLAN will be nearly as long; we can exclude the first description, but it is still pretty heinous.
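Again with made-up values, the second VLAN looks like this:

```
[edit]
user@ex3200# set interfaces fe-0/0/3 unit 10 description "customer vlan 10"
user@ex3200# set interfaces fe-0/0/3 unit 10 vlan-id 10
user@ex3200# set interfaces fe-0/0/3 unit 10 family inet address 198.51.100.1/24
```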

Let’s shorten that a bit for the third VLAN.
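Using edit to drop down a level first (values made up, as before):

```
[edit]
user@ex3200# edit interfaces fe-0/0/3

[edit interfaces fe-0/0/3]
user@ex3200# set unit 15 description "customer vlan 15"
user@ex3200# set unit 15 vlan-id 15
user@ex3200# set unit 15 family inet address 203.0.113.1/24
user@ex3200# exit

[edit]
user@ex3200#
```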

When you know you’ll be issuing multiple commands in the same basic area, you can save yourself a bit of typing with this little feature. To escape back to the top of the hierarchy, simply type exit. If you issued multiple edit commands, you will need to use exit for each one.

Interfaces and Families

Junos, like most routers, separates physical and logical interfaces; that is what all this unit stuff is about. fe-0/0/3 is the physical interface; unit 5 is a logical interface, assigned to VLAN 5 in our above example. Unit numbers do not have to match VLAN IDs, but not matching them will drive you insane. Unit 0 is the only acceptable unit for a port in L2 mode, and is also the only allowed unit for a port that has no VLAN tagging defined.

Underneath units is where you define a family; there are many options, most of which are beyond the scope of this document. The two that matter are inet and ethernet-switching. Okay fine, for all the IPv6 fanatics, I will also briefly mention inet6. On some platforms, some physical interfaces can support logical interfaces in multiple modes. The MX series can, for example, have logical interfaces on the same port in both L3 and L2 modes, but don’t worry about that for now.

To put a port in Layer 3 mode, define at least one logical unit with family inet, just like our example above. To put a port into L2 mode, set unit 0 to family ethernet-switching and define VLAN membership and behavior.
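An L2 access port might look like this, using the older port-mode syntax on an EX; the VLAN name and interface are hypothetical:

```
[edit]
user@ex3200# set vlans customers vlan-id 5
user@ex3200# set interfaces ge-0/0/5 unit 0 family ethernet-switching port-mode access
user@ex3200# set interfaces ge-0/0/5 unit 0 family ethernet-switching vlan members customers
```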

Note that in later versions of Junos, port-mode has been changed to interface-mode.

A single port in access mode for one VLAN isn’t all that useful; you will want to define multiple port members, or possibly add some layer 3 services to a VLAN. To make a Junos device respond at L3 on a VLAN, you must define a virtual VLAN interface (later versions of Junos have changed this to irb), and associate it with a VLAN.
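Hypothetically, for VLAN 15 (the VLAN name and address are invented):

```
[edit]
user@ex3200# set vlans customers15 vlan-id 15
user@ex3200# set vlans customers15 l3-interface vlan.15
user@ex3200# set interfaces vlan unit 15 family inet address 203.0.113.129/25
```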

Now is a good time to point out that vlan.15 is basically shorthand for vlan unit 15; you can use it in set statements interchangeably, and anything in the config that has to reference a logical interface will use this shorthand.

More To Come

Tired of typing, gonna play Skyrim for a while. Final version will get twittered.

That MSTP From Hell Story

Yet another story I tell all the time at hackercons, now in full textual glory.

As many of you know, I spent a good chunk of time working for a WISP. That is Wireless Internet Service Provider for you non technical civilians. I used to refer to this org either by name, or as $weMicrowavePackets. Get it? We used microwaves to send packet data? It’s funny, it’s a funny joke, laugh. Anyways, one of the many, many, many reasons I left was my sheer disgust at just how bad our service had gotten in my final years. We were so completely oversubscribed. The customers complained to the customer service group constantly, who in turn came to me, the senior network engineer, for a solution. There was none, we were oversubscribed, simple fact. We were oversubscribed, and I did not have the authority nor the budget to do anything about it. After I left, the name $weMicrowavePackets died, replaced by the far less flattering moniker $weDropPackets. Believe you me, in the last years I was there, we did more than our fair share of it.

The wireless network was essentially divided into two pieces: East and West. They were connected by way of a single 1.7 gbit wireless link, and later, a ring of mostly 10 gbit fiber was added. Sadly, one expensive as hell 1 gbit loop meant the East side of the network only had a theoretical max of 2.7 gbit bandwidth to the outside world. We had two 10 gbit upstream circuits, as well as 10 gbit to a CDN aggregation network that included Netflix, but none of that is where we had bottlenecks.

This meant roughly 50% of our customers had access to roughly 12.5% of our available bandwidth. In short, the entire eastern half of the network was oversubscribed right at its headend. This had the effect of making it far easier to deal with, despite generating just as many customer complaints as its western sister net. None of the links leading up to the headend, save the penultimate hops, were generally oversaturated. All the traffic had one way out, and every path to that egress sucked because the egress itself sucked. So, at least it sucked consistently. This actually made my life significantly easier.

Contrast this with wireless zone west, which had 10 gbit fiber to 20 gbit of available uplink. It should go without saying at this point that price, not utility, determined everything.

Balancing L3 traffic between these two zones and our upstream providers was also a lot of fun, we had a /19 and a /22 of public address space. A lot of that was announced as /23s in various states of ASN prepend, and was changed on a fairly regular basis. That madness is another entire rant unto itself.

Both sides of the network had evolved organically out of a significantly smaller network of the same core design. The earliest stages worked reasonably well, but they did not scale. In the earlier era, east and west were actually one large mesh. As we added new tower sites, the scale issue came to a head. I had advocated a switch from MSTP (Multiple Spanning Tree Protocol) to MPLS/VPLS very early on, but was consistently shot down.

Side note: Apparently, to this day, MPLS is considered some kind of Juniper/Cisco propaganda at $weDropPackets. They believe it doesn’t actually scale. I have since worked for networks 10 times the size of $weDropPackets that were near 100% MPLS; trust me, it scales just fine. You can thank a disparaging 2001-ish NANOG comment by Randy Bush for the owner’s attitude toward MPLS. It isn’t like technologies mature over time.

So the network lived on in two pieces, as a near unmanageable MSTP mess. A real network engineer will tell you that a backup link has to be able to absorb all the traffic from the primary, or it isn’t a very good backup. An owner/manager concerned with the bottom line will insist that all potential links should run balls to the wall, because, “stuff rarely breaks and traffic is money!” Unfortunately, near as I can tell, MSTP was created for managers with this exact mindset. If you are so oversubscribed that you need to split up your layer two paths, there’s a damn good possibility you have a massive problem.

It wasn’t my design, it wasn’t what I wanted, but it was my problem. I had to keep this thing running. My name was Sisyphus and this was my boulder. I was, on paper, the senior network administrator. In reality, I was that so long as my boss hadn’t had a bad day. If he was in a foul mood, all bets were off. He became a micromanaging asshat who would make undocumented changes, ignore change controls, and say “Fuck” an awful lot. Then the undocumented mess he had created would piss him off because our documentation sucked, and the cycle would begin again. Keeping him in a less than bad mood actually became a network stability issue toward the end. Not a pretty picture is it? This was my job, seven days a week. The Internet never sleeps.

So, the western wifi zone. Sixteen major tower sites, and a dozen or so minor ones, and one, singular, way, out. Major sites had links to more than just one other tower site. Minor towers were stubs, they had one uplink to a major site. The end result was a “core” network of 20 or so wireless links, between 16 sites, and all traffic going toward, and coming from, a single headend tower.

This was all managed via MSTP, it was all layer 2, strictly Ethernet.

Let that sink in. Link speeds between 35 mbit and 700 mbit. Sixteen sites. Twenty something major connections between them. All MSTP. No end in sight, and the owner thought this was just fine. My job was to keep it alive. My solution was horrifying, but I stand by it as the only reasonable course of action left to me.

From memory, I have pieced together what I can remember of this network’s layout; I cannot seem to recall two of the towers, but this is pretty accurate for the 14 I can:

I’m missing a few towers and links, but that should illustrate the horror that was the western network. And yes, LU to LE had two links. Many of these links are actually dual wireless in a link aggregation, but this particular one was of two wildly different speeds. That doesn’t aggregate well at all, so they were handled as separate entities.

Also, this is not to any scale, these links existed mostly as geography dictated. Graphviz just built its own layout.

We had already deployed MSTP in the form of two instances, plus the common. When we split the network in half, that became four instances, plus two commons. That worked for a while.

Then, the over-subscription got really bad.

The western wifi zone effectively had, at least at the headend, 10gbit of bandwidth for approximately 50% of the customer load. But the links leading into the headend, numerous as they were, totaled no more than four gigabit of capacity. To add to the hell, none of the links were of the same capacity, nor did we have any way of dealing with overloaded links in an automated way. Remember when I said I had advocated MPLS? Now I was outright begging for it. I even built the core network around it, at least in name. Switch management networks (layer 3 switches whenever I won that argument) were all OSPF connected, and named things like “WifiWest-mpls-105”: WifiWest was the network label, mpls was the predominant purpose of the network, and 105 was the VLAN tag. I was building the capability to switch to MPLS in multiple stages. Network tests indicated we could deliver (capacity issues aside) line rate L2 services between wireless east and west. MPLS was never deployed; we instead deployed a “carrier grade Ethernet” solution that was awful on many, many levels.

We also had a pile of aging Cisco L2 switches, some of which did not speak industry standard MSTP; they spoke only Cisco’s pre-standard implementation, not the spec. We had become, under my direction, a Juniper shop. I will not apologize, I love Juniper EX switches. But this meant that we often had to create a buffer between tower sites with Juniper gear and tower sites with legacy Cisco switches.

The Juniper EX3200 was the standard, if-I-got-my-way switch, later replaced by the EX3300. The legacy switch was the Cisco 2955. The 2955 only speaks Cisco’s pre-standard MSTP implementation. The Juniper switches only spoke the industry standard. So, migrating between the two required the introduction of an intermediary. We chose the Cisco 2960s series. The 2960s will speak both; it will exchange BPDUs with “pre-standard” 2955s, as well as standard compliant, modern switches. This meant we literally had to place a 2960 in between any Juniper, and any legacy Cisco deployment. A 2955 adjacent to a Juniper switch will not communicate MSTP frames…… at best it will be reasonably RSTP sane.

This would have been fine, except……. we kept bringing 2955s back into service. As I told one vendor when they inquired about our switch life cycle….. “we run a switch until the smoke comes out, then we go out with a soldering iron, and put the smoke back in, and we run it some more!” The 2955 series hit end of life in early 2013, we were still deploying them in May 2016….. the month I left.

Reason number god only knows I left this job…… I regularly had to point out that we could not safely deploy a spare 2955 at new site X, because it would literally break the network. I was always told I was being difficult or accused of wanting more shiny new toys.

The culmination of hell was actually managing individual links of the western core. Two MSTP instances had become four, two east, and two west. After several colossal failures due to oversubscribed links, the two western instances became nine. Yes, nine MSTP instances.

The common instance was relegated to management traffic only at first, until reaching congestion-impacted switches via SSH became an issue, and management was broken up into the multiple MSTP instances. Eight of the nine customer zones were similar; they all had the same goal: get traffic to the only way out, the headend of wifi west. The final instance was actually an emergency services related VLAN, and fortunately had a different ultimate destination than the rest of the customer traffic. Unfortunately, that traffic also carried an urgency, often over antiquated equipment that had no real ability to prioritize L2 data, certainly not emergency services’ traffic. But we did consider them a higher priority. To make matters worse, the emergency services were using old analog radios, and forcing them to work over IP in a way that the manufacturer considered a very bad idea. It broke all the time, and we were always blamed.

At this point, I was literally adjusting metrics on a daily basis……. live. For those of you not in the network world, spanning tree metrics are purely layer two. This means useful tools like traceroute do not exist there. Traceroute exists at layer three, a layer which is, quite frankly, not terribly difficult to diagnose. Layer two issues, by contrast, are very hard to visualize, and the tools to assist you are far, far less common.

MSTP config changes are not easy, and they come in two flavors. The first is VLAN membership in instances. Changing VLAN memberships is insanely difficult, as you have to change every switch in the same MSTP domain. So, either make it all happen at once, or carefully plan how you will break the network and then put it back together in a new form. Even if you have kickass automation, this almost universally requires a 3am maintenance window.

The second is per link metric changes; one can change link costs on a single switch without technically causing a meltdown. Changing a single metric on a single instance on a single link at one end of the network could potentially send 500+ mbit of traffic thundering in a very new direction. Any sane IT admin would still relegate this to a 3am maintenance window. I did not have the luxury of sanity. I started my prep at 3pm every day, and took changes live before 4:30pm.
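On the Juniper side, a single metric change is a one-liner. A hedged sketch of the sort of thing I was doing every afternoon; the instance number, interface, and cost are invented:

```
[edit]
user@ex3200# set protocols mstp msti 3 interface ge-0/0/1 cost 4000
user@ex3200# commit confirmed 5
```

commit confirmed is the safety net: if the change cuts you off from the switch and you cannot confirm it within five minutes, the switch rolls the change back on its own.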

In my last year at $weDropPackets, adjusting metrics became a daily admin task, my task. I was overworked, nothing was ever good enough, and despite my protests, the network alternated between “just fine” and “the way it’s gotta be.” We were great, we were the best WISP ever, we could do no wrong.

The final numbers speak for themselves.

  • 1000+ square miles
  • 16 major switches
  • 20+ minor switches
  • 9 MSTP instances (and the common zone)
  • one way out for 99.95% of customer traffic.

In the end, that one way out is what killed the network on a regular basis. All traffic had to reach the headend by any means necessary. A failure to balance the daily traffic on all links would result in dropped BPDU frames somewhere. The ensuing flood of traffic over a new switch path would cause more dropped BPDU frames, and a cascade failure was the end result. Network stability became a function of my ability to think on my feet, and predict the future. Starting at 3:00pm every day, seven days a week, I loaded core network graphs, looked for the overloaded sections, and adjusted MSTP metrics…… LIVE. It was basically a question of which sections of the network were consuming the most Netflix after school that day. It changed all the time.

I did this for over a year, and I became extremely good at it. I had to be. Because when I failed to balance the daily traffic, the cascade failure happened. It would take over 50% of our paying customers offline for an hour or more, and guaranteed I would get my ass chewed out at the next staff meeting. I proposed many solutions, and yes, they all cost money, and yes, they were all rejected. We had no money, and when we did, the network rarely saw it. Yes, our primary product was our network, and it was held together with duct tape, blood, and tears.

I am ashamed to admit, those last points were my driving force for the last year I was at $weDropPackets. Everything I did was predicated on that low level of human need. I wanted to not get yelled at, that is literally all I wanted anymore. I had recently developed Meniere’s disorder, and stress was consistently triggering vertigo spells. I would wake up Monday morning, knowing I faced yet another scream out session before the senior staff, and immediately go into vertigo. In my final year, I burned through all my sick time, and nearly half my PTO. When I finally turned in my 30 days’ notice, friends started noticing a difference immediately. The words that guaranteed, more than anything else, that I would not reconsider leaving were from my wife: “You have been calmer, happier, and more affectionate in the last two weeks than you have been in the last two years. I feel like I got my husband back.”

No job is worth your health and happiness, not even for a funny horror story like this one.

Authentication Factors For The Non-Technical

I have explained multi-factor authentication (MFA) to several people now, most of them from my non-technical friends pile. Nearly all of them have requested additional information. I am either very convincing, or the world is indeed on fire. As such, I have decided to write this blog post. Hopefully someone will find it useful.

First things first, a disclaimer. I am not a security expert. I consider myself a security adept, an aficionado, or perhaps just a systems and network junkie who happens to care about security. There are much more in depth articles on this very subject, most of them fairly technical. My goal is to keep this post accessible to the layperson. I would not mind if a few of my more security-as-a-day-job friends gave me their two cents on the statements I am about to make.

Here we go.

The first question a lay human may ask is: why does authentication even exist? What authentication boils down to is quite simple: it exists so that a computer and/or network resource can determine who you are. Based on that single fact, it can then perform the later stages of the entire login process. Namely, authorization: what resources are mine, and what actions am I allowed to perform? And accounting: what did I do while I was logged in? These three things are often collectively referred to as AAA. To ask what authentication is may sound like a silly question to many, but I have several clients who have worked very hard to eliminate authentication layers, usually because they only see it as a burden. They do not see, or have not been shown, the advantages that proper authentication, when combined with authorization and accounting, can bring. Security be damned, proper AAA can actually make your enterprise work environment flow better. Without going too far off topic: done right, AAA can ensure that the right users can get to the right resources, meaning they can do their jobs effectively.

Dear Infosec peeps, I know, I am really broad stroking it here, as well as taking some liberties with some terminology. My goal is to explain the concepts and advantages, not the technical details.

What does this mean? When a computer or network asset knows who is requesting resources, it can deliver things like proper files, correct bookmarks for your web browser, grant access privileges for protected resources, your favorite songs, and even display your favorite background image. Everything about that computer service that relates to you, is tied to your digital identity, and the computer learns that identity via some authentication process. Some of this information is very innocent, like a background image. But a great deal of it is, to some degree, sensitive. A supervisor planning to fire Joe in accounting doesn’t want Joe to be able to read his emails. Authentication exists so that Joe can only prove to the computer that he is Joe, and not whomever he wishes to get some dirt on this week. Joe is a dick, and deserves to be fired. No, not you my actual friend Joe.

This of course leads to the question, how does a computer know who you are? The short answer is, it really doesn’t. It instead asks you to identify yourself, and then prove it. This is called a challenge response. Declaring your identity to a computer is fairly simple; the most common method by far is to present a username. The username is your identity, not part of the proof of that identity. A username should never be considered a secret. I am nuintari nearly everywhere I go, with very few exceptions. This is hardly classified knowledge, and if it were, it would be a shitty example at that. Plenty of places auto assign numbers as account identifiers, but account identifiers are frequently not easily changed. If I needed nuintari to remain a secret, I am pretty sure I am fucked by this point.

Once you have proposed an identity to a computing resource, there are three challenges the computer can present for you to prove you are indeed who you say you are.

  1. Show me something you know (That I also know).
  2. Show me something you have (That I know you have).
  3. Show me something you are (That I know about you).

This is often stated in security circles as, “something you know, something you have, and something you are.” These are considered the three vectors of proper authentication. Each one has advantages and disadvantages. For the sake of flow in this article, I am going to address them in reverse order.

Proving something that you are is by far the hardest type to scale to a network, or the internet. Something you are is often also called biometrics: something about your physical self that uniquely identifies you as you. This means things like finger/hand prints, eye scans, voice stress analysis, and urine samples. Some of those were made up…… The reason these are hard is that they have to be distilled into something that can be sent over a network, understood by a computer, secured in its own fashion, and properly interpreted in such a way that a correct result is returned. All while maintaining that simply sending the correct digital message is not sufficient. Ergo, you have to prove you scanned your thumbprint, not just send a faked-out image of your thumbprint. This exists, sort of; it tends to be expensive. It is also the least well supported. I highly doubt Facebook will ever accept semen sample authentication. Wait…… Facebook would totally do that…… You heard it here first, folks! Suffice it to say, proving something you are is far more common in closed systems, where all endpoints are controlled by a single entity. An example would be a place that requires handprint access to open certain doors.

A significant point about biometrics, one that is often overlooked, is that they can never change. When was the last time you changed your fingerprints? This is both a strength and a weakness, and has to be evaluated when designing a security system. Broken record time: beyond the scope of this article.

Something you have is a little easier, and wonderfully, becoming far more common! This means tying a device to your account, and proving that it is in your possession at the time of the authentication request. The most common form end users see is the oh-so-common RSA token that provides a one time password in the form of a short numeric code. Newer methods involve Yubikeys, or FIDO compatible devices.

[Image: left, a Yubico Neo; right, a Feitian Multipass FIDO key]

Both devices present a response to the computer resource proving that you are who you say you are, because you had previously agreed to tie that specific device to your account, and to present it on demand for authentication. Yubikeys and FIDO devices work differently, but both resolve this challenge reasonably well. I have one of each for a variety of reasons, all beyond the scope of this article (I also have a backup in a fireproof safe, again, beyond article scope). I would be happy to do a rundown later, should someone want to read my ravings on that particular subject.

The most obvious reason to not like this is simple, you have to have a thing you carry around with you. We all carry wallets, key chains, and purses anyways, so get over yourself. Hardware tokens also come in a variety of qualities, and some can be cloned. Remember college? Those key fobs to get into the dorms after such and such an hour? Show of hands, how many of us had a fob that accessed _everything_? Oh, right, lack of a live studio audience….. trust me folks, it is a huge number of people. Basically, buyer beware, you probably get what you pay for.

Another form of proving something you have is to use your cell phone, or other mobile device. The best example of this is the Google Authenticator App. Like the physical keys, you tie your mobile device to the asset you wish to connect to, and will be presented with a periodically changing code to enter when authenticating. Of the three, this is by far the easiest and cheapest to get started with, but also the least secure. Cell phones get stolen, and are far, far easier to match to an account than a random key chain thing. Also, you will frequently access computer resources from the very device that is proving your identity to those same resources. Mind you, this is still far better than just passwords……

By far the most common is number 1, something you know. I am talking about passwords. The exchange is very simple:

Computer: Who are you?
Me: nuintari
Computer: Yeah? prove it with a password:
Me: myUncleBlowsGoats
Computer: Okay, I accept that you are nuintari, have fun!

The biggest problem with passwords is very plain to see: you all now know my super secret password. Passwords are just text, and can be shared. They can be sent via chat messages, or accidentally typed into the wrong (possibly malicious) website. They are easily leaked, easily shared, and, if bad enough, easily broken given enough time. Some websites store passwords in plain text, or hashed with a very broken algorithm. This means if someone pops their user database, they get your password too. This is why you should never re-use passwords, because the first thing a bad guy is going to do is try that lifted password on other services you may use. Not only do passwords suck, but we suck at making them up. I’ll spare you the details, XKCD said it best.
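As an aside, the plain text storage problem has been a solved problem for decades. Here is a minimal sketch, in Python with only the standard library, of what a sane service does instead; the iteration count and salt size here are illustrative, not a recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # A random, per-user salt means two users with the same password
    # get different hashes, and precomputed rainbow tables are useless.
    salt = salt if salt is not None else os.urandom(16)
    # PBKDF2 is deliberately slow, which hurts brute force attempts.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison, so response timing doesn't leak information.
    return hmac.compare_digest(candidate, digest)
```

The service stores only the salt and the digest. Pop that user database all you want, my actual password isn’t in it.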

So, what am I, the lay person, to do?

The single best way to prove your digital identity to a computer asset, and prevent anyone else from doing so, is to insist on at least two of the above methods. This is called Two Factor Authentication, or 2FA. The most common method is to take authentication factors 1 and 2, and combine them. The authentication conversation then becomes this:

Computer: Who are you?
Me: nuintari
Computer: Yeah? prove it with a password:
Me: myUncleBlowsGoats
Computer: Yeah, that is disgusting, show me your key.
Me: insert key, push button…..
*crazy crypto computer stuff happens*
Computer: Okay, I accept that you are nuintari, have fun!

This may not seem like much, but this is often the difference between a compromised account and nothing. A leaked password is still useless when presented with a second factor, and the end user can be informed that their password has been guessed correctly, many times, but the physical hardware device has never been presented. The end user can change passwords, secure in the knowledge that they dodged a bullet.

The flip side is also true, stealing someone’s hardware token is useless without also getting their password. Strong passwords are still very important in this scenario, because a lost hardware token can be revoked and replaced, so long as the password was not also lost, or just painfully obvious. This is also where it helps to have a backup token, stuffed inside a fireproof safe somewhere.

*ahem* Regarding passwords, street address, lower case, no spaces….. NOT A GOOD PASSWORD. Nor are phone numbers.
/me glares at clients $wePublish, $wePublishToo, and $weLawyerStuff…..

Using biometrics is also an option, but again, is the hardest to leverage at network scale. Using all three is also an option, making it three factor authentication, or 3FA. I have seen this done well before, it was impressive. As I stated earlier, biometrics work better in more closed systems, where the exchange of data can be more tightly controlled and trusted. This is hard to do on the public internet.

Some of you may be thinking to yourself that your bank does 2FA already. Sadly, you are likely mistaken; banks have some of the worst authentication systems for what should be extremely well protected assets. The most obvious is when they send you a code to your cell phone via a text message. This may seem like it is resolving the 2nd proof, but what it is actually doing is proving the 1st proof….. twice. You present a username, a password, and they send a code to your cell phone, which you also present. This used to be considered a valid form of the 2nd proof; it is no longer considered sufficient. This is now known as 2 Stage Authentication, or 2SA. You are still simply proving something you know, it just so happens that you only recently learned one of those two things. You haven’t actually tied the device to the account, you have tied something the device can learn to the account. This may seem like splitting hairs, but sadly, phone systems can be and are compromised. Intercepting that code can be accomplished by a variety of means. In short, you don’t need the device that receives the code to prove identity, you just need the code, and the code can be separated from the device.

This may lead you to cry: but what about the Google Authenticator App? It does the same thing! Actually, no. When you set up a new account within that app, a secret is constructed that identifies not only the account which will need the code, but also the device the code is coming from. That secret is used to generate the periodically changing one time passcode on display. In short, if you have everything in Google Auth, and get a new phone without backing up and transporting those secrets, you are going to have a bad time. The difference between the Google Auth App and simple SMS based verification is that the Google App does indeed prove, beyond a somewhat reasonable doubt, that the correct device was used to obtain the one time passcode. The code can still be separated from the device, but the task of doing so is significantly more difficult to accomplish.
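For the curious, the codes themselves are not Google magic; they are an open standard, HOTP/TOTP (RFC 4226/6238). A minimal Python sketch of how that periodically changing code is derived from the shared secret; the secret below is the RFC test value, not anything tied to a real account:

```python
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    # RFC 4226: HMAC-SHA1 over the counter, then "dynamic truncation"
    # picks 31 bits out of the digest to turn into a short decimal code.
    mac = hmac.new(secret, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, for_time=None, step=30):
    # RFC 6238: the counter is simply the current 30 second window.
    now = time.time() if for_time is None else for_time
    return hotp(secret, int(now // step))

# RFC 6238 test secret; real app setup QR codes carry the secret base32-encoded.
print(totp(b"12345678901234567890", for_time=59))  # prints "287082"
```

Both sides compute the same code from the shared secret and the clock, which is exactly why the secret living only on your phone is what ties the device to the account.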

This is not to say that SMS/2SA based verification is not worth doing. If it is the only option presented beyond passwords, you should very much take advantage of it. But ideally, the Google Authenticator app should be considered the base, bare minimum, and a physical hardware token, the gold standard.

In the end, I would strongly recommend everyone seriously look into 2FA. Google’s new Advanced Protection Program costs you the hardware, and a bit of time to adapt your habits. The devices you utilize for Google’s protection can be used for other services as well. Buy two hardware keys, authorize both of them with as many services as you can, then toss one into a fireproof safe. Congratulations, you do this much, and you are profoundly more secure than the average Tom, Dick, and Harry. You will find very quickly, that authenticating with the hardware device is very simple, and non-intrusive to your daily life. You’ve added five seconds to a task, and in exchange, reduced your potential ulcer count by a shitload.

Now, stop reusing passwords, and start using a secure password manager…… That is another article entirely.

Junos Groups Part I: Basics

On my many IT adventures, I see issues; one of the biggest is lack of network consistency. Network ports configured one way, others configured another, VLANs trunked to parts unknown, none alike, even if they share the same basic role.

Juniper Junos has a wonderful tool, one that seems incredibly underused, that largely resolves this: groups. Groups are awesome: they enforce consistency, they reduce typing, and they make configs shorter and easier to read. Here is a quick example of how to use them to manage VLANs on a switch.

Groups are basically templates of configuration settings that can be layered on top of any section of the Junos configuration with the apply-groups statement. There is some globbing support, allowing fine-grained control over when groups are applied. For more on that, check this Juniper article.

Groups essentially mirror any subsection of the Junos configuration stanza, with matching patterns in place of variable data, such as interface names, ASNs, OSPF areas, etc. Here are two groups that apply Ethernet settings, namely member VLANs and port mode, to any interface they are applied to.
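Something along these lines; the group names and VLAN names here are made up for illustration:

```
groups {
    ethernet-access {
        interfaces {
            <*> {
                unit 0 {
                    family ethernet-switching {
                        port-mode access;
                        vlan {
                            members office;
                        }
                    }
                }
            }
        }
    }
    ethernet-trunk {
        interfaces {
            <*> {
                unit 0 {
                    family ethernet-switching {
                        port-mode trunk;
                        vlan {
                            members [ office voip servers ];
                        }
                    }
                }
            }
        }
    }
}
```

The `<*>` wildcard is what lets a single group match whatever interface it gets applied to.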


To put these groups to use, we attach them directly to a few interfaces with the apply-groups statement:
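For example; the group and interface names here are illustrative:

```
interfaces {
    ge-0/0/10 {
        apply-groups ethernet-access;
    }
    ge-0/0/23 {
        apply-groups ethernet-trunk;
    }
}
```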


In order to see and verify the applied group settings, we pipe the show configuration command to display inheritance:
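The output looks something along these lines; inherited statements get flagged with comments telling you exactly which group they came from (output trimmed, names illustrative):

```
nuintari@switch> show configuration interfaces ge-0/0/10 | display inheritance
unit 0 {
    family ethernet-switching {
        ##
        ## 'access' was inherited from group 'ethernet-access'
        ##
        port-mode access;
    }
}
```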


Using groups, one can greatly simplify the configuration of a Junos device, while at the same time enforcing consistency. Groups are not limited to interfaces, and can be applied to virtually any section of the Junos configuration. In the next part of this series, I will demonstrate some more complex examples. Please check back soon!

The Lawnmower/Stereo/Bikini/Chicken/Sig Sauer/Beer Incident

I have told this story many times over the years, but never actually tried to write it down.

Framing Elements

A long time ago, before portable compute devices with more horsepower than anyone ever needed were commonplace, and long before the first Bluetooth speakers, there was a man with a dream. Or maybe it was a woman, I suppose you really need to figure out what I was wearing the day I came up with this idea, and what you think of gender identity issues, and whatever, it was me, okay? I had the dream. It wasn’t a grand dream, it will never change the world, but it was fun, and it was mine!

The dream was a riding lawnmower with a kick ass stereo system, that had wireless, and streamed MP3s from my home file server, and operated completely hands free.

The hands free part was either a design goal, or just me admitting that I had no desire to get X working on a portable LCD screen, much less pay for such a beast in 2003, back when computer hardware was far less disposable, and rarely cheap. At this point, I think I still had 486s in service at home, and at $dayJob. In fact I know I did, the original was an AMD 486 piece of shit from hell.

The lawn mower was, and is to this day, a Toro Z4200 Timecutter. A zero turn radius, 42 inch deck, gas guzzling monster, that cuts through my 1 acre property in less than an hour……. I call her Rachel.

The Stereo

So, with the design goals in mind, prepare to be amazed with my oh so awesome solution. The stereo itself was a pair of cheap speakers, requisitioned from my first dual cassette tape deck, circa 1989. The compute workhorse was a Soekris Engineering Net4501, with a MiniPCI 802.11b wireless card, and a 3.3v PCI sound card.

For those that remember such hell, the Soekris boards were notorious for not actually providing 3.3v on their single PCI slot; they were also notoriously interrupt craaaazy. Finding a sound card that didn’t wig out at being under-powered…. and still worked under OpenBSD, took some doing. Yes folks, I was over Linux even back in those days. Linux sucks. I wish I could even remember the make and model, because I went through hell, as did my wallet, to find such a beast. Trust me, it exists, it was not easy to find.

A little electrical glue provided the rest; a DC-to-DC step down converter got me the power I needed from the mower battery. My childhood as the son of a journeyman electrician has been good for a few things in my life. Operation was simple: a quick release wire snap provided the connectivity to the battery. It was technically possible to run the stereo without the mower running, but why would I ever do that? The modus operandi was to connect the power, go inside, grab a beer, come out, fire up Rachel.

Meanwhile, OpenBSD booted, hopped on the household wireless, mounted $fileServer0:/home/nuintari/media/tunes via NFS, read only, of course, and grabbed a playlist. From there it was just mpg123 (or was it mpg321? I forget).  Tunes soon started flying out the cheap cassette deck speakers, and yours truly would proceed to enjoy a relaxing hour or so of yard work and beer.

Rock and Roll!

Pre, The Incident

My wife is afraid of birds, royally terrified of birds. Have you ever seen how I react to spiders? Imagine that, but with birds, it is that level of terror. Actually it isn’t, my wife isn’t the bloomin’ coward I am in the face of her fears. But, she is not a fan of them to say the least.

We live in the country, or….. maybe right on the edge of the country. As country as Northwest Ohio ever gets is the point. Country enough that the neighbors raise chickens. Chickens that are mostly free to wander, and return to the hen house at the end of the day. How they weren’t all eaten by foxes, I will never know. But, they did seem to have a thing for my lilac bushes. They would wander across the street, and nest in my lilacs. My wife hated this, she’d be out in the yard, and a chicken would appear out of nowhere, and my young, young, gorgeous lady would lose her shit and run inside. I would of course, be dispatched, usually with some kind of makeshift polearm, to shoo them away.

Occasionally, I would notice them while mowing the lawn. Rachel has some oomph behind her, and if you kill the blades, and pull the deck all the way up, you can move at a solid 12+ MPH….. with the wind. Fast enough to chase chickens. Not fast enough to catch them, not that I ever wanted to, but fast enough, and loud enough, to chase them away. Also, good for a solid laugh.

We were the new couple in the neighborhood, and the farm across the street was our only real neighbor. Turns out, they had a daughter graduating high school. We were invited to the party, which we wholeheartedly accepted on the assumption that there was likely to be beer. And, I guess we should get to know the neighbors or something.

Over the course of a fine afternoon, the father approached me, and informed me that, “I see you chasing my chickens, they give you any trouble, just shoot em, they’re good eating!”

I should point out that I live NORTH of US-6….. which anyone from Ohio will recognize as the actual Mason Dixon line of demarcation between civilization and Hicksville, USA. Someone will hate this bit, but I don’t care. South of Six Hicks are a thing, and we were a solid 40 miles North of their territory, spooky.

I should note that this phenomenon exists only in Ohio. Once you reach Kentucky, the hick meter resets back to a sane level, people are way nicer, and supremely less racist. South Ohio sucks ass.

Now, I have zero interest in shooting a chicken. For starters, I own a few guns, none of them suitable for avians. Can you imagine actually hitting a chicken with a 12 gauge? Or a 7.62 SKS? It’d be feathers and a fine mist. But, even assuming I killed it, and left it intact, who wants to clean it? My old man took me hunting a few times, cleaning the carcass is the nasty part I never want to experience again.

The Incident

This part is actually pretty short, the lead up is what makes the story funny.

The stage is set: Nuintari, the man with a dream, is riding a hacked up, stereo laden lawn mower, listening to classic thunder, and of course, I have a beer, and I am wearing daisy duke shorts and a bikini top. It is either truly awesome, or truly awful, to live next door to me, even if the houses are fairly far apart.

A chicken waddles over the street, through my side yard, and right into my lilac bush.

It should be noted that at this time in my life, I had come into possession of two key items relevant to this story: a Sig Sauer P229 9mm handgun, and a pile of 9mm blanks. Remember, I don’t actually want to kill the chicken, I just wanna fuck with it. Also, I am drinking.

I know, I know, I know, I shouldn’t mix beer and guns….. It hasn’t happened since…… that I can recall.

So, naturally, inside I go, grab the gun, a fresh beer (I know, I know), and load the weapon with blanks. Upon returning to Rachel, the stereo is now beginning to play Wagner’s Ride of The Valkyries. It was so on. Deck up, blades off, LETS GET THOSE CHICKENS!

The next few minutes are basically me, in a bikini top and daisy duke shorts, driving a zero turn radius mower, with a beer in one hand, a blanks-loaded 9mm handgun in the other, rocking out to classical German musical great Wagner, chasing a chicken around my yard, occasionally taking potshots at it with the blanks…… and of course, laughing like an idiot the entire time.

At one point, I caught a look from the farmer across the street, who was basically, as the kids say, “losing his shit.”

The Legacy

The stereo blew up. A victim of a replacement battery, and operator failure to observe reversed poles….. oops. It has since been replaced with a smart phone, a bluetooth headset, and Pandora. Not as sexy, but it works. The neighbor moved away, the chickens are all gone, the farm is largely empty these days, some days, I can chase a Killdeer around a bit, but it just isn’t the same. Killdeer fight back.

That Time an IT Emergency Made Me Sneeze Blood

Due to popular Twitter demand, you all apparently want to hear this tale. Warning, it really isn’t all that gruesome, but should probably serve as a cautionary fable for anyone who has decided to get into the magical world of consulting. I am also under an NDA, so the names have been changed to protect the grotesquely stupid, and I sadly, do not have any photos.

The Situation

This is a client I started working for about a year ago, mostly network stuff. They brought me in to rein in the insanity that is intrinsic to small <redacted> industry IT (hint: all IT sucks). One of the first things I did was whip out a label maker and label patch cables everywhere I could find them; this saved my butt in putting this all back together later.

This particular small shop had a single rack for their IT assets, tucked into a back store room. This rack had many, many, many issues. I guess it is time for a bullet list.

  • 23 inch rack, nothing in the rack was wider than 19 inches. So, multiple 2 inch spacers on each side, top to bottom.
  • Two post rack, plenty of stuff that really screams four post. At least it was all at the bottom.
  • Cheap, flimsy construction, this thing would wobble even without the 2 inch spacers.
  • Filled to the brim, stuffed.
  • Bolted to the floor, a badly poured concrete slab that had clearly been laid down in winter. Stomping your feet made dust appear.
  • A stiff breeze caused this rack to wobble in all directions.

Between the shaky rack, and the shitty foundation, it doesn’t take an idiot to realize that the bolts holding this thing down were slowly wiggling themselves free. I told them a year ago, this is going to fall, and it is going to suck. They dismissed my warnings. Oh, I should have walked then.


They call in the AM. “We are completely down, our rack of servers fell over!”

“Yup, lemme grab my drill and my crimpers, I’ll be right in.” I replied.

Coffee to go would have been appropriate, but I had a cup of traditional at home first. My E-rate doesn’t start until I arrive, and I warned them, I fucking warned them.

Also, I knew what I would have to do.

Sure enough, the rack had ripped the bolts straight out from the floor, and collapsed. One Dell something or other is not in good shape, as it took the brunt of the fall. The rest looks like it might be alive.

I tasked one of their underlings with testing cables; anything that cleared gigE/VoIP on the Paladin was re-usable, at least for now. I got to work on the rack itself. Fortunately, I only had to make six new cables by the end of this mess. No, the underling didn’t know how to do that, and I am not a teacher when shit is hitting the fan right after shit has hit the fan.

So, four big ass 3/4 inch bolt holes in the floor, blasted out to all hell and back like incels think happens to lady bits if they dare have sex with someone not them. Yes, this rack really needed something bigger, 1 1/4 would be a solid minimum, but, I don’t carry concrete bolts in my Network/Systems/Security IT kit. Shit, I don’t have those in my house. But I do own a drill that can eat concrete. Thank you very much DeWalt for making a beast of a monster that I can afford. Also, my years in WISP land left me with a collection of masonry bits. LETS DO THIS.

Relocate the rack a few feet over, and mark out my holes. “NUMBER ONE, ENGAGE!”

This is where the shitty foundation starts to matter. In addition to not typically carrying concrete lags in my standard IT kit, I don’t normally bring a hazmat mask. This concrete slab had clearly been poured in the winter. For those not familiar with construction, masonry, or physics: water freezes, and water is a critical component of concrete. When you pour concrete in sub-zero temperatures, you get some bad shit, like a lot of dust, uneven level, and an overall shit pour.

I spent the next forty five minutes creating dust storms in my face, drilling out four holes in shit concrete with my barely adequate DeWalt Doomhammer.

I inhaled a small quantity of dust, it sucked. Then I had to make six replacement cables, and trace out shitloads of stuff that had come loose, and test. I was there for just short of three hours. Maybe 2 hours, 40 minutes. We got it done, they didn’t lose an entire day. I’m good, yo.

The Aftermath

I felt like shit, I had clearly inhaled a great deal of dust. But the next morning…… Dear god. Sneezing up blood, repeatedly. That was not fun. I still feel like ass, my nose and throat are clearly irritated beyond belief.

The client has already contested my bill. My emergency rate is always in hour increments, rounded up, no exceptions.  This particular client has a signed contract stating this, so I will get my money. But, that isn’t the point. The three hours of E-rate have no chance of addressing any possible health complications I might encounter because of this mess. Yet, here they are trying to claim they only owe me for two and a half hours, not a solid three. Now, I find myself looking for a legal way to make them responsible for the hell that is my lungs right now.

The Moral

And the moral of this motherfuckah is, ladies make em……. no wait, that is Prince.

Don’t let a company fuck with your health, they will happily do so to get what they want. I am currently updating my contracts to include personal health and danger clauses.

Organizations will not look out for you, you have to make sure you are looking out for yourself. Do not make your health a lower priority than your dedication. It isn’t worth it.

The Iconic USMC Moment

Today is a significant day in history, an iconic day for the United States Marine Corps; the day the Marines took Mount Suribachi, and performed the now famous raising of the flag. Now, I know almost everyone at this point knows that the event was at least partially staged, but that is not the point. A lot of Marines died taking that mountain. By this time in 1945, support for the war back home was tenuous at best. A great photo, propaganda that it may be, was what the home front needed to revitalize support for continued warfare. Furthermore, a metric shitload of good US Marines died to make that staged photo happen. Today, it is emblematic of the Corps; one cannot imagine rough and tough Marines without eventually seeing this image in one’s mind. But I am not going to debate the merits of wartime propaganda; I was hoping to instill a bit of my historical knowledge on this subject.

Mount Suribachi sits at the southwesternmost corner of the island, at a point known as Point Tobiishi. Elements from the 3rd, 4th, and 5th Marine Divisions were landed at two beaches, on the southern and western edges of the island. As the prominent high point on the island, Mount Suribachi gave Japanese positions full view of both beaches, and the vast majority of the island. Marines were under artillery, mortar, and machine gun fire before they even hit the beaches, yet they pressed on.

Mount Suribachi is a honeycomb of caves, and the defenders took excellent advantage of this. Despite extended aerial bombardment by the US Army Air Corps, non-stop naval bombardment from the US 5th fleet, and close air support from Navy and Marine pilots, the enemy resisted, and held the peak for five days. All the time, devil dogs on the ground fought for every inch of land, under constant enemy fire.

Anyone who has ever seen an Iowa class battleship, or a B-24 Liberator, would have a hard time imagining how anything could survive the sheer onslaught of destructive force these weapons of war could bring to bear. Yet the Japanese defenders did exactly this, and continued to effectively wage war. Point Tobiishi wasn’t won with air power; it wasn’t won with artillery and naval gunfire support. It was won with tenacity. Marines, in the blood soaked volcanic ash, with Garands and grenades, fought for that key position. They did the job, they fought for their buddies, they fought for each other, and in the end, they reigned supreme.

The battle for Iwo Jima would rage on for another month, with US Marines engaging a well prepared, well entrenched, and very desperate enemy. The securing of Mount Suribachi meant that, in this hellish landscape, Marines fighting to secure the rest of the island had one less place where death could rain down upon them. We will never know how many lives were ultimately saved by the taking of that tiny piece of land. Staged as it may have been, the photo now immortalized in Arlington is a true reminder of the values of the Marine Corps. They fought, they fought for their country, they fought for each other, and they got the job done.

In the aftermath of the battle, a US Army Air Corps base was established so P-51 fighter pilots could launch escort missions alongside B-29 bomber missions over mainland Japan. Mustang escort was crucial to the saving of countless air crews, and the emergency landing point afforded by the airfields at Iwo Jima saved many more.

I am not a nationalist; nationalism is the sentiment that brought us conflicts like the second world war. The notion that might makes right, and that certain people are less valuable because of their ideology, religion, or the color of their skin, is poison to the peace loving people of this world. I find these notions repulsive, and anyone who uses any reason to justify such thoughts is equally abhorrent. I am, however, a patriot, and have the utmost respect for anyone who puts on the uniform in the defense of freedom, justice, and liberty. In the ashes of World War II, racial and national hatreds were eventually set aside, at least to some degree, and an important understanding was established between the former Allied and Axis powers. That is the true legacy of those that fought in WW2, on any side: they fought, they bled, and they died so we could, as a human race, realize that this cannot happen again.

Today marks the 73rd anniversary of the raising of the US Flag over Mount Suribachi. Sadly, not many of those that fought to make this happen are still with us. I invite you to honor them, as I do, in the solemn hope that one day there will be no need for Marines. But until that time comes, I am very glad that when the chips are down, there are men and women still willing to rise to the challenge.

Semper Fi.

Custom MMC Console for Active Directory Management of External Domains

Ugh, what a title…..

The Client

A client of mine is on the road to recovery. I have thus far taken them from about 1998, to roughly mid 2000s status in terms of IT practices. I like working for this client; they are a quirky bunch of people, and have managed to create one of the finest examples of wildly unkempt, organic IT growth I have ever seen. They have survived thus far by paying so called professionals to put out bush fires. They simply had no idea any other alternative existed. I have convinced them that IT doesn’t have to be so painful.

The Problem

It is time to roll out Active Directory. The vast majority of their machines are home versions of Windows, so they won’t be joining the domain any time soon, but we can at the very least bring some sanity to the file server environment. Right now, they have two file servers, and employees named Steve log in with usernames like Brittany, who hasn’t worked for the org in three years. No one knows how to change passwords, nor how to create new accounts. At the same time, I am rolling out useful internal tools such as a wiki, and a trouble ticketing system, all authenticating against AD/LDAP. Fewer passwords would be great here; this place is awash in a veritable sea of sticky notes.

A few of the employees are proficient enough that I can grant them the ability to manage basic AD functions, such as account creation and password resets. However, their machines cannot join the AD domain because they all run home editions of Windows. Sadly, that is not going to change for some time. Baby steps here, folks, baby steps. So, I need a way for them to authenticate against the AD domain, launch MMC, and retain saved settings for AD management.

The Solution

The first issue is that MMC requires an account with local admin privileges to even start. Firing it up locally presents us with the friendly UAC prompt. Fine, great. So, I snap in the AD controls; it gripes because I am not a member of a domain, so I tell it to change domain to my client’s (via a VPN, don’t panic, I’m not grotesquely stupid). I am informed that my username or password is incorrect. This is because MMC is running as the local privileged account, not one that was successfully authenticated against the remote AD domain. We can use runas to resolve this:
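The basic trick looks something like this (a sketch; the domain and account name are placeholders for your own, and the /netonly switch means the credentials are used only for remote connections, which is exactly what we want on a machine that isn’t domain-joined):

```powershell
# Launch MMC with credentials that apply only to network (domain) access.
# CLIENTDOMAIN\helpdesk-admin is a placeholder; runas prompts for the password.
runas /netonly /user:CLIENTDOMAIN\helpdesk-admin "mmc.exe"
```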

So, we can just make a bat or ps1 file, and have the user run that, right? Wrong!

Open a PowerShell prompt and try the runas command; it will fail. You will be informed that the operation requires privilege elevation. Start a PowerShell prompt as an administrator and try again; it will work fine.

But I want to make this into a button that a non-technical end user can click. I can train them how to change passwords; I will not be able to teach them command line anything. They’ll write it down, and then never do it, opting instead to call me every single time.

Okay, so I’ll just go into the shortcut settings, and tell it to run as Administrator. Except, Windows won’t let me check that option in this particular case. I have no idea why, and now that I have a workaround, I don’t much care.

First you need to prepare the MMC console, as one spawned naked isn’t useful to a non-technical user. Launch an administrative PowerShell prompt, run the little ditty from above, and snap in all the appropriate tools. Connect them all to the correct domains. Make sure you select all the check boxes that say, “Save this domain setting for the current console.” Then save the console settings somewhere reasonable. This makes sure your end user won’t have to do this work every time.

Now create a ps1 file that looks like this:
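The script needs to do two things: elevate itself (MMC won’t start otherwise), then hand runas the saved console. A minimal sketch, where the console path and domain account are placeholder names you would swap for your own:

```powershell
# launch-ad-console.ps1 -- a sketch; paths and account names are placeholders.
$consolePath = 'C:\Tools\ad-management.msc'
$domainUser  = 'CLIENTDOMAIN\helpdesk-admin'

# Relaunch this script elevated if we aren't already (this is what pops UAC).
$identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
$principal = New-Object Security.Principal.WindowsPrincipal($identity)
if (-not $principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) {
    Start-Process powershell.exe -Verb RunAs -ArgumentList "-NoProfile -File `"$PSCommandPath`""
    exit
}

# Prompts for the domain account's password, then opens the saved console.
runas /netonly /user:$domainUser "mmc.exe `"$consolePath`""
```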

Save that somewhere sane, create a shortcut somewhere that makes sense for the end user, and then be really nice and edit the friggin registry so that “Open” actually executes .ps1 scripts and “Edit” sends them to Notepad. Why this isn’t the default, I have no idea. Here is how you do that, btw:
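The default “Open” verb for .ps1 files points at Notepad; repointing it at powershell.exe looks roughly like this (a sketch; export the key as a backup before touching it, and note the ProgID and PowerShell path may differ across Windows versions):

```reg
Windows Registry Editor Version 5.00

; Make double-click / "Open" actually run the script instead of editing it
[HKEY_CLASSES_ROOT\Microsoft.PowerShellScript.1\Shell\Open\Command]
@="\"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" -NoProfile -File \"%1\""
```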

Then get all fancy, and change the icon of the shortcut, and there you have it, problem solved. Non-technical users can now be easily trained to reset passwords, and have a button they click on that lets them do so. Wheeee!


The Night After X-Mas

Twas the night after x-mas, post-consumerist boom,
Not a synapse was stirring, and this makes me fume.

Debt was amassed as gadgets were bought,
And the fury of installation would soon be wrought,

Upon our humble narrator, he fixes all things,
Like the stupidity of all the world’s ding a lings.

Like little Suzy’s iPod, it played no new jazz,
For she had not read the manual, what a stupid little spaz.

She lamented and cried, and let loose a shriek,
Without my new iPod, I can’t be unique!

She dashed to her phone, and my digits she dialed,
As I answered the phone, my fury ran wild.

Tech support I answered, how can I help you this day?
You fix my iPod mister, I demand things my way!

You fix my new toy, or I’ll cancel my service,
I could tell from her voice she was a bit nervous.

I let out a sigh, and I said, do you suppose,
You forgot the power cable – it needs one of those?

Silence I heard, and then a slight scuffle,
Then bad music, some ghetto-rap shuffle.

You fixed my iPod! I love you to death!
You are so welcome! “Fucking idiot,” under my breath.

I hung up the phone, but it rang much, much more,
and from all this, there is one thing I adore.

Self sufficient people, and instruction manual readers,
To me, they alone should be allowed to be breeders.

So if you have ever called my number, which I suppose is your right,
Eat shit, goto hell, and I hope you die this very night!