Authentication Factors For The Non-Technical

I have explained multi-factor authentication (MFA) to several people now, most of them from my non-technical friends pile. Nearly all of them have requested additional information. I am either very convincing, or the world is indeed on fire. As such, I have decided to write this blog post. Hopefully someone will find it useful.

First things first, a disclaimer. I am not a security expert. I consider myself a security adept, an aficionado, or perhaps just a systems and network junkie who happens to care about security. There are much more in-depth articles on this very subject, most of them fairly technical. My goal is to keep this post accessible to the layperson. I would not mind if a few of my security-as-a-day-job friends gave me their two cents on the statements I am about to make.

Here we go.

The first question a lay human may ask is: why does authentication even exist? What authentication boils down to is quite simple: it exists so that a computer and/or network resource can determine who you are. Based on that single fact, it can then perform the later stages of the entire login process. Namely, authorization: what resources are mine, and what actions am I allowed to perform? And accounting: what did I do while I was logged in? These three things are often collectively referred to as AAA.

To ask what authentication is may sound like a silly question to many, but I have several clients who have worked very hard to eliminate authentication layers, usually because they see it only as a burden. They do not see, or have not been shown, the advantages that proper authentication, when combined with authorization and accounting, can bring. Security be damned, proper AAA can actually make your enterprise work environment flow better. Without going too far off topic: done right, AAA can ensure that the right users can get to the right resources, meaning they can do their jobs effectively.

Dear Infosec peeps, I know I am really broad-stroking it here, as well as taking some liberties with some terminology. My goal is to explain the concepts and advantages, not the technical details.

What does this mean? When a computer or network asset knows who is requesting resources, it can deliver things like the proper files, the correct bookmarks for your web browser, access privileges for protected resources, your favorite songs, and even your favorite background image. Everything about that computer service that relates to you is tied to your digital identity, and the computer learns that identity via some authentication process. Some of this information is very innocent, like a background image. But a great deal of it is, to some degree, sensitive. A supervisor planning to fire Joe in accounting doesn’t want Joe to be able to read his emails. Authentication exists so that Joe can prove to the computer only that he is Joe, and not whomever he wishes to get some dirt on this week. Joe is a dick, and deserves to be fired. No, not you, my actual friend Joe.

This of course leads to the question: how does a computer know who you are? The short answer is, it really doesn’t. It instead asks you to identify yourself, and then prove it. This is called challenge-response. Declaring your identity to a computer is fairly simple; by far the most common method is to present a username. The username is your identity, not part of the proof of that identity. A username should never be considered a secret. I am nuintari nearly everywhere I go, with very few exceptions. This is hardly classified knowledge, and if it were, it would be a shitty example at that. Plenty of places auto-assign numbers as account identifiers, but account identifiers are frequently not easily changed. If I needed nuintari to remain a secret, I am pretty sure I am fucked by this point.

Once you have presented your proposed identity to a computing resource, there are three challenges the computer can issue for you to prove you are indeed who you say you are.

  1. Show me something you know (That I also know).
  2. Show me something you have (That I know you have).
  3. Show me something you are (That I know about you).

This is often stated in security circles as, “something you know, something you have, and something you are.” These are considered the three vectors of proper authentication. Each one has advantages and disadvantages. For sake of flow in this article, I am going to address them in reverse order.

Proving something that you are is by far the hardest type to scale to a network, or the internet. Something you are is often also called biometrics: something about your physical self that uniquely identifies you as you. These are things like finger/hand prints, eye scans, voice stress analysis, and urine samples. Some of those were made up…… The reason these are hard is that they have to be distilled into something that can be sent over a network, understood by a computer, secured in its own fashion, and properly interpreted in such a way that a correct result is returned. All while ensuring that simply sending the correct digital message is not sufficient. Ergo, you have to prove you scanned your thumbprint, not just send a faked-out image of your thumbprint. This exists, sort of; it tends to be expensive. It is also the least well supported. I highly doubt Facebook will ever accept semen sample authentication. Wait…… Facebook would totally do that…… You heard it here first, folks! Suffice it to say, proving something you are is far more common in closed systems, where all endpoints are controlled by a single entity. An example would be a place that requires hand print access to open certain doors.

A significant point about biometrics, one that is often overlooked, is that they can never change. When was the last time you changed your fingerprints? This is both a strength and a weakness, and has to be evaluated when designing a security system. Broken record time: beyond the scope of this article.

Something you have is a little easier, and wonderfully, it is becoming far more common! This is tying a device to your account, and proving that it is in your possession at the time of the authentication request. The most common form end users see is the oh-so-common RSA token that provides a one-time password in the form of a short numeric code. Newer methods involve Yubikeys, or FIDO compatible devices.

[Image: left, a Yubico Neo; right, a Feitian Multipass FIDO]

Both devices present a response to the computer resource proving that you are who you say you are, because you had previously agreed to tie that specific device to your account, and to present it on demand for authentication. Yubikeys and FIDO devices work differently, but both resolve this challenge reasonably well. I have one of each for a variety of reasons, all beyond the scope of this article (I also have a backup in a fireproof safe; again, beyond article scope). I would be happy to do a rundown later, should someone want to read my ravings on that particular subject.

The most obvious reason not to like this is simple: you have to have a thing you carry around with you. We all carry wallets, key chains, and purses anyways, so get over yourself. Hardware tokens also come in a variety of qualities, and some can be cloned. Remember college? Those key fobs to get into the dorms after such and such an hour? Show of hands, how many of us had a fob that accessed _everything_? Oh, right, lack of a live studio audience….. trust me folks, it is a huge number of people. Basically, buyer beware; you probably get what you pay for.

Another form of proving something you have is to use your cell phone, or other mobile device. The best example of this is the Google Authenticator app. Like the physical keys, you tie your mobile device to the asset you wish to connect to, and will be presented with a periodically changing code to enter when authenticating. Of the three, this is by far the easiest and cheapest to get started with, but also the least secure. Cell phones get stolen, and are far, far easier to match to an account than a random key chain thing. Also, you will frequently access computer resources from the very device that is proving your identity to that resource. Mind you, this is still far better than just passwords……

By far the most common challenge is number 1, something you know. I am talking about passwords. The exchange is very simple:

Computer: Who are you?
Me: nuintari
Computer: Yeah? Prove it with a password:
Me: myUncleBlowsGoats
Computer: Okay, I accept that you are nuintari, have fun!

The biggest problem with passwords is very plain to see: you all now know my super secret password. Passwords are just text, and can be shared. They can be sent via chat messages, or accidentally typed into the wrong (possibly malicious) website. They are easily leaked, easily shared, and, if bad enough, easily broken given enough time. Some websites store passwords in plain text form, or with a very broken hashing algorithm. This means if someone pops their user database, they get your password too. This is why you should never re-use passwords, because the first thing a bad guy is going to do is try that lifted password on other services you may use. Not only do passwords suck, but we suck at making them up. I’ll spare you the details; XKCD said it best.
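To make the "broken hashing" point a bit more concrete, here is a toy sketch of how a site can store a salted, deliberately slow hash instead of the password itself. This is Python, the function names are my own, and a real site should use a vetted library rather than rolling its own:

```python
import hashlib
import hmac
import secrets

def hash_password(password):
    # A random salt per user means two people with the same bad password
    # still end up with different stored hashes.
    salt = secrets.token_bytes(16)
    # scrypt is deliberately slow and memory-hungry, which makes offline
    # guessing after a database leak expensive. A fast hash, or worse,
    # plain text, is the "very broken" storage I am complaining about.
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    # Re-run the same slow hash and compare in constant time,
    # which avoids leaking information through timing differences.
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

Done this way, a popped user database hands the bad guy salts and digests, not passwords, and every guess costs a full scrypt run.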

So, what am I, the layperson, to do?

The single best way to prove your digital identity to a computer asset, and prevent anyone else from doing so, is to insist on at least two of the above methods. This is called Two Factor Authentication, or 2FA. The most common method is to take authentication factors 1 and 2, and combine them. The authentication conversation then becomes this:

Computer: Who are you?
Me: nuintari
Computer: Yeah? Prove it with a password:
Me: myUncleBlowsGoats
Computer: Yeah, that is disgusting, show me your key.
Me: insert key, push button…..
*crazy crypto computer stuff happens*
Computer: Okay, I accept that you are nuintari, have fun!

This may not seem like much, but this is often the difference between a compromised account and nothing. A leaked password is still useless when a second factor is demanded, and the end user can be informed that their password has been entered correctly, many times, while the physical hardware device has never been presented. The end user can change passwords, secure in the knowledge that they dodged a bullet.

The flip side is also true, stealing someone’s hardware token is useless without also getting their password. Strong passwords are still very important in this scenario, because a lost hardware token can be revoked and replaced, so long as the password was not also lost, or just painfully obvious. This is also where it helps to have a backup token, stuffed inside a fireproof safe somewhere.

*ahem* Regarding passwords, street address, lower case, no spaces….. NOT A GOOD PASSWORD. Nor are phone numbers.
/me glares at clients $wePublish, $wePublishToo, and $weLawyerStuff…..

Using biometrics is also an option, but again, is the hardest to leverage at network scale. Using all three is also an option, making it three factor authentication, or 3FA. I have seen this done well before, it was impressive. As I stated earlier, biometrics work better in more closed systems, where the exchange of data can be more tightly controlled and trusted. This is hard to do on the public internet.

Some of you may be thinking to yourself that your bank does 2FA already. Sadly, you are likely mistaken; banks have some of the worst authentication systems for what should be extremely well protected assets. The most obvious example is when they send you a code to your cell phone via a text message. This may seem like it is resolving the 2nd proof, but what it is actually doing is proving the 1st proof….. twice. You present a username, a password, and they send a code to your cell phone, which you also present. This used to be considered a valid form of the 2nd proof; it is no longer considered sufficient. What this is is now known as Two-Stage Authentication, or 2SA. You are still simply proving something you know, it just so happens that you only recently learned one of those two things. You haven’t actually tied the device to the account, you have tied something the device can learn to the account. This may seem like splitting hairs, but sadly, phone systems can be, and are, compromised. Intercepting that code can be accomplished by a variety of means. In short, you don’t need the device that receives the code to prove identity, you just need the code, and the code can be separated from the device.

This may lead you to cry, but what about the Google Authenticator app? It does the same thing! Actually, no. When you set up a new account within that app, a secret is constructed that identifies not only the account which will need the code, but also the device the code is coming from. That secret is used to generate the periodically changing one-time passcode on display. In short, if you have everything in Google Auth, and get a new phone without backing up and transferring those secrets, you are going to have a bad time. The difference between the Google Auth app and simple SMS based verification is that the Google app does indeed prove, beyond a somewhat reasonable doubt, that the correct device was used to obtain the one-time passcode. The code can still be separated from the device, but the task of doing so is significantly more difficult to accomplish.
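For the curious, the scheme these authenticator apps implement is an open standard called TOTP (RFC 6238): the stored secret plus the current time yields the code, so both your phone and the server can compute it independently. A minimal Python sketch (the secret used in the demo below is the RFC's published test key, not anything real):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, digits=6, period=30):
    """Time-based one-time password per RFC 6238 (the HMAC-SHA1 flavor)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is just the Unix time, chopped into 30-second steps.
    counter = int((time.time() if at_time is None else at_time) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # "Dynamic truncation": the low nibble of the last byte picks an offset,
    # and four bytes from there become the numeric code.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test secret: base32-encoded ASCII "12345678901234567890".
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"))
```

Both sides compute the same code from the same secret and clock, which is exactly why a new phone with no copied-over secrets leaves you locked out.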

This is not to say that SMS/2SA-based verification is not worth doing. If it is the only option presented beyond passwords, you should very much take advantage of it. But ideally, the Google Authenticator app should be considered the bare minimum, and a physical hardware token the gold standard.

In the end, I would strongly recommend everyone seriously look into 2FA. Google’s new Advanced Protection Program costs you the hardware, and a bit of time to adapt your habits. The devices you utilize for Google’s protection can be used for other services as well. Buy two hardware keys, authorize both of them with as many services as you can, then toss one into a fireproof safe. Congratulations, you do this much, and you are profoundly more secure than the average Tom, Dick, and Harry. You will find very quickly, that authenticating with the hardware device is very simple, and non-intrusive to your daily life. You’ve added five seconds to a task, and in exchange, reduced your potential ulcer count by a shitload.

Now, stop reusing passwords, and start using a secure password manager…… That is another article entirely.

Junos Groups Part I: Basics

On my many IT adventures, I see plenty of issues; one of the biggest is lack of network consistency. Network ports configured one way, others configured another, VLANs trunked to parts unknown, none alike, even when they share the same basic role.

Juniper Junos has a wonderful tool, one that seems incredibly underused, that largely resolves this: groups. Groups are awesome: they enforce consistency, they reduce typing, and they make configs shorter and easier to read. Here is a quick example of how to use them to manage VLANs on a switch.

Groups are basically templates of configuration settings that can be layered on top of any section of the Junos configuration with the apply-groups statement. There is some globbing support, allowing you fine-grained control over when groups are applied. For more on that, check this Juniper article.

Groups essentially mirror any subsection of the Junos configuration stanza, with matching patterns in place of variable data, such as interface names, ASNs, OSPF areas, etc. Here are two groups that apply Ethernet settings, namely member VLANs and port mode, to any interface they are applied to.
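As a sketch, such a pair of groups might look like this. The group and VLAN names here are my own inventions, and the port-mode knob is the older EX-series syntax; newer ELS-based Junos releases use interface-mode instead:

```
groups {
    access-port {
        interfaces {
            <ge-*> {
                unit 0 {
                    family ethernet-switching {
                        port-mode access;
                        vlan {
                            members office;
                        }
                    }
                }
            }
        }
    }
    trunk-port {
        interfaces {
            <ge-*> {
                unit 0 {
                    family ethernet-switching {
                        port-mode trunk;
                        vlan {
                            members [ office voip servers ];
                        }
                    }
                }
            }
        }
    }
}
```

The `<ge-*>` wildcard is what lets one group match any gigabit Ethernet interface it gets applied to.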

 

In order to apply these groups, we can apply them directly to a few interfaces with the apply-groups statement:
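Assuming groups named access-port and trunk-port exist (the names and interface numbers here are just examples), that could look like:

```
interfaces {
    ge-0/0/0 {
        apply-groups access-port;
    }
    ge-0/0/1 {
        apply-groups access-port;
    }
    ge-0/0/47 {
        apply-groups trunk-port;
    }
}
```

Change the group once, and every interface carrying the apply-groups statement picks up the change on the next commit.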

 

In order to see and verify the applied group settings, we pipe the show configuration command to display inheritance:
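Inherited settings do not show up in a plain show configuration, which is exactly why the pipe matters. Roughly, assuming a hypothetical group named access-port applied to ge-0/0/0 (output trimmed, and the annotation format is approximately what Junos prints):

```
nuintari@switch> show configuration interfaces ge-0/0/0 | display inheritance
unit 0 {
    family ethernet-switching {
        ##
        ## 'port-mode' was inherited from group 'access-port'
        ##
        port-mode access;
        ##
        ## 'vlan' was inherited from group 'access-port'
        ##
        vlan {
            members office;
        }
    }
}
```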

 

Using groups, one can greatly simplify the configuration of a Junos device, while at the same time enforcing consistency. Groups are not limited to interfaces, and can be applied to virtually any section of the Junos configuration. In the next part of this series, I will demonstrate some more complex examples. Please check back soon!

The Lawnmower/Stereo/Bikini/Chicken/Sig Sauer/Beer Incident

I have told this story many times over the years, but never actually tried to write it down.

Framing Elements

A long time ago, before portable compute devices with more horsepower than anyone ever needed were commonplace, and long before the first Bluetooth speakers, there was a man with a dream. Or maybe it was a woman, I suppose you really need to figure out what I was wearing the day I came up with this idea, and what you think of gender identity issues, and whatever, it was me, okay? I had the dream. It wasn’t a grand dream, it will never change the world, but it was fun, and it was mine!

The dream was a riding lawnmower with a kick ass stereo system, that had wireless, and streamed MP3s from my home file server, and operated completely hands free.

The hands-free part was either a design goal, or just me admitting that I had no desire to get X working on a portable LCD screen, much less pay for such a beast in 2003, back when computer hardware was far less disposable, and rarely cheap. At this point, I think I still had 486s in service at home, and at $dayJob. In fact, I know I did; the original billmax.wedroppackets.net was an AMD 486 piece of shit from hell.

The lawn mower was, and is to this day, a Toro Z4200 Timecutter. A zero-turn-radius, 42-inch-deck, gas-guzzling monster that cuts through my 1 acre property in less than an hour……. I call her Rachel.

The Stereo

So, with the design goals in mind, prepare to be amazed with my oh so awesome solution. The stereo itself was a pair of cheap speakers, requisitioned from my first dual cassette tape deck, circa 1989. The compute workhorse was a Soekris Engineering Net4501, with a MiniPCI 802.11b wireless card, and a 3.3v PCI sound card.

For those that remember such hell, the Soekris boards were notorious for not actually providing 3.3v on their single PCI slot; they were also notoriously interrupt craaaazy. Finding a sound card that didn’t wig out at being under-powered…. and still worked under OpenBSD, took some doing. Yes folks, I was over Linux even back in those days. Linux sucks. I wish I could even remember the make and model, because I went through hell, as did my wallet, to find such a beast. Trust me, it exists, it was not easy to find.

A little electrical glue provided the rest: a DC-to-DC step-down converter got me the power I needed from the mower battery. My childhood as the son of a journeyman electrician has been good for a few things in my life. Operation was simple: a quick-release wire snap provided the connectivity to the battery. It was technically possible to run the stereo without the mower running, but why would I ever do that? The modus operandi was to connect the power, go inside, grab a beer, come out, fire up Rachel.

Meanwhile, OpenBSD booted, hopped on the household wireless, mounted $fileServer0:/home/nuintari/media/tunes via NFS, read-only of course, and grabbed a playlist. From there it was just mpg123 (or was it mpg321? I forget). Tunes soon started flying out of the cheap cassette deck speakers, and yours truly would proceed to enjoy a relaxing hour or so of yard work and beer.

Rock and Roll!

Pre, The Incident

My wife is afraid of birds, royally terrified of birds. Have you ever seen how I react to spiders? Imagine that, but with birds; it is that level of terror. Actually, it isn’t: my wife isn’t the bloomin’ coward I am in the face of her fears. But she is not a fan of them, to say the least.

We live in the country, or….. maybe right on the edge of the country. As country as Northwest Ohio ever gets is the point. Country enough that the neighbors raise chickens. Chickens that are mostly free to wander, and return to the hen house at the end of the day. How they weren’t all eaten by foxes, I will never know. But, they did seem to have a thing for my lilac bushes. They would wander across the street, and nest in my lilacs. My wife hated this, she’d be out in the yard, and a chicken would appear out of nowhere, and my young, young, gorgeous lady would lose her shit and run inside. I would of course, be dispatched, usually with some kind of makeshift polearm, to shoo them away.

Occasionally, I would notice them while mowing the lawn. Rachel has some oomph behind her, and if you kill the blades, and pull the deck all the way up, you can move at a solid 12+ MPH….. with the wind. Fast enough to chase chickens. Not fast enough to catch them, not that I ever wanted to, but fast enough, and loud enough, to chase them away. Also, good for a solid laugh.

We were the new couple in the neighborhood, and the farm across the street was our only real neighbor. Turns out, they had a daughter graduating high school. We were invited to the party, which we wholeheartedly accepted on the assumption that there was likely to be beer. And, I guess we should get to know the neighbors or something.

Over the course of a fine afternoon, the father approached me, and informed me that, “I see you chasing my chickens, they give you any trouble, just shoot em, they’re good eating!”

I should point out that I live NORTH of US-6….. which anyone from Ohio will recognize as the actual Mason-Dixon line of demarcation between civilization and Hicksville, USA. Someone will hate this bit, but I don’t care. South-of-Six Hicks are a thing, and we were a solid 40 miles north of their territory, spooky.

I should note that this phenomenon exists only in Ohio. Once you reach Kentucky, the hick meter resets back to a sane level, people are way nicer, and supremely less racist. South Ohio sucks ass.

Now, I have zero interest in shooting a chicken. For starters, I own a few guns, none of them suitable for avians. Can you imagine actually hitting a chicken with a 12 gauge? Or a 7.62 SKS? It’d be feathers and a fine mist. But, even assuming I killed it, and left it intact, who wants to clean it? My old man took me hunting a few times, cleaning the carcass is the nasty part I never want to experience again.

The Incident

This part is actually pretty short, the lead up is what makes the story funny.

The stage is set: Nuintari, the man with a dream, is riding a hacked-up, stereo-laden lawn mower, listening to classic thunder, and of course, I have a beer, and I am wearing daisy duke shorts and a bikini top. It is either truly awesome, or truly awful, to live next door to me, even if the houses are fairly far apart.

A chicken waddles over the street, through my side yard, and right into my lilac bush.

It should be noted that at this time in my life, I had come into possession of two key items relevant to this story. A Sig Sauer, P229 9mm handgun, and a pile of 9mm blanks. Remember, I don’t actually want to kill the chicken, I just wanna fuck with it. Also, I am drinking.

I know, I know, I know, I shouldn’t mix beer and guns….. It hasn’t happened since…… that I can recall.

So, naturally, inside I go, grab the gun, a fresh beer (I know, I know), and load the weapon with blanks. Upon returning to Rachel, the stereo is now beginning to play Wagner’s Ride of the Valkyries. It was so on. Deck up, blades off, LET’S GET THOSE CHICKENS!

The next few minutes are basically me, in a bikini top and daisy duke shorts, driving a zero-turn-radius mower, with a beer in one hand and a blanks-loaded 9mm handgun in the other, rocking out to classical German musical great Wagner, chasing a chicken around my yard, occasionally taking potshots at it with the blanks…… and of course, laughing like an idiot the entire time.

At one point, I caught a look from the farmer across the street, who was basically, as the kids say, “losing his shit.”

The Legacy

The stereo blew up, a victim of a replacement battery, and operator failure to observe reversed poles….. oops. It has since been replaced with a smart phone, a Bluetooth headset, and Pandora. Not as sexy, but it works. The neighbor moved away, the chickens are all gone, and the farm is largely empty these days. Some days, I can chase a killdeer around a bit, but it just isn’t the same. Killdeer fight back.

That Time an IT Emergency Made Me Sneeze Blood

Due to popular Twitter demand, you all apparently want to hear this tale. Warning: it really isn’t all that gruesome, but it should probably serve as a cautionary fable for anyone who has decided to get into the magical world of consulting. I am also under an NDA, so the names have been changed to protect the grotesquely stupid, and I, sadly, do not have any photos.

The Situation

This is a client I started working for about a year ago, mostly network stuff. They brought me in to rein in the insanity that is intrinsic to small <redacted> industry IT (hint: all IT sucks). One of the first things I did was whip out a label maker and label patch cables everywhere I could find them; this saved my butt in putting this all back together later.

This particular small shop had a single rack for their IT assets, tucked into a back store room. This rack had many, many, many issues. I guess it is time for a bullet list.

  • 23 inch rack, nothing in the rack was wider than 19 inches. So, multiple 2 inch spacers on each side, top to bottom.
  • Two post rack, plenty of stuff that really screams four post. At least it was all at the bottom.
  • Cheap, flimsy construction, this thing would wobble even without the 2 inch spacers.
  • Filled to the brim, stuffed.
  • Bolted to the floor, a badly poured concrete slab that had clearly been laid down in winter. Stomping your feet made dust appear.
  • A stiff breeze caused this rack to wobble in all directions.

Between the shaky rack and the shitty foundation, it doesn’t take a genius to realize that the bolts holding this thing down were slowly wiggling themselves free. I told them a year ago: this is going to fall, and it is going to suck. They dismissed my warnings. Oh, I should have walked then.

Friday

They call in the AM. “We are completely down, our rack of servers fell over!”

“Yup, lemme grab my drill and my crimpers, I’ll be right in.” I replied.

Coffee to go would have been appropriate, but I had a cup of traditional at home first. My E-rate doesn’t start until I arrive, and I warned them, I fucking warned them.

Also, I knew what I would have to do.

Sure enough, the rack had ripped the bolts straight out from the floor, and collapsed. One Dell something or other is not in good shape, as it took the brunt of the fall. The rest looks like it might be alive.

I tasked one of their underlings with testing cables; anything that cleared gigE/voip on the Paladin was re-usable, at least for now. I got to work on the rack itself. Fortunately, I only had to make six new cables by the end of this mess. No, the underling didn’t know how to do that, and I am not a teacher when shit is hitting the fan right after shit has hit the fan.

So, four big-ass 3/4 inch bolt holes in the floor, blasted out to all hell and back like incels think happens to lady bits if they dare have sex with someone not them. Yes, this rack really needed something bigger, 1 1/4 would be a solid minimum, but I don’t carry concrete bolts in my Network/Systems/Security IT kit. Shit, I don’t have those in my house. But I do own a drill that can eat concrete. Thank you very much, DeWalt, for making a beast of a monster that I can afford. Also, my years in WISP land left me with a collection of masonry bits. LET’S DO THIS.

Relocate the rack a few feet over, and mark out my holes. “NUMBER ONE, ENGAGE!”

This is where the shitty foundation starts to matter. In addition to not typically carrying concrete lags in my standard IT kit, I don’t normally bring a hazmat mask. This concrete slab had clearly been poured in the winter. For those not familiar with construction, masonry, or physics: water freezes, and water is a critical component of concrete. When you lay concrete in sub-zero temperatures, you get some bad shit, like a lot of dust, an uneven level, and an overall shit pour.

I spent the next forty five minutes creating dust storms in my face, drilling out four holes in shit concrete with my barely adequate DeWalt Doomhammer.

I inhaled a small quantity of dust; it sucked. Then I had to make six replacement cables, trace out shitloads of stuff that had come loose, and test. I was there for just short of three hours, maybe 2 hours, 40 minutes. We got it done, and they didn’t lose an entire day. I’m good, yo.

The Aftermath

I felt like shit, I had clearly inhaled a great deal of dust. But the next morning…… Dear god. Sneezing up blood, repeatedly. That was not fun. I still feel like ass, my nose and throat are clearly irritated beyond belief.

The client has already contested my bill. My emergency rate is always in hour increments, rounded up, no exceptions. This particular client has a signed contract stating this, so I will get my money. But that isn’t the point. The three hours of E-rate have no chance of covering any possible health complications I might encounter because of this mess. Yet here they are, trying to claim they only owe me for two and a half hours, not a solid three. Now I find myself looking for a legal way to make them responsible for the hell that is my lungs right now.

The Moral

And the moral of this motherfuckah is, ladies make em……. no wait, that is Prince.

Don’t let a company fuck with your health, they will happily do so to get what they want. I am currently updating my contracts to include personal health and danger clauses.

Organizations will not look out for you, you have to make sure you are looking out for yourself. Do not make your health a lower priority than your dedication. It isn’t worth it.

The Iconic USMC Moment

Today is a significant day in history, an iconic day for the United States Marine Corps: the day the Marines took Mount Suribachi, and performed the now famous raising of the flag. Now, I know almost everyone at this point knows that the event was at least partially staged, but that is not the point. A lot of Marines died taking that mountain. By this time in 1945, support for the war back home was tenuous at best. A great photo, propaganda though it may be, was what the home front needed to revitalize support for continued warfare. Furthermore, a metric shitload of good US Marines died to make that staged photo happen. Today, it is emblematic of the Corps; one cannot imagine rough and tough Marines without eventually seeing this image in one's mind. But I am not going to debate the merits of wartime propaganda; I was hoping to impart a bit of my historical knowledge on this subject.

Mount Suribachi sits at the southwesternmost corner of the island, at a point known as Tobiishi Point. Elements from the 3rd, 4th, and 5th Marine Divisions were landed at two beaches on the southern and western edges of the island. As a prominent high point, Japanese positions on the mountain had full view of both beaches, and the vast majority of the island. Marines were under artillery, mortar, and machine gun fire before they even hit the beaches, yet they pressed on.

Mount Suribachi is a honeycomb of caves, and the defenders took excellent advantage of this. Despite extended aerial bombardment by the US Army Air Corps, non-stop naval bombardment from the US 5th Fleet, and close air support from Navy and Marine pilots, the enemy resisted, and held the peak for five days. All the while, devil dogs on the ground fought for every inch of land, under constant enemy fire.

Anyone who has ever seen an Iowa class battleship, or a B-24 Liberator, would have a hard time imagining how anything could survive the sheer onslaught of destructive force these weapons of war could bring to bear. Yet the Japanese defenders did exactly this, and continued to effectively wage war. Tobiishi Point wasn’t won with air power; it wasn’t won with artillery and naval gunfire support. It was won with tenacity. Marines, in the blood-soaked volcanic ash, with Garands and grenades, fought for that key position. They did the job, they fought for their buddies, they fought for each other, and in the end, they reigned supreme.

The battle for Iwo Jima would rage on for another month, with US Marines engaging a well prepared, well entrenched, and very desperate enemy. The securing of Mount Suribachi meant that, in this hellish landscape, Marines fighting to secure the rest of the island had one less place where death could rain down upon them. We will never know how many lives were ultimately saved by the taking of that tiny piece of land. Staged as it may have been, the photo now immortalized in Arlington is a true reminder of the values of the Marine Corps. They fought, they fought for their country, they fought for each other, and they got the job done.

In the aftermath of the battle, a US Army Air Corps base was established so P-51 fighter pilots could launch escort missions alongside B-29 bomber missions over mainland Japan. Mustang escort was crucial to the saving of countless air crews, and the emergency landing point afforded by the airfields at Iwo Jima saved many more.

I am not a nationalist; nationalism is the sentiment that brought us conflicts like the Second World War. The notion that might makes right, and that certain people are less valuable because of their ideology, religion, or the color of their skin, is poison to the peace loving people of this world. I find these notions repulsive, and anyone who uses any reason to justify such thoughts is equally abhorrent. I am, however, a patriot, and have the utmost respect for anyone who puts on the uniform in the defense of freedom, justice, and liberty. In the ashes of World War II, racial and national hatreds were eventually set aside, at least to some degree, and an important understanding was established between the former Allied and Axis powers. That is the true legacy of those who fought in WW2, on any side: they fought, they bled, and they died so we could, as a human race, realize that this cannot happen again.

Today marks the 73rd anniversary of the raising of the US Flag over Mount Suribachi. Sadly, not many of those that fought to make this happen are still with us. I invite you to honor them, as I do, in the solemn hope that one day there will be no need for Marines. But until that time comes, I am very glad that when the chips are down, there are men and women still willing to rise to the challenge.

Semper Fi.

Custom MMC Console for Active Directory Management of External Domains

Ugh, what a title…..

The Client

A client of mine is on the road to recovery. I have, thus far, taken them from about 1998 to roughly mid-2000s status in terms of IT practices. I like working for this client; they are a quirky bunch of people, and have managed to create one of the finest examples of wildly unkempt, organic IT growth I have ever seen. They have survived thus far by paying so-called professionals to put out bush fires. They simply had no idea any other alternative existed. I have convinced them that IT doesn't have to be so painful.

The Problem

It is time to roll out Active Directory. The vast majority of their machines run home versions of Windows, so they won't be joining the domain any time soon, but we can at the very least bring some sanity to the file server environment. Right now, they have two file servers, and employees named Steve log in with usernames like Brittany, a user who hasn't worked for the org in three years. No one knows how to change passwords, nor how to create new accounts. At the same time, I am rolling out useful internal tools such as a wiki and a trouble ticketing system, all authenticating against AD/LDAP. Fewer passwords would be great here; this place is awash in a veritable sea of sticky notes.

A few of the employees are proficient enough that I can grant them the ability to manage basic AD functions, such as account creation and password resets. However, they all have machines that cannot join the AD domain, because they all run home versions. Sadly, that is not going to change for some time. Baby steps here, folks, baby steps. So, I need a way for them to authenticate against the AD domain, launch MMC, and retain saved settings for AD management.

The Solution

The first issue is that MMC requires an account with local admin privileges to even start. Firing it up locally presents us with the friendly UAC prompt. Fine, great. So, I snap in the AD controls; it gripes because I am not a member of a domain, so I tell it to change domain to my client's (via a VPN, don't panic, I'm not grotesquely stupid). I am informed that my username or password is incorrect. This is because MMC is running as the local privileged account, not one that was successfully authenticated against the remote AD domain. We can use runas to resolve this:
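The embedded snippet seems to have been lost, but the command was a runas invocation along these lines (EXAMPLE\ad-admin is a stand-in for a real domain account with AD management rights):

```
runas /netonly /user:EXAMPLE\ad-admin "mmc"
```

The /netonly flag is the important bit: it tells Windows to use those credentials only for network access, which is exactly what we want from a machine that is not a domain member.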

So, we can just make a bat or ps1 file, and have the user run that, right? Wrong!

Open a powershell prompt and try the runas command; it will fail. You will be informed that the operation requires privilege elevation. Start a powershell prompt as an administrator and try again; it will work fine.

But I want to make this into a button that a non-technical end user can click. I can train them how to change passwords; I will not be able to teach them command line anything. They'll write it down, and then never do it, opting instead to call me every single time.

Okay, so I'll just go into the shortcut settings and tell it to run as Administrator. Except Windows won't let me check that option in this particular case. I have no idea why, and now that I have a workaround, I don't much care.

First, you need to prepare the MMC console, as one spawned naked isn't useful to a non-technical user. Launch an administrative powershell prompt, run the little ditty from above, and snap in all the appropriate tools. Connect them all to the correct domains. Make sure you select all the check boxes that say, "Save this domain setting for the current console." Then save the console settings somewhere reasonable. This ensures your end user won't have to do this work every time.

Now create a ps1 file that looks like this:
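The ps1 embed is also lost, so here is a sketch of the idea; the account name and console path are stand-ins, and the self-elevation step is my reconstruction of the workaround described here:

```powershell
# Sketch only: EXAMPLE\ad-admin and C:\Tools\ad-management.msc are stand-ins.
# Step 1: self-elevate if we are not already running as administrator.
$identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
$principal = New-Object Security.Principal.WindowsPrincipal $identity
if (-not $principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) {
    Start-Process powershell.exe -Verb RunAs -ArgumentList "-File `"$PSCommandPath`""
    exit
}
# Step 2: launch the saved MMC console, with credentials used
# only for network access against the remote AD domain.
runas /netonly /user:EXAMPLE\ad-admin "mmc C:\Tools\ad-management.msc"
```

The user will still get one UAC prompt and one password prompt, but that beats memorizing a command line.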

Save that somewhere sane, create a shortcut somewhere that makes sense for the end user, and then be really nice and edit the friggin registry to make "Open" actually execute ps1 scripts instead of sending them to Notepad. Why this isn't the default, I have no idea. Here is how you do that, btw:
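The registry snippet is another lost embed; this is roughly the change in question. The ProgID below is the standard one Windows uses for ps1 files, but as always with regedit, look before you leap:

```
Windows Registry Editor Version 5.00

; Point the "Open" verb for PowerShell scripts at powershell.exe
; instead of notepad.exe.
[HKEY_CLASSES_ROOT\Microsoft.PowerShellScript.1\Shell\Open\Command]
@="\"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" -File \"%1\""
```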

Then get all fancy, and change the icon of the shortcut, and there you have it, problem solved. Non-technical users can now be easily trained to reset passwords, and have a button they click on that lets them do so. Wheeee!

 

The Night After X-Mas

Twas the night after x-mas, post-consumerist boom,
Not a synapse was stirring, and this makes me fume.

Debt was amassed as gadgets were bought,
And the fury of installation would soon be wrought,

Upon our humble narrator, he fixes all things,
Like the stupidity of all the world’s ding a lings.

Like little Suzy’s iPod, it played no new jazz,
For she had not read the manual, what a stupid little spaz.

She lamented and cried, and let loose a shriek,
Without my new iPod, I can’t be unique!

She dashed to her phone, and my digits she dialed,
As I answered the phone, my fury ran wild.

Tech support I answered, how can I help you this day?
You fix my iPod mister, I demand things my way!

You fix my new toy, or I’ll cancel my service,
I could tell from her voice she was a bit nervous.

I let out a sigh, and I said, do you suppose,
You forgot the power cable – it needs one of those?

Silence I heard, and then a slight scuffle,
Then bad music, some ghetto-rap shuffle.

You fixed my iPod! I love you to death!
You are so welcome! “Fucking idiot,” under my breath.

I hung up the phone, but it rang much, much more,
and from all this, there is one thing I adore.

Self sufficient people, and instruction manual readers,
To me, they alone should be allowed to be breeders.

So if you have ever called my number, which I suppose is your right,
Eat shit, goto hell, and I hope you die this very night!

Home Network of Doom Part 2.37: LDP Tunneling

/* This is fluff

This is going to be short and sweet, because it was as easy as tasty apple pie. I’d been wanting to do this for a while, but working for yourself doesn’t generate tons and tons of disposable income. Being frugal sucks for the mad hacker in me, so I had been holding off on purchasing the last piece of hardware I needed. Enter the deadbeat client. Long story short: I sold them on a network overhaul, and fixed a few things for them. They bounced a few checks, and I am left holding a new Juniper SRX300. Gee, darn. What they paid that didn’t bounce mostly covered it, so I paid about fifteen bucks for a super nice new router.

Also, I paid for the lifetime Pastebin account, and installed a wordpress plugin to embed them here, I’ll be trying that out for configs.

end of fluff */

If you think back to the first part of this series, my network was a ring of four major devices: 2 SRX300s and 2 SRX100s (plus the head-end SRX210 that doesn’t actually speak MPLS). It looked like this:

Well, I can finally say goodbye to fastE ports in the core. I rewired the whole thing to incorporate the new SRX300, and pushed both of the SRX100s out to the edge, because I do own a lot of gadgets that only eat fastE. Also, I wanted to try out LDP tunneling. So the new network looks like this:

When I rolled this out, I deployed RSVP signaling everywhere, which meant bulk adding new LSPs manually to every existing device just to get the thing working. RSVP signaling lets you do some cool stuff, which I will cover if I ever decide to write part three of this series, but scaling it is indeed a pain. My brute force method lacked any proper planning, and left me with four ingress LSPs and at least 4 egress and transit LSPs on each device. In a proper network, you would clean this up, but I did this live while my wife was streaming Netflix (and she never noticed!). If it was this annoying at my tiny lab scale, imagine what it would be like on a large network.

Enter LDP, which has basically no options, no knobs, and has one truly endearing quality: Configuration scaling. It is so dirt simple, it is not a pain to deploy en masse. Since my SRX100s are now single homed to my core network, RSVP grade decision making is no longer needed, we just need to make sure labels get flung around properly.

As I tweak my graphviz skills, here is a quick and dirty best effort conceptual view of what I am about to describe.

Better image when I figure out how to make graphviz a bit more obedient.

Making LDP work over RSVP is called LDP tunneling, and it is very, very simple. The single homed devices simply need to run LDP and MPLS on their single link back to the core. Since we aren’t running RSVP, no label-switched-path statements are needed.
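The config embed that used to live here is gone; a minimal sketch of the edge side would be something like this (fe-0/0/7 is a stand-in interface name, use the real uplink on your box):

```
# Edge box: run MPLS and LDP on the single uplink back to the core.
set interfaces fe-0/0/7 unit 0 family mpls
set protocols mpls interface fe-0/0/7.0
set protocols ldp interface fe-0/0/7.0
set protocols ldp interface lo0.0
```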

 

That is it, easy peasy. On the other side of those links, the RSVP core speakers need to be talking LDP on their ports out to these edge devices:
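Again, the original embed is missing, but the core side is the same idea on the edge-facing port (ge-0/0/3 here is a stand-in name):

```
# Core side: speak LDP on the port facing the single homed edge device.
set protocols mpls interface ge-0/0/3.0
set protocols ldp interface ge-0/0/3.0
set protocols ldp interface lo0.0
```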

 

Now, in order to get LDP signaled paths across the RSVP signaled core, we add the ldp-tunneling option to our LSP statements:
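The embedded config is missing here too, but the knob itself is a one-liner per LSP, along these lines (to-core-mpls5 is a stand-in LSP name):

```
# Repeat for each LSP that carries LDP signaled traffic across the RSVP core.
set protocols mpls label-switched-path to-core-mpls5 ldp-tunneling
```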

 

On a device that is not speaking LDP (in my case, garage-mpls4), ldp-tunneling is not required on the LSP statements (but it doesn’t hurt anything either). Commit confirmed, and you should be good to go. Don’t forget the follow-up commit!

As you can see, this greatly reduced the complexity of my LSPs:

 

The only time I see transit LSPs in my core is when I have a down link, or I am screwing around with wackiness.

Kthanxbai!

Makefile Quickies: How I Make Charts

My current general purpose workhorse is a Chromebook, a first generation Pixel to be exact. As my buddy Goekesmi likes to say, a Chromebook is a knife, an extremely good knife. Unfortunately, it isn’t much else; if I need tcpdump, I drop down to my FreeBSD beast or my Kali rig, both of which weigh over four times as much as my beloved Pixel. Both are toolboxes, very excellent toolboxes, and they indeed contain very excellent knives, but they weigh a great deal. Since 90% of the work I do is on a computer geographically not near me, a Chromebook with an SSH client is, more often than not, a very sufficient and appreciably lightweight tool.

But I am also a huge fan of the UNIX command line way of doing things. I use LaTeX to make most of my documentation, and graphviz to make network diagrams and flowcharts. A Chromebook doesn’t exactly have a full UNIX userland installed, so I can’t just run dot locally. With my penchant for working on remote machines in mind, here is how I make charts: I resort to a web server and a Makefile.

Makefiles are great for a variety of tasks, with the added benefit of allowing you to not remember much of anything; you just type make when you want a result. With that in mind, let’s examine what led me to this course of action.

Since I cannot run the graphviz suite of awesome locally, I cannot expect to use a local image viewer to view my handiwork. And even if I could, viewing local content on a Chromebook is….. meh at best. It really is designed to be online, all the time, and it shows. The logical solution is to present the images via a web browser, because that is basically what a Chromebook is. I could manually run dot for each chart I make, and enable directory indexes on my apache server so I can easily find and view them, one by one. Urgh, who wants to type that much, and click that much? Let’s create a Makefile that not only generates all my charts, but also creates a simple webpage, so I can F5 to my heart’s content!

So here it is, quite simple, but I think it illustrates an excellent use of a Makefile that is slightly outside the box from their usual task.
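The embedded Makefile didn’t survive the move, so here is a reconstructed sketch matching the description that follows; the index page layout is my guess, the structure is the point:

```make
# Sketch: render every .dot file in the CWD to PNG, then build a
# bare-bones index.html so a browser can just F5 the results.
DOTS := $(wildcard *.dot)
PNGS := $(DOTS:.dot=.png)

all: clean $(PNGS) index.html

%.png: %.dot
	dot -Tpng -o $@ $<

index.html: $(PNGS)
	{ echo '<html><body>'; \
	  for i in $(PNGS); do echo "<h2>$$i</h2><img src=\"$$i\">"; done; \
	  echo '</body></html>'; } > $@

clean:
	rm -f *.png index.html

.PHONY: all clean
```

Drop it in a directory that apache serves, type make, and refresh the browser.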

 

That is all there is to it; this is how I make charts and diagrams. I tend to symlink this Makefile into whatever directory I am currently working in, so if I want to view older graphs, I just enter a project directory and type ‘make’. It cleans up the old mess, and generates images for all the *.dot files in the current working directory. This also means that my base Makefile is consistent across uses, as I have been known to introduce, and usually remove, features from this little bit of scripty fun. At one point, I had it traversing subdirectories, and generating a hierarchical series of HTML documents and images, but it was honestly overkill for my needs.

Hope you found this useful; maybe you’ve decided that the lowly Makefile deserves a closer look. I have found little tricks like this save me a great deal of time. Let’s face it, time is the one commodity none of us will ever get back.

Semper Fi, kthanxbai!

Home Network of Doom Part II: VPLS for Maniacs

Now that you have your new mess of tangled wires, and miles of configuration out of the way, it is time to start firing up some useful services. My entire initial reason for learning about MPLS was to get VPLS. As a consequence, this entire series of articles revolves around building a giant, virtual switch that spans multiple L3 devices. This particular article will focus on getting basic VPLS working. I will go into filtering, load balancing, and other such complex topics at a later time. Conceptually, the end result of this article can be compared to a VLAN aware switch nestled comfortably on top of an existing IP network. Why this is awesome is that, once built, you have all the advantages of an Ethernet network delivered to the edge, with the ability to use layer 3 tools and techniques for management of the core.

Warning: SRX Limitations

When delivering Ethernet services end to end, a service provider would (should) concern themselves with loop detection. The most common way to achieve this would be to run rapid spanning tree protocol (RSTP). Another key piece would be dropping inbound BPDU frames from customer devices, or disabling the port entirely when BPDU frames are detected. This is done to prevent a customer from joining your spanning tree topology, as spanning tree has no security options. RSTP over VPLS is not supported by the branch SRX platform, nor can you configure bpdu-block in any useful way for VPLS.

What you can do is configure a firewall rule to act as a basic BPDU Filter.

nuintari@garage-mpls4> show configuration firewall  
family vpls {
    filter flood-ctrl {
        term stp {
            from {
                destination-mac-address {
                    01:80:c2:00:00:00/48;
                }
            }
            then {
                count stp;
                discard;
            }
        }
        term a-stp {
            from {
                destination-mac-address {
                    01:80:c2:00:00:00/44;
                }
            }
            then {
                count a-stp;
                discard;
            }
        }
        term pvst {
            from {
                destination-mac-address {
                    01:00:0c:cc:cc:cd/48;
                }
            }
            then {
                count pvst;
                discard;
            }
        }
        term cdp {
            from {
                destination-mac-address {
                    01:00:0c:cc:cc:cc/48;
                }
            }
            then {
                count cdp;
                discard;
            }
        }
        term vlan-br {
            from {
                destination-mac-address {
                    01:00:0c:cd:cd:ce/48;
                }
            }
            then {
                count vlan-br;
                discard;
            }
        }
        term stp-upfast {
            from {
                destination-mac-address {
                    01:00:0c:cd:cd:cd/48;
                }
            }
            then {
                count stp-upfast;
                discard;
            }
        }
        term default {
            then accept;
        }
    }
}

Special thanks to Mike for this tip. I have included the set statements for this firewall here.
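The embed with the set statements appears to have gone missing; translated mechanically from the hierarchy above, they would be:

```
set firewall family vpls filter flood-ctrl term stp from destination-mac-address 01:80:c2:00:00:00/48
set firewall family vpls filter flood-ctrl term stp then count stp
set firewall family vpls filter flood-ctrl term stp then discard
set firewall family vpls filter flood-ctrl term a-stp from destination-mac-address 01:80:c2:00:00:00/44
set firewall family vpls filter flood-ctrl term a-stp then count a-stp
set firewall family vpls filter flood-ctrl term a-stp then discard
set firewall family vpls filter flood-ctrl term pvst from destination-mac-address 01:00:0c:cc:cc:cd/48
set firewall family vpls filter flood-ctrl term pvst then count pvst
set firewall family vpls filter flood-ctrl term pvst then discard
set firewall family vpls filter flood-ctrl term cdp from destination-mac-address 01:00:0c:cc:cc:cc/48
set firewall family vpls filter flood-ctrl term cdp then count cdp
set firewall family vpls filter flood-ctrl term cdp then discard
set firewall family vpls filter flood-ctrl term vlan-br from destination-mac-address 01:00:0c:cd:cd:ce/48
set firewall family vpls filter flood-ctrl term vlan-br then count vlan-br
set firewall family vpls filter flood-ctrl term vlan-br then discard
set firewall family vpls filter flood-ctrl term stp-upfast from destination-mac-address 01:00:0c:cd:cd:cd/48
set firewall family vpls filter flood-ctrl term stp-upfast then count stp-upfast
set firewall family vpls filter flood-ctrl term stp-upfast then discard
set firewall family vpls filter flood-ctrl term default then accept
```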

Simply apply it to a VPLS routing instance.

set routing-instances <instance-name> forwarding-options family vpls flood input flood-ctrl

With a little JunOS scripting, one could conceivably turn this into a form of BPDU Guard as well. Sadly, this does not address loop detection and protection. The other method to protect against loops would be to configure MAC move protections, which, again, are not supported by the branch SRX platform. In short, don’t create any loops!

The Basic Layout

To the outside observer, my entire network behaves like a giant VLAN aware switch. I’ll demonstrate my configurations for a few devices spanning three of my more important VLANs. The whole thing looks something like this:

VLAN 1000 is my general purpose, internet access network. My workstations live here, the wifi access lives here, my various gadgets including my Roku and Chromecast live here. If I had a single LAN home like most people, it would be this network.

But since I am insane, I also maintain a network for servers on VLAN 1005. Okay, I actually maintain several, but VLAN 1005 is for all the basic services that the rest of the network needs to exist, such as DNS and DHCP.

I also maintain shared storage for servers, and my one UNIX desktop over a third VLAN, 1009. Basically a network of NFS and iSCSI traffic. This network is not handed up to the IP router, as there is no need.

I maintain about a dozen other VLANs, but for simplicity’s sake, I’ll be cutting them out of the configurations; hopefully I don’t miss anything.

Before I even get into the config: yes, my home network is named ‘assylum’; I name all my computers after mental disorders. Also yes, I am aware that it is not the correct spelling of asylum. It is a joke, try not to think too hard to get it. My proofreader mother thought it was hilarious.

Interface Configuration

The configurations for this entire mess are substantially shorter than the previous post, and I won’t be displaying each and every device’s configuration this time around. Interface examples will start with the IP router, and a few examples of PE interfaces.

For clarity’s sake, let’s take a look at my IP gateway’s configuration.

nuintari@headend-mpls3> show configuration interfaces ge-0/0/1
description core-mpls5:ge-0/0/0;
vlan-tagging;
unit 1000 {
    description assylum-nat-1000;
    vlan-id 1000;
    family inet {
        no-redirects;
        address 192.168.119.254/24 {
            primary;
            preferred;
        }
    }
}
unit 1005 {
    description assylum-srv-1005;
    vlan-id 1005;
    family inet {
        no-redirects;
        address 192.168.250.254/24 {
            primary;
            preferred;
        }
    }
}

Nothing special, just VLANs and IP. The router effectively acts as a CE device in MPLS parlance. I’ll skip any real explanation here, and assume you know how a router basically works and is configured.

Exactly opposite on that same wire is core-mpls5, the “entrance” to my VPLS switching layer.

description headend-mpls3:ge-0/0/1;
flexible-vlan-tagging;
mtu 1624;
encapsulation vlan-vpls;
unit 1000 {
    description assylum-nat-1000;
    encapsulation vlan-vpls;
    vlan-id 1000;
    family vpls;
}
unit 1005 {
    description assylum-srv-1005;
    encapsulation vlan-vpls;
    vlan-id 1005;
    family vpls;
}

This is performing the role of a PE device in MPLS-ese. The only real thing you need to take in is that we are declaring an encapsulation of vlan-vpls at both the interface and the sub-interface level, as well as declaring family vpls on each sub-interface. This is all required. On platforms with greater interface flexibility, such as the MX, you can have different types of sub-interfaces on the same physical port; sadly, the branch SRX platform isn’t that awesome. We have also declared flexible-vlan-tagging on the interface itself, which becomes more important later. This port, as it stands right now, could operate with just “vlan-tagging” declared on the interface, but that would constrain us later on, which I will touch on in a bit. In short, if you don’t need any particular configuration for any particular reason, stick with flexible, because it is just that: flexible.

For purposes of wrapping your head around it, think of it like a switch port acting as a trunk. As a counter-example, here is an EX2200 switch passing the same VLANs (plus my storage network) in a pure L2 setting:

nuintari@core-sw0> show configuration interfaces ge-0/0/0
description core-mpls5:ge-0/0/1;
unit 0 {
    family ethernet-switching {
        port-mode trunk;
        vlan {
            members [ assylum-nat-1000 assylum-srv-1005 assylum-nfs-1009 ];
        }
    }
}

Directly opposite that port on core-mpls5 looks like this:

nuintari@core-mpls5> show configuration interfaces
ge-0/0/2 {
    description core-sw0:ge-0/0/0;
    flexible-vlan-tagging;
    mtu 1624;
    encapsulation vlan-vpls;
    unit 1000 {
        description assylum-nat-1000;
        encapsulation vlan-vpls;
        vlan-id 1000;
        family vpls;
    }
    unit 1005 {
        description assylum-srv-1005;
        encapsulation vlan-vpls;
        vlan-id 1005;
        family vpls;
    }
    unit 1009 {
        description assylum-nfs-1009;
        encapsulation vlan-vpls;
        vlan-id 1009;
        family vpls;
    }
}

Pretty standard config so far, but what happens if you want to present an untagged Ethernet frame directly on the PE device? My wireless access points are dumb; they don’t understand VLANs (but they have killer antennas), so I need to somehow configure a VPLS speaking interface to act like an access port on a VLAN aware switch. Question is, how do I do that?

The short answer is: you don’t. A VPLS instance is not an exact clone of a VLAN aware switch, and asking it to behave identically runs against its nature. With that in mind, you can fake an access port reasonably well.

nuintari@garage-mpls4> show configuration interfaces
ge-0/0/2 {
    description assylum-wifi;
    flexible-vlan-tagging;
    native-vlan-id 1000;
    mtu 1624;
    encapsulation vlan-vpls;
    unit 1000 {
        description assylum-nat-1000;
        encapsulation vlan-vpls;
        vlan-id 1000;
        family vpls;
    }
}

In reality, this is no different than any other VPLS speaking interface, except we have added the ‘native-vlan-id’ statement. There is another piece to this puzzle, which we will cover in a bit, but what you are now performing is called ‘VLAN normalization’, and it only works on interfaces in ‘flexible-vlan-tagging’ mode. The other important piece is in the routing instance configuration. This particular trick only works in VPLS instances where we declare a single VLAN. Fortunately, you can do this on a per case basis. In most provider networks, it would be most advantageous to provide a simple tube: whatever the customer puts on the wire, tags or no tags, comes out the other end the same. If you want to play fake access port, you lose this ability. (I think. This might be way wrong; I wrote this article, stopped fact checking, and posted it a month and a half later……)

Routing Instances

The actual work of gluing the interfaces together across the network falls to the routing instances. There are three and a half major pieces that require a bit of explanation: the route-distinguisher, the vrf-target, and the site-range and site-identifier pair. The rest of the configuration is fairly self explanatory. This is the routing instance for VLAN 1000, my general purpose network.

nuintari@garage-mpls4> show configuration routing-instances
assylum-nat-1000 {
    instance-type vpls;
    vlan-id 1000;
    interface ge-0/0/1.1000;
    interface ge-0/0/2.1000;
    route-distinguisher 10.0.7.4:1000;
    vrf-target target:65001:1000;
    forwarding-options {
        family vpls {
            flood {
                input flood-ctrl;
            }
        }
    }
    protocols {
        vpls {
            site-range 10;
            no-tunnel-services;
            site garage-mpls4:assylum-nat-1000:pe4 {
                site-identifier 4;
            }
        }
    }
}

The vrf-target is, essentially, the identifier for the l2vpn. Each device taking part in the same VPLS VRF should have the same vrf-target. They take the form of target:&lt;asn&gt;:&lt;uid&gt;. Since I have been creating single VLAN pseudowires, I have taken to using the VLAN I am flinging around as the identifier, but as single VLANs are not all VPLS can do, this is hardly a requirement. The ASN I used in the previous article in this series was 65001, so we use that here as well.

The route-distinguisher takes the form of &lt;any ipv4 addr&gt;:&lt;uid&gt;, but the common convention is to use the router’s loopback address, and again, I use the VLAN I am moving around on the pseudowire for the uid. The best way I can explain the difference between the vrf-target and the route-distinguisher is: the vrf-target defines a VPLS instance, network wide, while the route-distinguisher helps identify the individual members. As with all analogies, there is more to it than that, but it’s a pretty small piece of VPLS. The RD matters much more when you are talking about MPLS l3vpn, and moving the same address space for disparate customers. If you elected to forego BGP signaling, and are doing all your signaling with LDP, this option is completely unnecessary.

The final pairing, site-range and site-identifier, serves to identify the individual members of the VPLS VRF internally, as well as dictate if and when pseudowires are formed. The site-range dictates the highest site-identifier that this instance will form a pseudowire against…… What? Okay, there are three common scenarios where this either really matters, or means next to nothing.

The most obvious is a point to point connection. With only two members, the site-range could be 2, with site-identifiers 1 and 2. Nice and self documenting, this VRF is a tube, what goes in one end comes out the other.

The next is the full mesh, where these settings matter the least. In a scenario where full pseudowires between all sites are desired, one could opt to accept the default site-range (it’s 65,534) and use automatic-site-id in lieu of outright declaring anything. For the purposes of actually learning anything, I like to adjust all these options, lest I never figure out why you would ever want to.

Where this really matters is when a complex topology of hubs and spokes is the desired behavior. Edge devices that only connect to one or two devices in the core should not be wasting resources forming pseudowires across the network. I don’t own enough hardware to build this scenario (yet), but when I do, I’ll post some examples.

So, all this being said, I standardized on a site-range of 10 for everything except the instance that carries my internet traffic. My cable modem sits in my family room near the TV, and is carried over my VPLS network to my core router in the server room: a true point to point link, where I used a site-range of 2. With that one exception in mind, I used my loopback addresses to determine site-identifiers. This won’t scale far, but it helps keep me sane at home.

Diagnostics and a Rosetta Stone

Going along with our analogy that this is just one big VLAN aware switch, we will need some basic commands to verify the insanity we have created.

First of all, we want to see if our pseudowires have actually come up, which is pretty easy:

show vpls connections

Yes, for some reason, this one singular JunOS command comes with a built in legend. Easily ignored over SSH; really friggin annoying at 9600 baud of serial cable love…… The most important thing to look for here is the Remote PE: entries. A mis-configured instance, or a bifurcated network, will instead show entries with things like “No connections found.”

Good:

Instance: assylum-nat-1000
Edge protection: Not-Primary
  Local site: core-mpls5:assylum-nat-1000:pe0 (5)
    connection-site Type St Time last up # Up trans
    1 rmt Up Mar 27 17:49:01 2017 1
      Remote PE: 10.0.7.1, Negotiated control-word: No
      Incoming label: 262153, Outgoing label: 262149
      Local interface: lsi.1048576, Status: Up, Encapsulation: VPLS
        Description: Intf - vpls assylum-nat-1000 local site 5 remote site 1
   2 rmt Up Mar 27 17:49:09 2017 1
     Remote PE: 10.0.7.2, Negotiated control-word: No
     Incoming label: 262154, Outgoing label: 262157
     Local interface: lsi.1048578, Status: Up, Encapsulation: VPLS
       Description: Intf - vpls assylum-nat-1000 local site 5 remote site 2
  4 rmt Up Mar 29 11:23:53 2017 1
    Remote PE: 10.0.7.4, Negotiated control-word: No
    Incoming label: 262156, Outgoing label: 262157
    Local interface: lsi.1048586, Status: Up, Encapsulation: VPLS
      Description: Intf - vpls assylum-nat-1000 local site 5 remote site 4

Bad:

Instance: assylum-sdc-1006
Edge protection: Not-Primary
  Local site: core-mpls5:assylum-sdc-1006:pe5 (5)
  Local site: core-mpls5:assylum-sdc-1006:pe10 (10)
No connections found.

How fortuitous! A VLAN for a dead project I have long since forgotten about, but have failed to rip out of the network layer! This also shows that a local instance can be more than a single site; more on that later….

The next thing you might care about is whether or not these instances are actually moving any real traffic. We have a command for that:

show vpls statistics

Basic counters: bits in, bits out; zeros probably mean bits are not moving. If I get to this point and get zeros from this command, it invariably means I have a typo somewhere, usually a mismatched VLAN.

Since this mess is effectively a giant, virtualized switch, you will probably want to view the MAC address table. If this was a pure L2 environment, this would suffice:

show ethernet-switching table

Sadly, this gets us nothing in a VPLS environment. Instead, you need to type out this handful:

show route forwarding-table family vpls

If all you care about is MAC address entries, be prepared to visually filter a bit. Sometimes you just want to know the answer to a simple question like, “DO YOU FRIGGIN SEE THIS MAC?!?!?!” Plenty of other, occasionally very useful, information is presented here. This is another fine example of something that doesn’t paste well, so here is a text dump.

Some MAC entries have next-hops listed; others do not. In case you haven’t figured it out, this tells you whether the MAC is directly attached, or comes from somewhere else in the VPLS network. Either way, it is kind enough to tell you which interface it came from. This is a fine time to yank a cable out, assuming you constructed a redundant path, and re-check the output of this command. If you have a mesh or redundancy of any sort, MACs will move; otherwise, they will time out and disappear.

Moof

I hate writing wrap ups. We have gone down the MPLS/VPLS rabbit hole a bit further, and have something that is actually usable. In the next piece in this series, we will talk about load balancing, fault tolerance, and using L3 to troubleshoot L2, because directly troubleshooting L2 sucks, believe me, I know. *Mental note: write a post on that subject, link it here. (I’ll never remember that second part.)*