A simple markdown editor that’s actually rather good

A short blog post (it actually started as a tweet, but I couldn't do it justice in 140 characters) to let you know about a new markdown editor that caught my attention this week called Abricotine.

Abricotine 1

Here’s the rub

For those of you who are markdown aficionados, you've probably been on a similar search for the ultimate markdown editor: one that is elegant, frictionless and distraction-free, all while being powerful, feature-rich and endlessly extensible. Yeah, seems like a good way to procrastinate. Of late I've bounced between GitHub's Atom and Microsoft's surprisingly nice Visual Studio Code.

Unfortunately most markdown editors fall into one of two categories:

  • Too cold. An editor that you write in, which can preview your markdown via a keystroke or menu click by opening a second window/pane that renders the text as an HTML page. This is nice as it allows a straightforward writing experience that stays true to what you are typing, but it isn't particularly intuitive to preview the formatting, it causes a jarring switch of focus when you do, and it uses double the screen real estate, leaving your eyes to flit between the two.
  • Too hot. An editor which transforms markdown as you type, replacing the text with the formatted rich text. This saves any need to switch to a different view and seems simple for new markdown users, but the problem is you lose sight of the markdown syntax immediately. This approach doesn't make much sense to me, as one of the primary reasons I write in markdown is its portable nature. Open, save, edit anywhere – it's just a text file, but it retains the bare minimum markup a writer needs while still being mostly readable in its raw format. For example, if I just wanted a quick way to bold a word without leaving the keyboard, I'd learn how to use ctrl+B. I use markdown because it's easier to type than enclosing HTML tags, leaves the text easy to read, and keeps the formatting separate. To me, this is where the new inline editor in WordPress gets it wrong (also, online editors take time to build up trust – Gmail's editor is about the only one I don't worry too much about, and even then, if it's more than a couple of paragraphs I reach for a real text editor). See the following animated gif to see what I mean about “too hot”.
Image courtesy of wpbeginner.com

Just right

I've seen some editors attempt to blend these approaches, although up to now I've found most don't do it well. For example, Visual Studio Code has a strange mix: it provides a rendered split window, but also tries to format things in the text-entry pane, where it's limited by the single font size. It ends up with odd results, such as applying bold and italics appropriately, but not strikethrough or underscores. Headings aren't a larger size than the body text, just a different colour – mix this with f.lux when writing in the evening and you'll think things are very odd.

This is where Abricotine seems to get the balance right. It lets you see the formatting syntax and the resulting impact inline, without trying to hide or convert the syntax. It probably looks ugly if you're not used to markdown, but for someone who writes that way I think it's a perfect balance.

abricotine icon

For example, inserting images via a URL works really nicely in practice: it shows the image, while still retaining the link correctly in pure markdown whenever you move the cursor over that line or copy the text somewhere (click on the image below to see it larger – normal on the left, focus on the right).

Abricotine 2

Open, lots of flavours and potential

There are lots of other markdown editors out there, but unlike most of the new high-quality options Abricotine is open source (GPLv3), cross-platform (Windows, OS X and Linux) and developed in the open on its GitHub site.

Suggestions

There are a few minor rough edges and feature requests I discovered during my initial trials (based on build v0.2.2). This shouldn’t detract from how useful I’m already finding this tool, but I thought I’d list them here in case anyone from the project is listening – yes, I know I should file these on their GitHub site 😉

  • Spell-check underlining works, but no suggestions are provided on right-click.
  • No recently opened files list.
  • I’d like to see tabbed documents.
  • Some preferences are only changeable via the config.json file.
  • Inserting a table via a menu is great (typing/inserting separators can be tedious in markdown), but the static menu options for this are presently limiting.
  • There’s a nice link to quickly view the page in your browser – it would be great to have some easily editable CSS options for its format (and the export function): https://github.com/brrd/Abricotine/issues/28.
  • I’d like to see the “save as” option suggest “.md” as the file’s extension.
  • The help’s homepage link still points directly to the GitHub project, not their new website.
  • The “Copy as html” option is an awesome idea for quickly grabbing sections of text, but it seems to do a regular copy at the moment.
  • Word count would be a nice addition.
  • Support for PDF exporting via the Pandoc library would be great.
  • The ability to go full screen for distraction-free writing is super nice, and I like the default limit on characters per line, but this should probably be adjustable.
  • Support for Grammarly/Hemingway type functionality, or ability to seamlessly integrate with either/both would be great. Not sure if they have public APIs for this.

The intermission bell is now ringing

After quite the extended hiatus, I'm back on the blogging train. Over the last 18 months, I've been super occupied at Coho Data as their Technical Product Manager, participated in (and trained for) some ultra trail running events, and generally been busy with life – unfortunately, as a result, I let my blog posting suffer. Anyway, enough excuses. Just to let you know that I intend to get back to churning out irregularly scheduled blogs, with a few interesting projects that have been percolating on the back burner long enough.

intermission

Over the last couple of months, I've actually been making small tweaks to this site – moving to a newer theme, doing some optimization work, and generally getting things ready. I've also been blogging a little over on Coho Data's site: http://www.cohodata.com/blog/author/forbes/

During the break, I renewed my VMware certification status last summer by completing version 6 VCPs for both DCV (Data Center Virtualization) and NV (NSX). As soon as the v6 VCAP-DCV Admin exam (now called “Deploy”) is available I'll be upgrading and grabbing myself a VCIX6 if I can – no doubt I'll publish some articles about what I learn during that journey. As a Product Manager these days it can be tough to keep my hands-on skills relevant and not too dusty, but I think the VCAP-Admin exams are a good *forcing function*. I completed both the Design and Admin VCAP exams for v5, so I was glad when VMware relaxed the rules for VCIX and allowed candidates to elect to take either option for the upgrade (see the comments in this thread: http://blogs.vmware.com/education/2015/03/migration-paths-v5-certification-v6.html#comment-20114). Great to see a large company like VMware listening to their customers on this one.

Also, last week I was delighted (if not a little surprised) to be designated a vExpert again. Personally, I feel this was somewhat of an emeritus designation, as during 2015 there wasn't as much going on (at least publicly on this site) on the virtualization advocacy side. Regardless, I'm delighted to stay on the list. I find access to the NFR licenses from VMware and their partners, and access to the VMworld sessions, invaluable for creating blog content and keeping skills honed in my home lab. I will endeavour to use the vExpert badge with greater gusto this year.

I may also write up one or two running articles this year. This is not something I've done before, and obviously it's not aimed at my normal audience, but it might prove to be an interesting diversion. Don't get me wrong, I'm not a good runner. But I enjoy the challenge of these longer distance trots, and I know I prefer reading about the tribulations of the common runner, as they're usually more entertaining and instructive than pieces written by athletes who operate in a seemingly different world.

I don't want to promise any projects yet, but I hope to have more news on some of them shortly. The one area I spent quite a bit of time on last year, although nothing has surfaced on this site yet, is a refresh of my reference cards. I'm still super keen to get a version 6 card out, and plan to use the power of GitHub to allow greater transparency and collaboration from now on. (Aside: I saw Duncan's announcement this morning, and was delighted that they've decided to share their deep-dive book in a similarly open way – I'll need to take a look at GitBook as an option for accepting contributions.)

To conclude, I wanted to quote something ethereal, noteworthy and markedly considered. So I leave you with these important words:

VMworld 2014

It’s a popular thing to do at this time of year so I thought I’d grab your attention for a few minutes and tell you why I’m super excited for VMworld this year.

This year will be my sixth VMworld US conference. Every year I get a different perspective. Obviously the technologies and products change and evolve, and as the conference matures and grows, it broadens its scope. It's easy to say things like “the talks just aren't as technical as they used to be”, but I think that misses the real value of such a conference. For me the conference's greatest reward has always been the opportunity to meet and chat with my peers: the chance to discuss upcoming cool technologies, the design challenges we're facing, what solutions we're putting together, and how the products and wider markets are changing, growing, diverging and consolidating. As ever, I'm looking forward to talking to as many people as possible.

Sessions

This year Scott Lowe and I will be back with a vSphere Design focused session. We’re looking at how the new software-defined approaches will impact, and can improve your vSphere design. As ever, Scott and I will do our best to get the discussion flowing in the room. This is always a fun session and I’m really looking forward to being there, talking about the design process and how things like SDN and SDS technologies are changing the landscape.

VMworld 2014 session details

There are still some seats left, so add it to your session builder line-up, come over, and join us. I was pretty humbled that Duncan Epping put this session on his must-see list: http://www.yellow-bricks.com/2014/07/25/must-attend-vmworld-sessions-2014/. Thanks Duncan!

I'm also looking forward to the session from Andy Warfield (Coho's CTO), which is a panel with luminaries from Tintri, Pure, Tegile and DataGravity. I know most of the guys on the panel, and it'll be great to watch them discuss where the storage industry is and where it's going. Submit your questions beforehand, head along, and see whose crystal ball provides the most clarity.

Andy Warfield's session

Solution Exchange hall

Another thing I'm excited about this year is the experience of working the Solutions Exchange “on booth duty”. This will be my first year attending VMworld while working for a vendor. One of the great privileges of my job as a Technical Product Manager is the chance I get to chat with users about their infrastructure needs. There isn't a single place on the planet as densely packed with fellow nerds that I can chat with about the cool stuff we do at Coho. VMworld to me has always been about geeking out over awesomesauce technology. The difference this year is that I'm expected, nay employed, to gas with others who share the same passion for smoking-hot tech as me.

Seriously, don't go to this booth. Instead, come to ours – 835!

Come over to the Coho Data booth 835 and tell them Forbes sent you. Chances are I’ll be there and I can show you some of the new stuff we’ll be revealing at the show.

BTW, if you’re a vExpert, sign up for some extra goodies here: http://info.cohodata.com/VMworld-2014_vExperts.html

New Coho Stuff

Which leads me on to the next thing I’m excited about at the show. Andy Warfield has already alluded to a few areas that Coho will be showcasing. We’ll be presenting some sweet previews of things like our OpenStack support and our vCenter Plugin that we’re in the process of building at Coho.

I think the really interesting bits are the early signs of where we're taking Coho Data in the future. We already have a fascinating story around our ability to scale out both performance and capacity, but as you look at some of the new hardware options in the booth you start to understand how different our offering is from other products in the vSphere marketplace. We'll be able to scale performance and capacity independently of each other. We're looking at providing all-flash storage devices that use our granular auto-tiering where SAS flash is the lower performance tier! Outrageous. Imagine being able to add 4U capacity behemoths chock-full of SATA disks totalling close to ½ PB of raw disk, all managed as one device. Like Lego bricks, you add the bits you need to fit your own environment. And when your needs change, you add only the bricks required. That's part of the vision.

Fast, flexible, forever
No, not the American movie franchise about illegal street car racing. Coho Data’s storage.

The second bit to this is something we're calling Cascade. Having a tremendously flexible storage system is cool-an'-all, but we know how complex it can be to figure out what storage is really needed, particularly when you're trying to predict a capacity management strategy for the next 5 years. Capacity itself is usually the easy bit, but understanding your performance needs is like milking unicorn tears. Unless you're the bravest of technical architects, you'll add in a fudge factor of muchos dolares. Cascade aims to figure this out for you. No need to wait until your users are hammering the service desk, complaining that their mega_corp_mission_critical_app is loading like molasses again, or that their VDI desktop now takes longer to boot than it takes them to grab the first cup of morning joe – “whose idea was it to replace my computer with this virtual thing?”. Yep, we've all heard it before. Cascade understands your actual workload and shows you what type of bricks you need to add. It can tell you how much better your life would have been last week during that monthly billing-cycle run that fell over just as you were about to tuck into that delicious double-double animal-style, if you were to add another shelf of this or that. That's where we're going, and you'll see previews of the hardware and the software at VMworld.

Don't let poor storage planning ruin that double-double moment

Come say hi

When I’m not in the booth, I expect I’ll be flying around trying to meet as many folk as possible. There’s a reasonable chance I’ll be in the Hang Space by the bloggers table and the vBrownBag setup cranking out a blog post or catching up on twitter. Get in touch if you want to meet up.

When the conference doors are closed, I still hope to be soaking up as much as possible and talking technology whenever I can. Here’s my plan at the moment:

Saturday night

Community kickoff at Johnny Foley's: http://twtup.com/6878fiv3e9fjrqz

Sunday

Hopefully I’ll be at the vBrownBags/VMunderground opening events in the afternoon: http://blog.vmunderground.com/2014/08/05/opening-acts-2014-panelists-moderators/

Then it's the v0dgeball Charity Tournament, where I'll gallantly display my distinct lack of hand-eye coordination for the Coho team.

Last, but certainly not least, in the evening I'll be hanging out at the VMunderground event (tickets still available!):
http://blog.vmunderground.com/events/12379753175/

Monday night

I’m planning to start off socializing with my brethren from Tegile and SolidFire at their respective Monday night events (looks like I was too slow to grab a Nutanix ticket). I might compete with these guys during the day, but this is one big community where I know several friends that work for these companies. Fortunately VMworld is one of those special occasions where we can leave our nametags at the door and be respectful of each other.

Then off to the main event for the evening, the vFlipCup tournament and tweetup – should be fun.
http://vflipcup2014.splashthat.com/

Tuesday night

It’s always a crazy busy evening on the Tuesday. First, the vExpert/VCDX party to meet some community rockstars.
Then on to VMware’s Office of the CTO party in the vaults of the Old Mint Building. This event was hosted here last year and what an amazing place for a soiree. If you get invited you want to make sure you go.
After that I'll pop over to meet my Veeam buddies (a new venue this year), and finally to the vBacon session down at the Ferry Building for some hard-earned bacon refreshment.

Wednesday night

I dunno. Is there a party on Wednesday night? 😉

P.S.

Although I won't be there, I want to give a shout out to Crystal Lowe's Spousetivities event. If your spouse, significant other, or buddy who's dossing in your conference hotel room while you're geeking it up during the day is in town, then maybe they'd like to hang out with other hangers-on. It looks like another great line-up:
http://www.eventbrite.com/e/vmworld-2014-spousetivities-san-francisco-tickets-12198817993

Deploy CoreOS into your ESXi lab


I've been reading about, poking, prodding and playing with CoreOS recently and thought it would be good to document how to build a very basic clustered CoreOS setup in your lab. In this first post I'll describe the CoreOS building blocks and show how to deploy an instance into your vSphere environment.

What the heck is this newfangled CoreOS thingy?

CoreOS is a relatively new Linux distribution. I know, there are a gazillion Linux distros out there. We need another Linux distro like a hole in the head. So what's so special about CoreOS? Well, for lots of good reasons it's becoming the darling of cloud distributions at the moment. Just like the container platform Docker, which is currently creating so much buzz, CoreOS is making waves as the basis of many cloudy infrastructures (due in no small part to the fact that CoreOS runs Docker apps exceptionally well). It's oh so de rigueur, and I know the VMware community can't get enough of playing with the latest-and-greatest lab tools.

This isn't your father's Oldsmobile

This new Linux platform is a stripped-down distribution based on Google's ChromeOS toolchain. It can run as a VM (or on bare metal) on your own hardware, and is also available via most of the popular cloud providers. Its initial disk footprint is around 400MB, it uses less than 200MB of RAM, and it boots extremely quickly. It's basically a Linux kernel using systemd (the same init system already adopted by Fedora and openSUSE, committed for Red Hat's and Debian's next releases, and – begrudgingly – Ubuntu's replacement for their existing Upstart system).

CoreOS uses dual active/passive boot partitions and updates by downloading a complete image into the passive partition, then activating the new version on the next reboot (yes, just like we're used to with ESXi! – except this one downloads itself). That means there's no concept of a package manager doing dependency resolution, just a full image being dumped down. This makes patching easy and quick, and provides a get-out-of-jail-free rollback option if there's a problem.

Applications run within Docker containers, so all the good stuff you’ve heard about Docker is already included, ready to go. And the coolest bit of all this is the inherent clustering architecture. Services and applications can seamlessly and dynamically balance themselves across multiple CoreOS nodes.

Image courtesy of CoreOS

The Secret Sauce

Well, not so secret as it's Open Source. Before I launch into the lab setup, here's a very quick rundown of the most interesting components that make CoreOS special.

etcd

etcd is a daemon that runs on every CoreOS instance, providing a distributed key-value store. It keeps configuration data replicated across a CoreOS cluster, handles things like the election of cluster members, allows new nodes, services and applications to register themselves in the cluster, and deals with failures appropriately. If you're familiar with ZooKeeper or doozerd then you'll know where this sits. In a vSphere world it's analogous to the HA service that runs on each ESXi host.
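
To give you a feel for it before we go any deeper, here's a minimal sketch of talking to the store with the etcdctl client that ships with CoreOS (the key names are just made-up examples for the lab):

etcdctl set /lab/coreos1/role "test-node"   # write a key – it gets replicated across the cluster
etcdctl get /lab/coreos1/role               # read it back from any node in the cluster
etcdctl ls / --recursive                    # walk the whole keyspace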

Image courtesy of CoreOS

Docker

Okay, this isn't unique to CoreOS, but it's so fundamental to the way CoreOS is designed that I thought a 40,000ft description was important. Docker containers are based on LXC (Linux Containers), using the kernel's cgroups to isolate resources. Docker's purpose is to isolate applications, so in that respect you could compare Docker applications to VMware ThinApps: completely self-contained apps running in isolated sandboxes, allowing you to run multiple versions alongside each other. But LXC is more akin to a very thin, super-efficient type 2 hypervisor, as it can provide an entire Linux userspace to each app with namespace isolation. So it's more appropriate to think of Docker containers as very lightweight VMs.

Docker's virtualized containers can start almost instantaneously – no waiting for another OS to boot. Cluster several CoreOS instances together, where the Docker apps can run on any node and configuration information is stored in a distributed service like etcd, and you start to see where this becomes interesting. Imagine an environment where your applications automatically load balance and fail over to redundant, scalable nodes, without application-specific awareness – it's like what we've been doing with VMs across vSphere clusters, but practically removing the guest OS layer and giving the applications the same mobility as VMs.
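
As a quick taste (this is plain Docker rather than anything CoreOS-specific, and nginx is just a convenient example image):

docker run -d --name web -p 80:80 nginx   # pull the nginx image and start it as a detached container
docker ps                                 # list the running containers
docker stop web && docker rm web          # tidy up when you're done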

Image courtesy of CoreOS

Fleet

Fleet is a daemon that takes the individual systemd services running on each CoreOS machine and clusters them across all the nodes. It basically provides a distributed init process manager, and it's what gives the Docker applications their coordinated redundancy and failover services.
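
I'll cover Fleet properly in a later post, but roughly speaking you hand it a regular systemd unit and it decides which machine in the cluster runs it. A minimal sketch (the unit name and the busybox loop are just illustrative):

# hello.service – a normal systemd unit that fleet will schedule somewhere in the cluster
[Unit]
Description=Hello World
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f hello
ExecStart=/usr/bin/docker run --name hello busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
ExecStop=/usr/bin/docker stop hello

Then, from any node:

fleetctl submit hello.service   # register the unit with the cluster
fleetctl start hello.service    # fleet picks a machine and starts it there
fleetctl list-units             # see which node it landed on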

Image courtesy of CoreOS

Whoa, nelly!

It's a fast-moving project with new ways of doing things, so it's taken a bit of trial-and-error testing to get things working. Certain components aren't as well documented as they could be. I expect that much of what I write here will be superseded quickly, so if you're reading this a few months after I publish it then you might want to check around for easier ways to do things.

Time to get started

Time to get dangerous in the lab

Download the latest CoreOS release with the following link: http://beta.release.core-os.net/amd64-usr/current/coreos_production_vmware_insecure.zip

At the time of writing it was only a tiny 176MB download. It's worth noting from the URL that I'm downloading this from the beta channel. In a future post I'll explain the update mechanism and how to switch to a different release channel.
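
If you prefer to grab it from a terminal, something like the following works (unzip it wherever suits you – the zip contains the VM files and the insecure SSH key we'll need later):

wget http://beta.release.core-os.net/amd64-usr/current/coreos_production_vmware_insecure.zip
unzip coreos_production_vmware_insecure.zip -d coreos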

I'm going to be installing this on an ESXi host. The instructions at https://coreos.com/docs/running-coreos/platforms/vmware/ explain how you can download and use VMware's OVF Tool to convert their Fusion/Workstation VM for ESXi use (I'm sure you could also use VMware Converter if you had a copy to hand), but I'm just going to do this by hand using the VMDK file included in the download.

I created a new VM as you would normally do in your vSphere Web Client and selected the following options through the wizard:

  • VM name: coreos1. In a future post I'll explain how to cluster multiple instances together, so it makes sense to append this first VM's name with a number.
  • Guest OS: Linux, Other 2.6.x Linux (64-bit).
  • Compatibility: I selected vSphere 5.0 and above (HW version 8), but you should be able to pick whatever is appropriate for your environment.
  • Hardware: here I removed the existing hard disk and floppy drive and set the memory to 512MB (you could drop the RAM lower, but I wanted some headroom for the additional apps I'll be running).

VM hardware

Note: For this tutorial, you’ll need to initially put the VM on a subnet that has DHCP services enabled.

Next, I copied the “coreos_production_vmware_insecure_image.vmdk” file from the zipped download up to my ESXi host's datastore.
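
If you have SSH enabled on the host, scp is an easy way to do this – the hostname and datastore path below are just placeholders for your own environment:

scp coreos/coreos_production_vmware_insecure_image.vmdk root@esxi01:/vmfs/volumes/datastore1/coreos1/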

Now we have to manually convert the VMDK file. SSH into the ESXi host, change into the newly created coreos1 VM's directory on the datastore, and run the following command:

vmkfstools -i coreos_production_vmware_insecure_image.vmdk coreos1.vmdk -d thin -a lsilogic

convert vmdk

Go back to the newly created VM’s settings to attach the “Existing Hard Drive” and BOOM, hit the power button.

Success

When the boot process first gets to the login screen it might not have initialized the IPv4 address yet. Wait a few seconds and hit Enter in the VM’s console until an IP address is displayed.

Initial boot

Where’s my password?

To log into your newly birthed CoreOS VM, you need to go back to the downloaded zip file and retrieve the “insecure_ssh_key” file. Use the following command to use the key to remotely log in:

ssh -i <path_to_key>/insecure_ssh_key core@<your_coreos1_ip>

ssh in

As this is just my home lab environment I won't replace the default insecure key, but obviously this is something you'll want to fix if you're using this in a production setting.

The End of the Beginning

So we've finished the first step by deploying a single CoreOS VM. By itself it doesn't do anything particularly exciting, but in the forthcoming articles you'll start to see where things become more useful, and get an insight into why CoreOS has become such a tour de force all of a sudden.

A handful of the areas I'd like to cover next are:

  • digging into the update mechanism and changing release channels
  • using the power of Docker to install applications
  • systemd and the mysterious (and largely undocumented) cloud-config
  • etcd’s distributed cluster management service
  • clustering Docker applications using Fleet

Stay tuned.