Pictured: This installation failed...
Craft's help page is woefully unhelpful about manually installing the plugin for Sketch; it reads only "We don't recommend manually installing the Craft plugins for Sketch." without any additional steps. Fortunately, the good people behind InVision made the plugin URL available to reverse engineering :)
- Download craft-sketch.zip and unzip it (double-click)
- Go to ~/Library/Application Support/com.bohemiancoding.sketch3/Plugins and drag in the Panels.sketchplugin
- Go to ~/Library/Application Support/com.bohemiancoding.sketch3/Panels and drag in the Data.sketchpanel, Duplicate.sketchpanel, Library.sketchpanel, and Sync.sketchpanel
- Launch Sketch and verify it has been installed under Sketch plugins.
Enjoy your new plugin!
Review in progress
I finally bit the bullet and plunked down $50 for a year's subscription. I'm a bit data-paranoid: back in early 2001 I lost an IBM Deskstar 75GXP (a 45 GB HD). It's a funny detail that I can remember something so esoteric as a hard drive model, especially considering how many I've owned over the years, but it speaks to the gravity of the loss.
I had a PowerMac G4 at the time, with a set of two 18 GB Western Digital HDs, a 40 GB Maxtor, and my new (fast for the time) 45 GB IBM Deskstar. Back then, I was near the absolute fringe with so much storage. All it took was self-encoding a sizable collection of a few hundred CDs to 320 Kbps MP3s to fill nearly the entire 40 GB. It all came down to a realization I had: data storage would become so abundant that there was no reason to store my music at anything other than the highest bitrate. Thus, I decided I'd eat the cost upfront to save myself regret in the future.
I wasn't the only person to be burned by IBM; a class action lawsuit soon followed, but the damage was done. My 45 GB Deskstar (affectionately dubbed the "Deathstar" by legions of scorned customers) was my boot drive, storing all my most important documents. While I didn't lose my music collection, the data I did lose was irreplaceable: art projects, websites, school work, among other things. My lesson was learned, and data backup became part of my life. My first attempts were burned CDs, followed later by DVDs. Eventually I started using other HDs as manual backups, even Carbon Copy Cloner and a RAID 1+0 setup.
While it might sound like paranoia, I had good reason to fear: my income throughout college and after was always tethered to my web projects. Even my art major, digital arts, depended on a working computer. When Apple debuted Time Machine in 2007, all of my previous habits were abandoned, and it fundamentally changed the way I approached my computer for the better.
So as I write this, I've been using Time Machine for 9 years. Time Machine ranks as absolutely one of the best features Apple has ever added to OS X. If you're not using it, you should be. Time Machine provides a backup repository of your entire HD, including revision history.
Time Machine is damn near magic, but it has a major flaw: it's a local-only backup solution. Unless you have a friend with a beefy internet connection, a VPN, a little OS X know-how, and a willingness to leave a NAS (network attached storage) drive on 24/7, you're limited to backing up only when you're physically at your Time Machine drive's location. It doesn't take much imagination to see how this could be problematic: a catastrophic power surge could ruin all your electronic devices, frying your computer and Time Machine drives alike, or perhaps your house is burglarized, computer, hard drives and all. For these and many more reasons, offsite backup is the holy grail.
So what is Backblaze? It'd be easy to simply call Backblaze a "cloud" Time Machine, but that'd be inaccurate. Backblaze doesn't do version history, and it isn't particularly designed for single-file downloads (although it can be done).
- Offsite backups
- Time Machine level of simplicity
- Backups can be downloaded or shipped to you at no cost on a USB drive (as long as the drive is returned within the grace period)
- BackBlaze has a "find my computer" feature for stolen computers (assuming your drive isn't wiped or replaced)
- Not full backup: OS and applications aren't backed up
- Backups are entirely dependent on internet speed; expect weeks of backing up for drives larger than 500 GB
- No backup prioritization
- No file versioning
- Files stored for 30 days
The 30-day retention for Backblaze is a bit tricky, but basically: if a file has been deleted, Backblaze will stop storing it after 30 days, unlike, say, Time Machine, which will keep the file until it's forced to delete old backups for storage. Clearly this is an overhead check for Backblaze; storing every file indefinitely for each user is likely a very tall order. However, it's also important to understand the implications. If you use Backblaze to back up external drives, they will need to be connected to the computer in question at least once a month while Backblaze is running to reauthorize the index of the files, so don't count on Backblaze to back up data that you only access once in a great while. And if you're going to be away from your data for more than 30 days (say you're banking on hiking the Pacific Crest Trail and stowing your laptop for 3-4 months in a friend's garage), Backblaze may not be for you.
Setting up Backblaze requires two things: signing up for a trial or paid account, and downloading and installing its application. The initial setup takes a fair amount of time; with 6 HDs to sift through, it probably took longer for me than the average user, about a half hour.
The installer, nearly complete
Once installed, Backblaze lives as a control panel in your System Preferences. For those familiar with Time Machine, the options are similar: you can pick the drives/folders you'd like to exclude, but unlike Time Machine you can also specify backup frequency, the max file size to upload, and the upload speed.
Get used to this moving at a snail's pace
Currently I'm only two days in on Backblaze, uploading around 10 GB a day by leaving my Mac Pro on roughly 16 hours a day with a medium cap on upload speed; over the weekends I intend to remove the cap. Backblaze is a passive experience. My intent is to update this review as my impressions evolve.
Currently it appears that the service shoots for small files first; the first 1.1 million files appear to have constituted roughly 20 GB of space. My best guess is that roughly 10,000 files constitute 90% of the space. I have a feeling I'm on the extreme end of who this service is geared towards. Most users are on laptops, most laptops are on SSDs, and very few users probably have drives larger than 1 TB. I'm an outlier: my Mac Pro's boot drive is a 750 GB SSD, and the backup boot drive is a 2 TB hard drive. Backblaze auto-ignored my two Time Machine drives and my Boot Camp HD. I chose to ignore my 3 TB external drive and my other external 2 TB HD. So in short, I'm backing up two drives, since those both store what I'd consider my valuable data. Between more than a decade of shooting photos (RAW, plus several iPhones) and digital music as my hobby, I probably have more irreplaceable data than most users (sans the hardcore videographers). Will I manage to get my first backup done within three months? I'm unsure.
A week later
After a week of roughly 16-hour runs on my 50 Mbps/20 Mbps connection, I've uploaded 200 GB of the 1.9 TB for my initial backup, with roughly 300,000 of the 1,500,000 files uploaded. I noticed the Downloads folder isn't excluded by default; adding it reduced my uploads by about 60 GB, a drop in the bucket, but in all likelihood 3 fewer days of uploading. So far my biggest gripe is that there isn't any prioritization to what data is targeted first. Smallest files to largest seems like a logical strategy, but I'd also like to mark some data as more valuable so it supersedes the base priority, especially after the initial upload. I still have some questions about how daily backups are handled, and what happens if a file changes before a backup is complete. I'm guessing if it's been uploaded, it won't be backed up again until the next batch update.
Is Backblaze worth it?
Considering that Backblaze is cheaper than Amazon's Glacier, Backblaze already looks a bit more sane. However, there are competitors, like iDrive, which is cheaper at $35 a year but limited to 1 TB, and offers multi-computer backup/accounts. There's also SpiderOak, Tresorit, CrashPlan, Carbonite, and SugarSync. Over the coming months, I'll do a base comparison of feature sets and pricing.
Rumors aren't usually part of my blogging, but occasionally I've been prone to rant about Apple. The iPhone 7 rumor mill sparked an unusual amount of interest on my part, not for what it included but for what it didn't include: the 3.5 mm headphone jack. I've always regarded the Lightning port as a senseless money grab despite mostly preferring its form factor. Yesterday that rumor shifted to a slightly more sane position: the headphone jack stays! Rejoice...!?
The silliness of it all is that going Lightning-only creates the insanity that users either must buy Bluetooth headphones or, worse, a dongle for their existing headphones, forgoing the ability to charge and listen to the phone simultaneously. The only advantages Lightning offers are bus power for noise-canceling headphones, which can already be attained without sacrificing the 3.5mm headphone jack, and a thinner phone that's even more prone to bending. It's the same asinine behavior that led to the new MacBook featuring only 1 USB port, requiring pretty much all users to purchase a $79 dongle for charging/video out, and the same insanity that led to the Mac Pro being mostly panned as a flop by actual pro users, yours truly included, and many pundits.
Apple's opinion on ports is clear, and its disdain for modularity is frightening. While I normally agree with famed Apple pundit John Gruber, who argues that headphone jacks are the new floppy drive, his reasoning is flawed, very flawed.
Why would Apple care about headphone compatibility with Android? If Apple gave two shits about port compatibility with Android, iPhones would have Micro-USB ports. In 1998 people used floppy drives extensively for sneaker-netting files between Macs and PCs. That didn’t stop Apple from dropping it.
I remember 1998 too, perhaps more vividly: everyone had Zip drives, and applications came on CD-ROM in big funky boxes. The floppy was already on its deathbed, as 1.4 MB was simply too small; the only things that came on floppies in 1998 for Mac users were drivers, which could easily be moved to optical media. Everyone was asking for something better by then, hence why the PowerMac G3 450 that I bought in 1999 had one internally. While the iMacs did kill the floppy, they had USB, ethernet, a built-in modem, and a CD drive, and it wasn't long before they had CD-RW/DVD drives.
By the time the MacBook Air eschewed the optical drive, anyone looking to transfer files had the internet, USB drives, and networks, all faster and offering exponentially more storage than optical media. Much like the iMac, the MacBook Airs even added Thunderbolt ports, a much welcomed addition. Even by the mid-2000s, the internet had become the preferred means of software distribution.
If anything, the 3.5mm headphone jack has hit a renaissance. My computers all have them (my media PC and Mac Pro have several). My Lightning port dock has two. My old iPod has one. My 2013 car has one. My PS4 controllers each have one. My iPad has one. My USB-connected speakers, Vanatoo Transparent Ones, still have one. My old Klipsch ProMedias have both an input and a nice front-facing jack. My Numark NS7 has one (alongside its 1/4 inch), and I'm not even counting the test bed of devices I have at work for development. Ten years ago, my car did not have a 3.5mm jack, my game consoles didn't have them built in, and neither did my Motorola Razr, and that's not counting the items listed above that simply didn't exist in 2006.
Outside a few fanboys, no one is asking for a replacement for the 3.5mm jack.
Apple switching to the Lightning port makes headphones incompatible even with its own Macs. I have a 2015 MacBook Retina on my desk as I type these very words, and I can most surely assert that it does not have a Lightning port but does have a 3.5mm jack. Same goes for my 2008 Mac Pro. On my desk at work I have two pairs of $200+ headphones that at best are going to require a dongle and at worst will not be usable with iOS.
The craziest part is we're in a world where we can have our cake and eat it too: Lightning headphones, Bluetooth headphones, and 3.5mm can all continue to coexist. The Verge's Nilay Patel is 100% correct that taking the headphone jack off phones is user-hostile and stupid, and I will not be buying an iPhone 7 if it doesn't have a 3.5mm jack.
Gruber followed up with the following quote:
Removing the analog headphone jack is inevitable, and the transition is inevitably irritating. This is what makes Apple different. They will initiate a painful transition for a long-term gain. Other companies will avoid inducing pain at all costs — and you wind up using VGA until the mid-2010s.
This analogy is only clever if you do not understand the inherent physics of the problem. Video ≠ audio. Unlike, say, HDMI vs VGA, which offers an inherently better picture, Lightning cables do not.
Video signal transfer isn't powering a transducer. No matter the signal chain, if you want to hear audio you're inevitably going to convert it to an analog waveform via a DAC, into an amplifier, then to a transducer. All a Lightning cable does is delay the conversion. If you've ever wondered why the staying power of analog has been so strong, it's simply the physics. Using a Lightning cable solves nothing, and it places the DAC/amp outside of something the phone already provides, which in turn equates to expensive dongles or expensive headphones. For home theaters, we use a centralized receiver that takes digital inputs like HDMI, S/PDIF (Toslink or coax), Bluetooth/WiFi, and USB, then decodes/converts the signals to analog to be amplified and transmitted, rather than sticking a DAC/amplifier in each individual speaker.
Taking this stance isn't standing in the way of progress; it's actually arguing for progress. The 3.5mm jack is an open standard that's virtually future-proof, almost universal until we do away with transducers as we know them. If it were truly about thinness, we'd have a simple plug adapter from 3.5mm to a thinner variant, as we do from 1/4 inch to 3.5mm. This isn't progress; this is shackling us to a closed standard that Apple can tax.
Pictured: Scarlett features a stylish brushed red aluminum finish.
I have a bit of a history reviewing audio hardware, specifically audio I/O. Over time, the audio interface has moved away from PCIe to USB, where it has rested as the de facto standard for nearly 15 years, ever since USB 2.0 became widespread. I've owned a few external boxes over the course of a decade: briefly, M-Audio's precursor unit to today's Fast Track (which I returned), a Yamaha GO46 FireWire, and a Native Instruments Audio Kontrol, and I recorded two albums using the latter two. I consider myself a bit of an audio geek, but without the audiophile trappings.
Recently I hit a breaking point: my NI Audio Kontrol was no longer able to accept 1/4 inch unbalanced cables. Mystified, I decided it was time to retire the Audio Kontrol and check out the offerings in 2016. Unsurprisingly, audio interfaces offer far more bang for the buck than they did even 5 years ago; at $180 I was able to score the Focusrite Scarlett 6i6, offering more high quality inputs and outputs than any of my previous devices at a lower price point. Even more impressive, for $240 the 18i8 offers a whopping 18 potential inputs and 8 output buses.
The weak point of every USB capture device, in my experience, has been and probably always will be drivers (and USB itself). As an OS X (excuse me, macOS) user, my experience with CoreAudio has been mostly positive. For most USB devices that are ASIO/CoreAudio compliant, drivers are barely needed for basic I/O. However, if the interface has custom buttons, internal routing, or other features, then drivers are required. In the case of my Audio Kontrol, the drivers were actually mostly a negative, causing glitchy behavior, and the same went for my week with the M-Audio Fast Track. After dealing with years of prosumer-ish solutions, I decided to ante up to Focusrite, renowned for their preamps, skipping budget players like PreSonus and M-Audio.
Fair warning, this is as much an overview of digital audio as a review. Now onto the review.
FocusRite Scarlett 6i6
Pictured: The 6i6 makes for a good speaker rest
The Scarlett 6i6 is 6 in and 6 out, but that doesn't quite accurately sum up the ports. A breakdown includes the following:
Inputs
- 2 front-facing microphone XLR / 1/4inch line inputs with hardware knobs for gain control and level monitoring (supports 48V phantom power)
- 2 1/4inch Line Inputs
- 1 stereo S/PDIF input
- Midi in
Outputs
- 2 1/4inch headphone outputs with hardware volume knobs
- 2 1/4inch line (monitor) outputs with volume knob
- 2 additional line outputs
- 1 stereo S/PDIF output
- Midi out
If you notice, this doesn't add up to the 6 outputs in the device name, but instead a total of 6 inputs and 10 output jacks. The reasoning is that the headphones/monitors all share the same audio bus, bringing it back down to 6 output buses: one for the monitors (speakers/amp plus the two headphone jacks), an additional set of 1/4 inch outputs, and the S/PDIF output. Each of the headphone jacks and the monitors have independent volume controls, but any audio routed to the monitor bus will be output to all three. Also notable: the Scarlett only accepts 4 analog channels in. Most users probably won't use the S/PDIF I/O (more on that later). The full tech specs can be found here.
Focusrite surprisingly ships the Scarletts with a host of wall adapters for your country of choice; being firmly rooted in North America, I had to swap to the North American standard prongs. Other than that, the Scarlett is pretty straightforward: USB cable to the computer, AC adapter to the wall, audio inputs into the device. For me, this meant plugging my Numark NS7 into the back ports plus a single mic.
Pictured: The mess of cabling...
With digital audio, there's always (as of this writing) buffering, which requires introducing latency. No matter the device, there will be latency depending on the buffer size. The math for calculating the minimum latency is quite simple: buffer size / sample rate (in kHz) = latency in milliseconds.
512 samples/44.1 kHz = 11.6 ms
384 samples/44.1 kHz = 8.7 ms
512 samples/96 kHz = 5.3 ms
384 samples/96 kHz = 4 ms
However, this is only the absolute minimum for ONE direction, and lowering the buffer puts more stress on the CPU to ensure the buffer is never fully depleted. This becomes tougher to accomplish as the CPU is tasked with processing more information, such as more FX and more tracks. Total travel times for buffering would look like the following:
(in) 512 samples/44.1 kHz = 11.6 ms + (out) 512 samples/44.1 kHz = 11.6 ms = 23.2 ms minimum roundtrip travel time
(in) 384 samples/44.1 kHz = 8.7 ms + (out) 384 samples/44.1 kHz = 8.7 ms = 17.4 ms minimum roundtrip travel time
(in) 512 samples/96 kHz = 5.3 ms + (out) 512 samples/96 kHz = 5.3 ms = 10.6ms minimum roundtrip travel time
(in) 384 samples/96 kHz = 4 ms + (out) 384 samples/96 kHz = 4 ms = 8ms minimum roundtrip travel time
The math above also represents the absolute minimum travel time for external audio to go from an input and be routed to an audio output. As stated, this is the absolute minimum: the audio travels over USB, whose clock timer fires at 1 ms intervals, so there's added latency that has nothing to do with audio samples but rather the continuous data flow imposed by USB. Lesser devices simply use a buffer of roughly 6 ms in each direction (I/O), which adds more travel time, whereas higher end devices finely tune the USB timing to minimize the delay. Someone using a low-end USB device with 384-sample buffering can expect roughly a 29 ms delay. Higher end boxes such as the Scarlett have fine-tuned drivers to shave off crucial milliseconds of USB buffering, and also include onboard DSP so onboard mixing can lower the travel-time delay. If this all sounds a bit confusing, it isn't as bad as it seems.
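To make the arithmetic concrete, the buffer math above can be sketched as a tiny calculator (the function names are mine, and this deliberately ignores the extra USB-clock overhead discussed above):

```python
# Sketch of the latency math above: buffer size divided by sample rate
# (in kHz) gives one-way latency in milliseconds; a full monitoring path
# passes through an input buffer and an output buffer, doubling it.
def one_way_latency_ms(buffer_samples, sample_rate_hz):
    return buffer_samples / (sample_rate_hz / 1000.0)

def round_trip_latency_ms(buffer_samples, sample_rate_hz):
    # Input buffer + output buffer, excluding USB clock overhead.
    return 2 * one_way_latency_ms(buffer_samples, sample_rate_hz)

for buf, rate in [(512, 44100), (384, 44100), (512, 96000), (384, 96000)]:
    print(f"{buf} samples @ {rate / 1000:g} kHz: "
          f"{one_way_latency_ms(buf, rate):.1f} ms one-way, "
          f"{round_trip_latency_ms(buf, rate):.1f} ms round trip")
```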
I would like to route my mic input directly into my output so I can monitor my inputs without having to route the audio to my computer, into the DAW, then back out over USB, all of which introduces a time delay, hence latency. Doing this skips the travel time through the ASIO buffer and USB clock. The benefit is that I effectively have zero latency for my input monitoring; the downside is that I cannot apply any effects in realtime from my DAW.
Higher end audio interfaces include DSP effects that can be controlled via the software mixer, so basic compression/EQ/reverb/delay can be applied to live monitoring, and/or use other interfaces (FireWire has slightly better clock timing, and Thunderbolt provides even lower latency thanks to the PCIe bus clock).
All in all, the big step up from a prosumer audio interface to the Scarlett line boils down to slightly better drivers and internal mixing.
Performance: The bits of it all
24 bit is nearly a meaningless stat when it comes to audio gear; there are measures that more appropriately reflect dynamic range, but to fully understand them, we have to talk analog and math.
While I may get flack for saying this, despite issues like latency, digital has a massive leg up over its analog predecessors, not simply from an archiving/storage perspective but also in quality. The much loved vinyl format hits roughly 80 dB of signal-to-noise ratio, meaning the signal power sits roughly 80 dB above the noise floor, which isn't bad. Digital, however, doesn't have an analog noise floor, and sound pressure levels are expressed via bit depth, which is the number of discrete steps available to the digital-to-analog converter (DAC).
To use an analogy I developed that works reasonably well when writing for an audio publication: bit depth is akin to bit depth in digital imagery; instead of reflecting how many colors an image can have, it reflects how many steps in volume. Sample rate is the resolution at which the sound is captured. What becomes interesting is that there's a formula that explicitly tells you the maximum dynamic range in decibels for any given bit depth. Using the signal-to-quantization-noise ratio formula, 20*log10(2^BITDEPTH-1), we can calculate the signal to noise ratio. 16 bit audio has a theoretical range of 96.33 dB, which is considerably better than vinyl and on par with the best studio reel-to-reel systems. It's also important to understand that these values represent a theoretical maximum, as analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) rarely achieve their maximums. 24 bit audio has a theoretical range of 144.49 dB, far beyond even the best hardware currently on the market. Below I made a simple calculator to play with.
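The same formula can be sketched in a few lines of Python (a direct transcription of the SQNR formula above):

```python
# Theoretical dynamic range for a given bit depth, per the
# signal-to-quantization-noise formula: 20 * log10(2^bits - 1)
import math

def dynamic_range_db(bit_depth):
    return 20 * math.log10(2 ** bit_depth - 1)

for bits in (16, 18, 24):
    print(f"{bits}-bit: {dynamic_range_db(bits):.2f} dB theoretical maximum")
```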
The Focusrite features 109 dB of dynamic range on its inputs and outputs, which is a little more than 18 bits of depth. For the computer savvy, 18 bit = 2^18, which is effectively 262,144 sound pressure levels vs 16 bit's 65,536, or 4x more detail. Focusrite isn't being deceitful listing 24 bit, but rather dealing with the limitations of audio production. Also notable for a reference point: the theoretical maximum volume reproduction of 24 bit would span silence to a NASA rocket launch (140 dB); arena rock concerts are known to be in excess of 120 dB. It's not realistic to use the entire dynamic range of 24 bit, and your neighbors would not approve if you could.
If I haven't talked sampling rates yet, there's a reason: by most accounts, bit depth matters more than sampling rate after a certain point. 44.1 kHz can reproduce 0 Hz-22 kHz. Capturing at 96 kHz may actually reduce sound quality if your target format is 44.1 kHz, through aliasing noise. The best way to imagine this is a photo: scale it proportionally by half and the image remains clear, whereas scaling to, say, 45.9% of the size sacrifices some clarity. The reason this isn't a big problem in applications like Photoshop is resampling (scaling) algorithms. The same principle applies to audio, as the waveform must be recomputed and resampled, creating what is known as aliasing. Bit depth downconversion uses dithering, which is a lot more predictable as it's a numeric reduction in values where a range is compressed. Depending on your target format (48 kHz for movies, usually 44.1 kHz for music), capturing at 2x the target's sampling rate is preferred. The Scarlett can capture at 88.2 kHz, but the advantages of higher sampling rates are less obvious since DACs have become quite good over the years at filling in the gaps, so to speak. What high resolution can do is capture sounds above human hearing and more accurately articulate the effects of things like phasing. It's not night and day; honestly, I'm mostly hard pressed to tell the difference, as are a lot of people. However, audio processing does better with denser data, and the real advantage exists almost entirely in the DAW.
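The integer-vs-fractional ratio point can be illustrated with a quick check (sample rates from the discussion above; `resample_ratio` is just an illustrative helper):

```python
# Why capturing at 2x the target rate helps: 88.2 kHz -> 44.1 kHz is an
# exact 2:1 decimation, while 96 kHz -> 44.1 kHz reduces to 320:147,
# forcing fractional resampling (interpolating new sample positions,
# a source of aliasing artifacts).
from fractions import Fraction

def resample_ratio(capture_hz, target_hz):
    return Fraction(capture_hz, target_hz)

print(resample_ratio(88200, 44100))  # integer ratio: simple decimation
print(resample_ratio(96000, 44100))  # fractional ratio: must interpolate
```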
Since I touched on analog vs digital, I figure I'll put in a quip on the long-standing debate. Most of analog's love has less to do with superior quality than with the characteristics left by various mediums' limitations. It should also be pointed out that analog effects like harmonic distortion from tube amplification and over-saturation from tape can be, and are, captured by digital when recording from analog sources. For audiophiles, much of the desire is to recreate how music "used to sound," hence the love of vintage hardware. There's nothing inherently wrong with this, except that it often shapes audio debate in non-quantifiable terms and often leads to absurd claims about analog vs digital. Adding to the mess is the shift in recording techniques, mixing, and mastering over time, which also drastically alters the sound of a recording.
Lastly, for recording and listening intents and purposes, digital exists in tandem with analog. In any digital audio path, the signal must be converted into analog electrical modulations to be fed into a transducer (speaker), or start as analog from a transducer (microphone) and have the analog signal converted to digital, so the devices that perform this conversion are very important. In short, as it relates to this review, the Focusrite Scarlett delivers professional quality at an absurdly low price point, and it's a wonderful time to be a hobbyist, as digital solutions are cheap and extremely high quality. Focusrite isn't the only player making low-cost/high-quality computer audio interfaces, but it has one of the more attractive packages.
The Real world
At this price point, the Focusrite is well specced. The 2nd generation due out this month gives a modest bump: mostly more headroom on the analog ports, 192 kHz capture/playback, and analog protection circuitry for unexpected power surges, all welcome features but not game changing. The 1st gen can be had for $180, a nice $70 price reduction, making it a lot of bang for the buck.
Pictured: Scarlett Mixing Software
The Scarlett drivers are straightforward, and although the device can be used without them, you'll miss out on the onboard mixing. After installing the drivers, I rebooted and launched the mixer, which immediately updated the firmware of the 6i6; it took mere seconds. No word on what the firmware update did, but googling revealed that it improved sample rate switching for OS X (macOS) users and enables standalone mode, so the device can continue routing audio even if the computer isn't turned on. Very cool.
The software mixer is straightforward, with handy routing presets and input gain control, which is useful for the inputs that don't have hardware controls. Any configuration can be saved as a snapshot and instantly reloaded, likely more useful on the Scarletts featuring more inputs and outputs, but still welcome. Several of the mixer elements also control hardware switches on the device, like input gain or line level vs instrument for the front-facing inputs. This is a gift and a curse: the hardware is small and compact, but it's entirely driver and support dependent for device settings, whereas on previous devices I've owned, line vs instrument gain control was on the hardware itself. Even with bad drivers, the Audio Kontrol functioned as a simple USB input/output device regardless. With any luck, support will be long-lived.
Everyone's home studio will look a little different so to give users a chance to contrast and compare, my current setup is as follows:
Computer: Mac Pro 2008 with oodles of upgrades
Monitoring: Vanatoo Transparent Ones with a MartinLogan Dynamo 300 subwoofer, Beyerdynamic DT-990 headphones
Inputs: Numark NS7 Numark Performance Controller (motorized turn table controller with audio output), various Microphones
Midi: Native Instruments Maschine, Korg Padkontrol, Korg Microkey 37, Korg NanoPad, Korg nanoKontrol (all USB)
DAW: Cubase, Logic, Maschine
My mini studio is very hip hop centric, mostly focused around beat composition. It's not space intensive, uses only a modest amount of hardware, and I don't have any real plans of expanding much beyond it. The only real upgrade is probably replacing my Shure SM7b with something a little more forgiving for a wider range of vocals.
Out of the gate, I was already happy simply to be able to listen on my headphones or speakers without having to change my audio settings in OS X, even if that was only an option-click away. Swapping between the two is as simple as turning the volume up. This may not seem like a big deal, but for all the love Vanatoo gets, their speakers annoyingly do not have a front-facing volume knob. Also, the headphone amp, while some audiophiles scoff at it, is without a doubt reasonably better than the Mac's internal headphone jack I had been reduced to using. At least the Mac Pro's headphone jack isn't pummeled with white noise like the MacBooks'. If nothing else: more accessible volume knobs and better sound via headphones. I was previously debating a headphone amp for my power-hungry DT-990s, but they sound better than before, and as good as I recall them sounding when I used a mid-range Denon receiver as my main headphone amp.
The big hiccup came when trying to get audio to work in Cubase; part of it was user error, as I could not get audio to output for the life of me in Cubase, and only Cubase. In a moment of inspiration I realized that my ports might not be labeled correctly in the VST panel, and noticed that it had carried over a setting disabling two outputs. Cubase then started showing volume meters for sound but still refused to actually output audio. At this point I resorted to a classic audio hack for OS X: creating an aggregate device of one in Audio MIDI Setup. For whatever reason, it worked. Annoying? Yes. All other applications functioned normally without this, meaning the issue lies somewhere between Cubase's VST engine and the Scarlett's drivers.
Recording was as easy as ever; there isn't much to say. Identifying buses was a charm, and recording worked great: noiseless, sounding as rich as it should for the instrument inputs on the back. The mic preamps are notably a little nicer than the Audio Kontrol's, if only for the fact that they accept unbalanced cables. Tested with a Sennheiser e935 and no other preamp, the quality was clean and defined, requiring only roughly half gain. Compared to the Audio Kontrol, which wasn't terrible, it seemed just a hair "richer," to use a vague, imprecise term. Audio quality is certainly up to professional standards, at least in my book.
The next plus: for the first time, I was able to use live monitoring. With my Audio Kontrol, it never worked as it was supposed to, so I've always done monitoring via software, which meant delay. Not ideal, but it worked. A quick trip to the mixer control panel and the Scarlett worked as expected: I could play my NS7s regardless of whether I had a track set to monitor. It's a real benefit over the lower-end devices I was using.
After a week in Cubase, there are no noticeable glitches, something Cubase on OS X... macOS... is more prone to than many other audio apps. I'm pleasantly happy with the device.
A slightly different take - S/PDIF
The 6i6 is almost ideal, but the S/PDIF coax ports are nearly useless for most people. For anyone asking, S/PDIF (Sony/Philips Digital Interface Format) is a common format for transmitting PCM audio, or compressed formats such as AC3 (Dolby Digital) and DTS, over 75-ohm coax (RCA) cables or Toslink (optical). Toslink over time became the much preferred flavor, partly for the "cool" factor and partly because optical cables require no shielding: RF noise does not affect light, so the cables are lightweight and small. S/PDIF can be found on many home theater receivers, some standalone CD players, most DVD players, some Blu-ray players and, in the professional world, DAT systems.
S/PDIF is so ubiquitous that my Mac Pro has optical S/PDIF I/O, and most Macs (MacBook Pros, iMacs) can output optical S/PDIF with a specialized mini-Toslink cable. Digital coax is a fading format, limited to DAT and some CD/DVD players. Outside of DAT, most media that use S/PDIF (CDs/DVDs) can be read directly from an optical drive, so S/PDIF is mostly used to transmit audio out from a computer to a receiver or speaker system. I don't have usage stats, but coax S/PDIF strikes me as not very useful. I'd much prefer another set of instrument inputs in its place; a "6i4" with 6 analog inputs would be more useful, and I'm guessing most studio musicians would be in the same boat. At the very least, Toslink would be preferable, as there's a much greater chance someone has a speaker system or receiver that uses it.
The other negative is that I still don't know why Cubase has a problem with the Scarlett. I've used three other boxes over the years and never required any workarounds. It works, but it strikes me as a precarious position: I'm not sure if I'm one DAW or OS update away from it no longer working with Cubase, though as of this writing it does under OS X 10.11.5. As a Mac Cubase user I'm in the minority, and Logic Pro X works fine with it.
- Build quality
- Easy to use device mixer software
- 6 outputs linked to the "Monitor" audio bus alone, meaning two separate headphone amps and external speakers, all with independent volume controls
- Low latency for USB
- Mild driver issues with Cubase; works fine with a workaround
- Coax S/PDIF really could be swapped for more useful ports; it's best to think of this as a 4-input device
The Focusrite Scarlett functions as my headphone amp, alternate route to my speakers, and speaker stand.
I recently purchased a Focusrite Scarlett 6i6 but quickly ran into problems with Cubase 8 on OS X 10.11.5. The device worked with all other audio apps, which made Cubase's VST system the culprit. For whatever reason, getting my Scarlett 6i6 to work required creating an aggregate device, which still mystifies me. Aggregate devices in OS X let you combine multiple audio I/O sources into one virtual device for use in applications, regardless of whether the application in question supports multiple audio I/Os or even something mundane like audio input from device A and playback on device B.
Step 1: In Audio MIDI Setup, create an aggregate device
Step 2: Select the Focusrite; no other inputs/outputs are needed
Step 3: In Cubase, under Devices, select Device Setup and set your device to the newly created aggregate device
Confirm all the ports are enabled and labeled in a sensible fashion
Under Devices, select VST Connections and set the output to the monitor output
Under Inputs in VST Connections, create any buses needed for input and assign the ports as needed.
While I usually don't write much about client-specific work, I was recently hit with a minor problem: slide shows are terrible for accessibility. After scratching my head for a few minutes, I hit on a dead-simple solution: simulate mouse clicks with JS. Check out the CodePen below.
See the Pen Tabindex return fix
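The gist of the approach can be sketched roughly like this (a minimal, hypothetical version of the CodePen; the helper name and key handling are assumptions, not the exact pen):

```javascript
// Hypothetical sketch: make a mouse-only slideshow control usable
// from the keyboard by mapping Enter/Space to a synthetic click.
// Assumes the control has been given tabindex="0" in the markup.
function makeKeyboardClickable(el) {
  el.addEventListener('keydown', function (event) {
    // 13 = Enter, 32 = Space (keyCode used for older browser support)
    if (event.keyCode === 13 || event.keyCode === 32) {
      event.preventDefault(); // keep Space from scrolling the page
      el.click();             // fire the control's existing click handler
    }
  });
}

// In a browser you would apply it to each control, e.g.:
// var arrows = document.querySelectorAll('.slideshow-arrow'); // hypothetical class
// for (var i = 0; i < arrows.length; i++) makeKeyboardClickable(arrows[i]);
```

Because the synthetic click reuses whatever click handler the slideshow already has, no slideshow internals need to change; keyboard users simply tab to the control and press Enter.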
- Step 1: Run npm update from the directory containing your package.json. This should fetch the latest dependencies specified in your package.json file.
- Step 2: If you try to run grunt, you'll likely hit a binding error that reads as follows: "Node Sass could not find a binding for your current environment: OS X 64-bit with Node.js 4.x." Usually this error is followed by the suggestion to rebuild node-sass:
npm rebuild node-sass
- Step 3: If you're still receiving Sass build issues, try updating grunt with npm install -g grunt-cli and re-running the above. You may also need to update Node itself and check the versioning in your package.json.
Now for an official press release:
For Immediate Release: 05/09/16
Sitefinity Logo updated for 2016 under direction of disgruntled front end developer with no connection to Telerik
Portland, Oregon: Today, unbeknownst to Telerik Corp or anyone affiliated with Telerik or Sitefinity, Greg, a front-end web developer with a surly disposition and contempt for French roast coffee, released an updated logo for Sitefinity.
"This is the stuff that haunts my dreams. Who deemed this an acceptable or maintainable design pattern? Is there ever going to be a CMS that isn't some shade of terrible?" Greg ranted on his company's slack channel which elicited zero responses.
The new logo should be used in any instance of the old logo and is free for use for all.
About: This press release is 100% serious and real.
It's one thing to post a fix, and it's another to explain how I arrived at it. As a web developer (and very, very long-time Mac user), my life is debugging, so I have a few more tools to pull from. Hopefully advanced users will be able to follow my logic to arrive at this fix. You can skip ahead to the fix, but this explanation may help you troubleshoot not just Maschine but other apps. I'm not a magician and I don't pretend to be one; with some practice and time, you too can learn to troubleshoot computer programs.
So recently Maschine randomly stopped working. Having a fair amount of technical prowess, I wasn't too worried; I went to Activity Monitor. For those not familiar, OS X bundles an application called Activity Monitor with every install, found in /Applications/Utilities. Activity Monitor is a GUI (graphical user interface) for several UNIX utilities that can also be accessed from the terminal, and it lets you monitor memory, CPU and network usage. As a debugging tool it's a must: you can see if a process is slowing your computer down by over-utilizing CPU cycles, RAM or bandwidth, and force quit tasks that can't be reached from the normal force quit menu.
I fired up Activity Monitor, located Maschine in the list, and double-clicked it to get more info about the process and see which files it had recently accessed. Even to me, most of what appears in log files is a garbled mess of esoteric computer-speak, but I also know there's important information in these logs. Fortunately, files-accessed logs are straightforward. In the log, the last files accessed were plist files pertaining to Maschine's freeze state.
As an OS X veteran, I knew a few things offhand: plist files are preference files that sometimes get corrupted and can usually be deleted without repercussions, as the application in question will simply regenerate them (at worst, some settings may be lost). More importantly, the .saved files relate to OS X's ability to relaunch applications to their last known state. As a rule, freeze states are pretty much always safe to delete; in fact, occasionally you need to dump a bad freeze state. This is common practice on iOS, where users double-tap the home button and swipe up to close a frozen app, since iOS doesn't "quit" apps by default but simply places them into freeze states. Deleting a freeze state simply forces the application in question to fully relaunch.
I started by dumping the saved application state, always an important first step in modern OS X debugging, but it didn't work. Talking to my buddy Justin, he mentioned that Maschine stopped working around the time he changed plugins. So I had a hunch and decided to take a memory sample in Activity Monitor. Memory samples let you peek inside an application to see what it's doing and what information it's accessing at that moment in time. Remember how I said even I don't understand much of what's in a log file? This is one of those times. We know the application is hanging, so something is causing the hang, and we can bet the problem can be "seen": the program will likely try to repeatedly access something, or we may see the last item it tried to read before stalling out.
(the screen caps are clickable for legibility)
Note that during the hang, the memory sample is calling PSP Echo, a popular audio plugin. Maschine, like many audio applications, runs an initial plugin scan to make sure all installed plugins are compatible; incompatible ones are blacklisted so they won't crash the host application. This scan usually runs only when the application detects a change, such as a new plugin install. It sometimes fails, causing the application to crash. Apple's Logic has the ability to detect failed launches and thus rescans its plugins on a failed launch (a somewhat recent innovation, arriving with Logic 8 or 9). Maschine, being a little more limited, doesn't have this ability, so it's up to the user to manually reset the approved-plugin cache. While I couldn't find where Maschine 2 stores its plugin list, the following article, MASCHINE Crashes at Startup (OS X), pointed me in the right direction.
How to fix
Step 1: Go to the following location on your computer:
Users/[your user name]/Library/Application Support/Maschine 2/
Note: You may need to enable your user library folder visibility if you have not done so already.
Step 2: Drag all the files into your trash.
Step 3: Relaunch application.
With any luck you should see something like the message above. Happy beat making and troubleshooting! Remember, Activity Monitor is one of the most important tools in a power user's bag of tricks. OS X is big and complex, but almost nothing happens behind closed doors, which means there's almost always a way to get to the root of a problem.
I'm a big Operator Mono fan. A few months ago I wrote how to set up Operator Mono for Atom, which involves a bit of style sheet hacking. Coda is pretty straightforward, but I realized that after roughly 8 years of owning Coda, I'd never messed with the font formatting.
Step 1: Set the font in Preferences under Editor
Click the editor font and locate Operator; select the weight you're most comfortable with.
Step 2: Setting up the italics character set
One of the best features of Operator Mono is that all its italics are an alternate character set, useful for programming. Coda doesn't pack in a style sheet akin to Atom or Sublime, which is a mixed blessing. It's pretty easy to set up but requires a little more handiwork.
- Click Colors in Preferences
- Within the colors panel scroll area, click on the various code examples and use the bold/italic checkboxes to change your code styles
To mimic Atom's settings and the examples on the Operator Mono website, I recommend italicizing the following: all comments, tags, variables, attribute names, and leave CSS unitalicized. That's it!
While admittedly this isn't the most scientific test, I ran it using the current versions of gulp and grunt with their respective plugins on a rather large project built on a heavily modified version of Bootstrap 3. The end result is roughly 9,700 lines without minification, and a 160k CSS file minified. It's big, but it isn't massive either.
- Total 586ms
- Total 5.3s
- Total 388ms
- Total 4.75s
The configuration looks as follows:
Libsass -> autoprefixer -> minification.
In the grunt task, I have a watch task that triggers libsass, then grunt-autoprefixer, then grunt-contrib-cssmin. For the PostCSS version, I replaced the prefixing/cssmin with grunt-postcss.
For gulp, the task was nearly identical: libsass to gulp-autoprefixer to cssmin. For the PostCSS version, I replaced the prefixing/cssmin with gulp-postcss. The end result is pretty much the same.
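For reference, the gulp side of the pipeline looked roughly like the sketch below. This is a hypothetical reconstruction, not my exact config: the plugin choices (gulp-sass, gulp-autoprefixer, gulp-cssmin) are the common packages for each stage, and the file paths are assumptions.

```javascript
// Sketch of the gulp CSS pipeline: libsass -> autoprefixer -> minification.
var gulp = require('gulp');
var sass = require('gulp-sass');                 // libsass wrapper
var autoprefixer = require('gulp-autoprefixer'); // vendor prefixing
var cssmin = require('gulp-cssmin');             // minification

gulp.task('css', function () {
  return gulp.src('scss/main.scss')              // hypothetical entry point
    .pipe(sass().on('error', sass.logError))     // compile with libsass
    .pipe(autoprefixer())                        // add vendor prefixes
    .pipe(cssmin())                              // minify
    .pipe(gulp.dest('dist/css'));
});

// The PostCSS variant swaps the middle stages for gulp-postcss, e.g.:
// .pipe(postcss([require('autoprefixer'), require('cssnano')]))
```

The grunt version is the same three stages expressed as separate task configs rather than a stream.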
What does this mean?
The takeaway is that the PostCSS equivalents of these modules are considerably slower, but (and I'll use bold to stress this) this does not mean you should not use PostCSS. PostCSS still has some serious potential if you're into eschewing the pre-processor for CSSNext or looking to use CSS Modules. However, unless you need PostCSS, you shouldn't feel obligated to replace currently working tasks with the PostCSS versions.
The PlayStation 4 is quite a capable device, unsurprisingly able to run Linux. I recently bought a PS4 and, in true developer spirit, immediately began poking around the browser. To my knowledge, there's next to zero developer documentation; the best I could find was a single PDF from Sony, which appears to be dated. My goal is to document what's known about the PS4's browser.
My PS4 test setup
Gecko? Mozilla? Netfront?
Google's whatbrowser gives an error.
whatbrowseramiusing.co reads the Gecko-Like User-Agent string.
whatismybrowser.com likely matches UA string by closest match, and returns Mozilla.
whatsmybrowser.org correctly identifies the PS4 as NetFront.
NetFront is a proprietary web browser used on the PlayStation 3, PlayStation Vita, PSP, Nintendo 3DS, Wii U and Kindle e-readers. The original NetFront browser has since been replaced by the WebKit-powered NetFront NX, which appears to power the PS4.
- HTML 4.01, XHTML 1.1, XHTML Basic 1.1, CE-HTML, XML 1.1, RSS feed (RSS 0.9/0.91/0.92/1.0/2.0, Atom 1.0)
- HTML 5 Support: Canvas, Canvas Text, localStorage, sessionStorage, Web Workers, applicationCache, HTML5 Input types (partial) - Notable missing: Geolocation API, HTML5 Input input attributes, picture element, srcset, service workers, web components
- CSS3 (Flexbox, full CSS3 selector support, Media Queries, Animations, 2D/3D Transforms, Transitions, etc.) - Notable missing: multiple background support
- CSS1, CSS2.1
- DOM LEVEL 2
The PS4 passes all of CSS3.info's selectors test
The PS4 scores relatively well on the CSS3 test (Chrome 49.0.2623.112 scores 52%, Safari 9.1 scores 54% and Firefox 45 scores 63%)
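The support gaps above can be spot-checked without a full test suite using simple feature detection. Below is a hypothetical sketch; the function name is mine, and the globals are passed in as parameters so it can be exercised outside a browser:

```javascript
// Probe a few of the HTML5 features noted above. win/nav/doc stand in
// for window, navigator and document so the function runs anywhere.
function detectFeatures(win, nav, doc) {
  return {
    localStorage: typeof win.localStorage !== 'undefined', // supported on PS4
    geolocation: 'geolocation' in nav,                     // missing on PS4
    serviceWorker: 'serviceWorker' in nav,                 // missing on PS4
    srcset: 'srcset' in doc.createElement('img')           // missing on PS4
  };
}

// In the PS4 browser itself:
// detectFeatures(window, navigator, document);
```

Checks like these are how the feature lists above were compiled in the first place: probe for the API object or property rather than sniffing the (misleading) user-agent string.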
- TLS1.2 *no compression
- Configurable digital certificates
- Extended Validation
- Elliptic Curve Cryptography
- No SSL 2/3 support
PS4's webGL error
- Image formats: JPG / GIF / PNG / BMP (32-bit and compressed supported)
- Note: TIFF image format is unsupported (commonly supported in webkit).
- Container: Mp4/HLS
- Codec H.264
- Profile: Baseline/Main/High
- Level: 4.1 or lower
- Resolution: 1920 x 1080 or lower
- Framerate: 60 fps or lower
- Bitrate: 20 Mbps or lower
- Autoplay: supported
- Formats: AAC (LC or HE-AAC v1)
- Channels: 1 channel, 2 channels, 6 channels (AAC-LC only), or 7.1 channels (AAC-LC only)
- Sampling rate: 8000, 11025, 12000, 16000, 22050, 24000, 32000, 44100, or 48000 Hz
- Bitrate: 48 to 3456 kbps
- MP3/WAV/AIFF/AU/MIDI unsupported
- Audio playback using the audio element is not supported.
- Direct links to audio are not supported
- PDF: unsupported
- Downloads: unsupported
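Given the narrow media support above (H.264 in MP4 with AAC audio only), a canPlayType probe is a quick way to check what the browser claims to play before serving a source. A hypothetical sketch; the document object is passed in so the function can be exercised with a stub:

```javascript
// Ask the browser whether it claims to play H.264 + AAC-LC in MP4,
// the only combination the PS4 browser supports per the list above.
// "avc1.42E01E, mp4a.40.2" is the standard codec string for
// H.264 Baseline + AAC-LC.
function claimsMp4H264(doc) {
  var video = doc.createElement('video');
  // canPlayType returns '', 'maybe' or 'probably'
  return video.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"') !== '';
}

// In the PS4 browser: claimsMp4H264(document);
```

Anything the probe rejects (WebM, Ogg, and per the list above any audio-only source) would need a server-side MP4 fallback.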
Sunspider 1.0.2 test: Overall score: 3203 ms using remote login (and a game left open)
The PS4 fails to load html5test.com...
Notably, some previous users have completed the HTML5 tests; you can see the scores here.
The PS4 fails during the Octane test...
Not pictured: Jetstream
The PS4's weakest link appears to be modern JS support; unsurprisingly, it isn't a strong performer. In a very uncontrolled environment, with 3 concurrent browsers, roughly 25 tabs and several apps open, my MacBook Pro scored 157.8 ms in Chrome versus the PS4's 2929 ms (lower is better). An iPhone 6 scores roughly 326.6 ms. The big difference is that Chrome, Safari and Firefox use highly optimized JS engines: V8, Nitro and SpiderMonkey respectively. Other users, however, report much better SunSpider benchmarks clocking in around 1027 ms, which puts the PS4 roughly at iPhone 5 level for JS. The PS4 certainly has room for improvement, but it's unlikely to see massive gains, as I highly doubt most people spend much time in the browser instead of gaming.
More to come...
Stay tuned; I plan to update this over time. Testing the PS4 is tedious, as remote play doesn't allow text input via keyboard.
Planned tests: weinre remote debugging, FireBug Lite, BrowserSync
Anyone with better documentation or more information please feel free to e-mail at: firstname.lastname@example.org. Thanks!
The Obama administration took a seat in the encryption debate, even in light of WhatsApp rolling out end-to-end encryption for its billion users. Worse yet, the leaked Senate bill is soft on security and high on fear. At least there are a few out there fighting the good fight.
It's almost as if there are real-world analogies to draw from.
Palmer went on to clarify what he meant by that blunt statement, saying: “It just boils down to the fact that Apple doesn’t prioritize high-end GPUs. You can buy a $6,000 Mac Pro with the top of the line AMD FirePro D700, and it still doesn’t match our recommended specs. So if they prioritize higher-end GPUs like they used to for a while back in the day, we’d love to support Mac. But right now, there’s just not a single machine out there that supports it.”
The only Apple-built Macs capable of handling the Oculus Rift are Mac Pros that have been out of production for three years... or Hackintoshes. Those of us with either still have dual booting, so there's always that. I've come to the conclusion that the 2013 Mac Pro's trash-can looks are a metaphor for what Apple thinks of professionals.
The "Not Delivered" message eluded me longer than it should have after retiring my 2012 Retina MacBook Pro for a 2015 Retina. I was able to message anyone using an iOS device through Messages but unable to send SMS to non-iOS phones. Here's the fix; it requires both your iPhone and your Mac to be handy.
Confirm Messages has been configured on your Mac in Messages preferences
On your iPhone, go to Settings -> Messages and then to Text Message Forwarding
Add your computer:
A confirmation code will be sent to your Mac in Messages; it may take a few seconds. Be ready to punch in the code on your iPhone.