Cubase - Error Changing Permissions When Installing (for any version of Cubase, Cubase Artist, Cubase Elements) Fix

    I recently ran into a problem trying to install the latest version of Cubase. Below are a few of the errors I encountered:

    Error changing permissions in 0755 in /System/Library/Extensions/AuthenticationSupport.plugin
    Error changing permissions in 0755 in /System/Library/Extensions/AuthenticationSupport.plugin

    Error changing permissions in 0777 in /System/Library/Extensions/AuthenticationSupport.plugin/Contents.plist
    Error changing permissions in 0777 in /System/Library/Extensions/AuthenticationSupport.plugin/Contents.plist

    Error changing permissions in 0777 in /System/Library/Extensions/AuthenticationSupportEnabler.plugin
    Error changing permissions in 0777 in /System/Library/Extensions/AuthenticationSupportEnabler.plugin

    My initial inclination, being a developer, was to open the Terminal and sudo chmod the permissions. If the previous statement doesn't mean anything to you, chmod is a Unix utility that's part of macOS, accessible only from the Terminal, that changes the permissions (editability) of files.
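
    For the curious, the sort of command I tried looked roughly like the following (a sketch; the path is taken from the first error above, and the command is refused while System Integrity Protection is enabled):

        sudo chmod 0755 /System/Library/Extensions/AuthenticationSupport.plugin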

    This, of course, did not work; I had encountered the same problem before when attempting to update the drivers for an nVidia graphics card in my Mac Pro.

    macOS versions after 10.10 feature System Integrity Protection, which prevents various system files from being modified by other software. This is a good idea except when it creates a problem, like trying to upgrade Cubase between versions. In my case, I was upgrading from Cubase Pro 7.5 to Cubase Pro 9.5, but this could happen with Cubase 8, 8.5, or 9 and the other editions of Cubase like Cubase Artist and Cubase Elements.

    The process is as follows: disable System Integrity Protection, install the software, and re-enable System Integrity Protection. The steps are outlined in my nVidia post, but the same instructions are below.

    Step 1

    Verify you have System Integrity Protection enabled. Go to the Terminal (located in Applications/Utilities) and type the following command into the window. This should return "enabled".

        csrutil status
      

    Step 2

    Restart your Mac and hold down Command-R during startup to boot into Recovery mode (alternately, hold the Option key and select the recovery partition). The recovery partition will take longer to boot than normal.

    Step 3

    You should see the macOS installer prompt screen. Ignore it, select Terminal from the Utilities menu, and run the following command.


        csrutil disable
      

    Step 4

    Reboot normally and run the Cubase installer again, even if the rest of the software previously installed successfully.

    Step 5

    Reboot again into Recovery mode and go to the terminal. Run the following to re-enable System Integrity Protection.

        csrutil enable
      

    Now you can reboot normally and start using your software!


    Rise of the backlink spammers

    In the past week or so I've been hit with two separate requests to fix broken links on old blog posts, each four years old or older. The first is a "nice try" from a rather crappy tech blog; Comparitech seems to be a form spammer. Comically, another example I found was aimed at the FreeBSD Pipermail mailing list, an archived post from 2002 about configuring VNC, with the bot suggesting a link to an article explaining the difference between VNC and a VPN.

    Ellen Fisher <ellen@comparitechmail.net>
    3:50 AM (9 hours ago)
    Hi Greg,

    I found a link that isn’t working on one of your pages and thought you’d want to know.

    I landed here - http://blog.greggant.com/posts/2013/10/17/53-mac-only-design-development-utilities-apps.html, and noticed you have a link to the Webgraph Facebook Blocker tool (http://webgraph.com/resources/facebookblocker/) which seems to have been discontinued.

    We have a guide to help people stop Facebook tracking them across the web - SPAM URL removed

    If you are updating your page, perhaps you could point people to our guide instead?

    I hope this helps!

    Thanks,
    Ellen
    -----
    Ellen Fisher
    Comparitech

    Yeah, I'm not going to do that. The guide was so-so, and a bit out of date to boot.

    The second instance is interesting for its persistence: three separate e-mails, spaced out. The link in question was to a website offering a pirated Flash version of Plants vs Zombies. As I do not have Flash installed, I couldn't comment on the quality, but it was likely loaded with advertisements.

    Jessica Bridges <jessycatbridges@gmail.com>

    Mar 15

    to blog
    Hey there,

    Are you able to please update something on your website?

    You were linking to the Plants vs Zombies game on this page of your website - http://blog.greggant.com/posts/page8/
    The link was going to this game - http://www.popcap.com/games/pvz , but I guess since popcap sold PVZ to EA they took the game away....

    Here is a secure working version I found on Google - SPAM URL REMOVED

    Hope it helps! Classic game =)

    ----
    Jessica Bridges
    Digital Artist & Illustrator @ Jess Creative

    The spammer tries to engage again.

    Jessica Bridges <jessycatbridges@gmail.com>
    Hey again,

    I emailed a few days ago about the Plants vs Zombies broken link on your site, wondering if you had the chance to update it yet?

    Don't mean to pester you, just my OCD talking =)

    Best,
    Jess

    Last try...

    Jessica Bridges <jessycatbridges@gmail.com>
    Hey again,

    Last email I promise =) Just wondering if you've received my emails below about the broken link? I don't mean to be a nag, I'm just kind of a nerd for these things =)

    Cheers
    Jess

    My guess is these are bots programmed to trawl the bowels of Google for broken links, then target small websites with "corrected" URLs as a way to gain standing via backlinks and page weight in Google. The Jessica bot is interesting for the follow-ups; my theory is it would have spammed me repeatedly even if I had changed the link.


    Finally, a headphone jack that works for the iPhone 7 / 8 - Incipio OX case Review

    Let's just say I'm not a fan of Apple's decision to remove the headphone jack. Rather than recount my entire rant, the long and short of it is that Apple removed the headphone jack to sell its W1 headphones, knowing the shortcomings of Bluetooth. The W1 headphones provide a better user experience than Bluetooth alone can, and Apple has yet to license the W1 technology outside its own Beats headphones. While iPhone audio isn't "closed," as any Bluetooth headphones will work off the shelf, it has given Apple/Beats headphones an advantage. Any argument for removing the headphone jack has to contend with the reality that Apple is placing a squeeze on third-party headphones, and the headphone jack represented a port that Apple had no way to subjugate. Pundits cheered as the noose tightened.

    Fuze Case vs iPhone
    Pictured: The bulky Fuze case was the first case that offered a headphone jack.

    Since owning the iPhone 7, I've owned several failed products, the most significant letdown being the Fuze, a half-baked product that combined a janky non-MFi headphone jack with a questionable battery case. It was bulky. Worse, it just didn't work well. It didn't support headphone controls or headphones with microphones. The case had to be powered up and down, and if it was out of battery, the headphone port would stop working. The case also occasionally failed to be recognized at all. The company turned out to be a bit of a scam too, closing up shop only to reappear as powerpluscases.com, selling the same crappy case.

    My second try was a Veniveta iPhone 7 case, which was simply a Bluetooth headphone port stuck to a case. Ironically, this half-baked case was far more viable than the Fuze, despite its shortcomings. Again, headphone controls didn't work, the case required independent charging, and its Bluetooth was glitchy, often failing to connect the first time I fired it up. I was able to put up with it because it had the same problems as the Fuze without the bulk, and its crappy performance was at least a bit more reliable.

    Veniveta case

    Pictured: The veniveta lasted about a year before failing to hold a charge.

    Looming forever has been the Incipio OX, a case made by a reputable case maker. Every few months since its announcement, I'd e-mail Incipio about its status. Finally, when I went to check on the mythical case, I found it was shipping, so I ordered. It's somewhat pricey at $69.99, but I used a 15% off coupon I found with a little google-fu, bringing it down to $59.50. The order shipped the day I placed it (with free shipping) and only took three days to arrive via USPS.

    The Review

    OX case - top

    The OX is low profile, akin to the sort of case iPhone users have been used to since the phone's inception: a rubberized, molded plastic case that fits snugly on the iPhone. Unlike the Fuze or the Veniveta, it functions as a protective case, with razor-thin margins that keep the camera lens from protruding beyond the case and a scant millimeter of lip around the screen, protecting the screen when it rests on a surface. It's soft to the touch and reminds me of the official Apple iPhone cases. This will protect your phone and feels as impact resistant as any high-quality low-profile case. It's stylish in the way any case is. Nothing beats the look of an uncased iPhone, but if you're wrapping it up, you won't be visually offended by the Incipio.

    Snapping on the case is pretty simple and requires little effort; it's just a matter of lining up the Lightning port and plugging it in. I was a bit unnerved when I received "Unsupported Device" messages from the case, but I'll get to that in a minute. The volume and power buttons are covered but remain easily accessible and easy to press. Lastly, the case adds a bit of a chin to the iPhone, with two ported sections to project the internal speaker. It's novel, as it makes the iPhone speaker directional and more effective. These are the little things that separate Incipio from Indiegogo would-be case makers.

    After plugging the case in and receiving the "device not supported" message, I was worried. I plugged in my headphones, pressed the play/pause button and... it worked. I then proceeded to plug my phone into my car charger and plug it into my deck. My iPhone was charging AND playing music at the same time. I haven't seen the message on subsequent case fittings, so I'll chalk it up to user error.

    OX case - bottom

    I tested it with multiple sets of headphones (Massdrop x NuForce, Symphonized NRG, Klipsch X11i, Beyerdynamic DT-990 and DT-770) and every last one worked. Pulling out the headphone jack paused the audio as expected. The only minor hiccup is that the jack doesn't seem to keep discrete volume levels by detecting the difference between headphones with inline controls and standard headphones, something iPhones with headphone jacks were able to do.

    The audio quality was also the same as the Apple dongle cables that have haunted me for the past year and a half, and much better than the Fuze, which sounded soft and distant, or the sometimes gravelly cheap Bluetooth of the Veniveta.

    OX Case vs Apple's case
    Pictured: iPhone 7 with the OX case vs iPhone 6 with Apple's case. The OX is slightly thinner.

    Final Thoughts

    It took too long to hit the market but THIS IS THE CASE FOR ANYONE WHO WANTS A HEADPHONE JACK ON THEIR IPHONE. It works, and it works well. It's light, well made, oh and it works. After being burned twice now, I've found new harmony in my life. I'm listening to my earbuds and charging my phone as I type this. It's everything that I've missed from the iPhone 6. I just wish I could have had this case sooner. I haven't had a chance to test it with the iPhone 8, but seeing as the iPhone 8 differs by only about 0.2 mm in thickness, my gut says it'll fit.

    Right now, as far as I know, it only comes in the iPhone 7/8 size and not the Plus. The only other game in town is yet another Indiegogo campaign, this time by Encased for a product called the "AudioMod," another bulky battery case with a headphone jack, advertising versions for the iPhone X and Plus variants. It looks more promising than the faceless brand behind the Fuze. Personally, the Incipio is exactly what I want, as I'm not fond of battery cases, but at least iPhone X and Plus owners can join the party. Here's hoping Incipio continues the OX line.

    Price: $69.99

    Incipio OX


    Added HTTPS for Inaudible Discussion

    It's been on my to-do list, but as out-of-sight, out-of-mind problems go, I hadn't gotten around to it until now. There'll be a day or so of a "self-signed" security error, and after that this blog should be 100% HTTPS friendly.


    On the subject of the Mac Pro 2019...

    "Where I think this whole saga gets very frustrating for a lot of current and potential Mac Pro customers is that Apple is describing a product — a powerful, professional-grade, modular desktop computer — that already exists: it’s the tower-style “cheese grater” Mac Pro. While Apple is working away to reinvent one of the most critical components of a professional user’s workflow, those users are stuck with product choices that may not quite fit." - Nick Heer, Pixel Envy.

    This should be embossed onto the walls of Apple's Professional Workflows HQ. To paraphrase Paul Haddad, just throw some Xeons in a box. This should be the easiest product release in Apple's entire lineup. Pros just want a box that can house multiple storage devices, has PCIe slots and the latest I/O (even Thunderbolt is entirely optional when you have PCIe), and, lastly, is user serviceable. That's really it. They could literally reuse the case from the Power Macintosh 9600 and we wouldn't care.

    Apple envisioned the 2013 Mac Pro as a Mac that could be carted onto the set of a Hollywood-style shoot to edit dailies on the spot with Final Cut Pro X, but conceptualized it in a vacuum. Apple takes the approach that the customer doesn't know what they want; that's true in the consumer market but a massive mistake when you're dealing with professionals. They know exactly what they want.

    If you want evidence of the demand for such a mythical device: search for a 2012 12-core Mac Pro on eBay and try to name another computer that holds its value like that. Many cost more than the current 5K iMac, new from Apple.


    Installing a GeForce GTX 1060 / 1070 / 1080 into a Mac Pro 2010/2012

    Years ago, I posted a guide on how to install a GeForce 760 or 770 into a 2008 Mac Pro, with a fair amount of benchmarks to boot. That card lasted me well over three years and made the jump to a 2010 Mac Pro, but I finally pulled the trigger on a 1060. You can install a 10x0-series card into a 2008 Mac Pro as well, but this guide specifically focuses on the 201x Mac Pros. The main differences between the two are the PCIe power port positions and the annoying PCIe bar hanger latch. Upgrading only took me a few short minutes; the longest part of the process was plugging/unplugging all my connected devices. Hardly any special skills or knowledge are needed.

    Before you get started, there are a few things one should be aware of:

    MSI GeForce GTX 1060 6 GB

    1. Both AMD and nVidia make EFI-compatible graphics cards that will work on OS X. nVidia cards (GeForce 700 through 1000 series) only require installing the web drivers, whereas the Sapphire PULSE Radeon RX 580 8GB is (so far) the only RX 580 that works without any hacking/flashing.
    2. The nVidia drivers currently require 10.12 Sierra or above to use the 1000 series cards.
    3. Neither the nVidia cards nor the AMD RX 580 will allow you to see the EFI boot screen with the card plugged in (the screen with the Apple logo that you see if you hold down the option key). If this is important, I highly recommend keeping an original card around (or a flashed one). I personally use an ATI Radeon HD 2600 XT (so old that it's not AMD) that shipped with my 2008 Mac Pro, since it's modified to be fanless, but any card will do, flashed or factory, as long as it can display the Apple logo on boot. You can operate the computer without a card capable of displaying the EFI boot screen; however, you'll have to manage booting using Startup Disk in OS X and the Boot Camp tools in Windows to switch boot drives, and you will not see any picture until the login screen.
    4. The RX 580 and GTX 1060 are fairly evenly matched in performance, but as of writing this, the 1060 is cheaper since any model will suffice; it also requires less power, and some models are significantly quieter.
    5. Modern graphics cards require additional power cabling, and cards rarely ship with the power cables. You'll need to purchase them separately; also, the Mac Pro requires mini PCIe to PCIe power cables.
    6. Modern GPUs are quite performant (still) on Mac Pros. A 2010 Mac Pro with a GeForce 1080 eats an iMac 5k alive in GPU tests (unsurprisingly).
    7. Not every port on the GPU may work with the nVidia drivers, depending on the card's configuration. In the case of my GeForce GTX 760, all ports worked except one of the DVI ports. As a general rule, count on most but not all ports working, and do diligent research. The best places to check are the MacRumors and TonyMacX86 forums.

    Step 1:

    If you're upgrading from a stock card, you may be unaware that the PCIe bus doesn't deliver enough power, thus additional PCIe power cables are required. The Mac Pro includes two power ports for PCIe power but uses special low-profile cabling often referred to as "mini PCIe".

    The GeForce 1060 / 1070 / 1080 require external power. Also, the 1060 requires an 8-pin power cable, while the Mac Pro's ports are 6-pin, so you'll need a 6-to-8-pin power adapter. I ordered the following: two of the mini PCIe to PCIe power cables (disregard the G5 mislabeling) and a 6-to-8-pin PCIe power adapter, which is much more easily found.

    Cable requirements

    This may differ between card manufacturer, but the following is true for the base models.

    • GTX 1060: 2x mini PCIe to PCI cables, 1x PCIe 6 to 8 pin adapter
    • GTX 1070: 2x mini PCIe to PCI cables, 1x PCIe 6 to 8 pin adapter
    • GTX 1080: 2x mini PCIe to PCI cables, 2x PCIe 6 to 8 pin adapters

    MSI GeForce GTX 1060 6 GB in hand
    Pictured: The MSI GTX 1060 is massive, roughly 11 in x 5.5 in x 1.5 in thanks to the oversized cooler.

    Next, any off-the-shelf GeForce GTX 1060, 1070, or 1080 will do. Personally, I picked up the MSI GTX 1060 Gaming X 6 GB, which is regarded as one of the least noisy cards on the market. With bitty coins wrecking pricing, I just wasn't willing to pay for the 1070. I hope all cryptocurrency fails so we can go back to normal pricing, but I digress. I paid $355, which isn't great, but many GTX 1060 models are going for more.

    Step 2:

    Pre-install the nVidia drivers, especially if you do not have a Mac EFI card. TonyMacX86 has a handy guide to which driver version to use, whether you're on 10.13 High Sierra or 10.12 Sierra.

    Plug in your power cables first! The GeForce 1060 is big; it dwarfs my 760. Fortunately, the Mac Pro 2010 / 2012 ports are much easier to access than in a 2008 Mac Pro.

    Mac Pro 2010 PCIe Power cables with PCIe cards

    The low profile mini PCIe power cables are located in the bottom back of the PCIe chamber.

    Step 3:

    Do the usual: remove the slot thumbscrews, remove/move the old GPU, etc. The Mac Pro 2010/2012s have a very annoying PCIe rail hanger, which requires pressing it forcefully away from the PCIe card to unseat and reseat cards. Use the bottom-most slot, as the card is dual height.

    If you're looking for more information on how to install a PCIe card in a Mac Pro, everymac.com has plenty of information including videos.

     GeForce GTX 1060 6 GB running in Mac OS X Sierra

    Benchmarks

    I haven't spent much time with the card, but I did fire up Tomb Raider (2013) on OS X via Steam. At 2560 x 1440 with all settings maxed (16x anisotropic filtering, etc.), I managed an average frame rate of 57.6 FPS on a 12-core 2.93 GHz 2010 Mac Pro with 32 GB of RAM.

    It's no secret that there's always been a gaming performance gap; macOS sadly scores quite badly compared to its Windows counterpart, so it's only fair to compare Mac to Mac or Windows to Windows, not Mac to Windows, when considering the gains. Rather than benchmarking Windows, which isn't my daily driver, I'm more interested in how the GPU affects macOS. Below are my Unigine 4 benchmarks versus the runs against my 2008 Mac Pro. Despite the low marks compared to running Unigine in Windows, the Mac Pro 2010 is twice as fast in these benchmarks as my previous setup of a 2008 Mac Pro running a GeForce 760. One of the more fascinating things I learned when trying my hand at a Hackintosh was that the 3rd-generation i7-3770K wasn't quite enough to completely best the over-engineered Mac Pro despite having a faster bus/CPU; it merely matched it. If/when I have more time, I may swap the GPUs to see if the scores are as GPU dependent as they seem.

    Unigine Benchmarks

    OpenGL 2560 x 1440 8xAA FullScreen Quality:Ultra Tessellation: Extreme

    Mac Pro 2010 (Xeon X5670 2x 2.93Ghz) + GeForce GTX 1060 + 32 GB RAM + Samsung 840 750 GB SSD

    FPS: 33.2

    Score: 837

    Min FPS: 7.4

    Max FPS: 72.1

     

    Mac Pro 2008 (Xeon E5462 2x 2.8 Ghz) + GeForce GTX 760 + 14 GB RAM + Samsung 840 750 GB SSD

    FPS: 16.1

    Score: 405

    Min FPS: 5.8

    Max FPS: 37.4

     

    Hackintosh (i7 3770k 3.5 GHz) + GeForce GTX 760 + 16 GB RAM + Samsung 840 750 GB SSD

    FPS: 15.7

    Score: 396

    Min FPS: 6.9

    Max FPS: 37.3

     

    Hackintosh (i7 3770k 3.5 GHz) + GeForce GTX 770 + 16 GB RAM + Samsung 840 750 GB SSD

    FPS: 18.8

    Score: 474

    Min FPS: 7.6

    Max FPS: 47.5

    Mac Pro 2010 GeForce 1060 vs eGPU setups

    I used benchmarks provided by a thread on eGPU.io; credit goes to the forum posters for the comparisons. There aren't any perfect comparisons, so here's a run of the GTX 1060 in my Mac Pro 2010 versus a Thunderbolt 3 Mac running the considerably better 1070 and an iMac 2011 running a 1060. Depending on your perspective, either the eGPUs do quite well or the Mac Pro 2010 is still fairly viable. The big difference between eGPU and internal is the minimum FPS.

    OpenGL  1920 x 1080 8xAA FullScreen Quality:Ultra Tessellation: Extreme

    Mac Pro 2010 (Xeon X5670 2x 2.93Ghz) + GeForce GTX 1060 + 32 GB RAM + Samsung 840 750 GB SSD

    Score: 1306

    FPS: 51.5

    Min FPS: 19.3

    Max FPS: 106.5

     

    iMac 2011 27 inch (3.4 GHz) + GTX 1060 6GB

    Score: 1226

    FPS: 48.7

    Min FPS: 8.4

    Max FPS: 96.9

     

    MacBook Pro late 2016 13 inch (2.9 GHz) + MSI GTX 1070 6GB Aero OC

    Score: 1825

    FPS: 72.4

    Min FPS: 9.8

    Max FPS: 138.8

     

    macOS vs Windows

    As previously mentioned, this shouldn't come as any sort of surprise, but Windows 10 gaming is still quite a bit ahead of Apple, although Metal shows promise. As of right now, DX11 is the performance king regardless of your opinion of it. Windows performs a full 12 FPS faster, or about 24% faster, in the same benchmark with the same settings.

    1920 x 1080 8xAA FullScreen Quality: Ultra Tessellation: Extreme

    macOS 10.12.6

    Score: 1306

    FPS: 51.5

    Min FPS: 19.3

    Max FPS: 106.5

     

    Windows 10, 64 bit, Direct 3D 11

    Score: 1609

    FPS: 63.9

    Min FPS: 21.7

    Max FPS: 135.3

     

    I plan to update the benchmarks in time. I may bring in the GeForce 760 for a reference when I have more time and possibly test in a 2008 Mac Pro in the future.

    Troubleshooting

    It's a good idea to keep an EFI card around for the first boot, as you may have to enable the web drivers. Also, I recently encountered the "Mac nVidia web drivers fail to update or cannot remove kext files" error when updating my OS; you'll want to follow the instructions I posted for uninstalling the drivers if this happens to you.
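
    If you want to double-check from the Terminal whether the web driver actually loaded after an update, something like the following works (a sketch; the grep pattern is just a guess at matching nVidia's kext bundle identifiers, and system_profiler simply reports which driver the display is using):

        kextstat | grep -i nvidia
        system_profiler SPDisplaysDataType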

    Final Thoughts

    Upgrading GPUs isn't something I'd normally wax philosophical about, but we're post-golden era for OS X, and the Mac Pro is a relic.

    Ever since nVidia shipped its web drivers, gone are the sketchy days of flashing a 6970 with a ROM creator. Installing off-the-shelf GPUs has gone from tribal knowledge to common knowledge for the Mac Pro user since I wrote my "how to" guide for the 760; I debated even calling this article a "how to." Ironically, it wasn't until Apple killed upgradability that off-the-shelf GPUs could be bought without the infamous Apple tax. The downside is that despite the EFI-compatible ROMs preloaded on the GeForce 700-series and later cards, they sadly can't display the EFI boot screen in OS X. The only game in town is macvidcards.com, which by all accounts on MacRumors is a legit source, but I find the idea of hoarding an EFI hack a little irksome. It's hard to complain too much, as nVidia has quietly kept the Mac Pro and Hackintosh communities happy, myself included. There's no specialized knowledge needed to upgrade your GPU and no abnormal risk of a bad firmware flash. The only caveat is you'll want to keep an EFI card around for major OS updates.

    Upgrading the GPU is probably the second best thing, after an SSD, for making an old Mac Pro feel young if you want to run 4K, use any sort of motion graphics software, play games, etc. It's hard not to recommend the upgrade, as there's a strong case to be made for removable GPUs. A Mac Pro armed with a higher-end GPU will best even the mighty iMac Pro handily in GPU-related benchmarks.

    eGPUs are viable but not as performant; there's simply no topping a PCIe card slot, although we're probably coming to the end of the Mac Pro era if/when Thunderbolt gets an update. Thunderbolt 3 is fast but still has a lot of room for improvement: its 40 gigabits per second (roughly 5 GB/s) is approximately the speed of a PCIe 3.0 4x slot. If/when Thunderbolt gets an upgrade (Thunderbolt 4?), bumping it up two-fold would bring it to roughly PCIe 3.0 8x, or just shy of PCIe 4.0 4x. 8x PCIe currently offers roughly 95-99% of full performance for gaming, even with a GeForce GTX 1080. That said, PCIe 4.0 is coming out very soon, and PCIe 5.0 may be only a year and change out, boosting a 16x slot to a truly mind-boggling 63 GB/s (504 gigabits per second). Thunderbolt won't be catching up to PCIe any time soon, but it could be close enough for practical purposes where consumer GPUs are concerned.

    Also adding to the end of the cheese-grater era is the ever-looming next Mac Pro. The word "modular" has been tossed around quite a bit recently to describe the next iteration. The Mac Pro flames have been stoked yet again by the very curious mention in Bloomberg's rumor-filled article that Apple is said to plan to move from Intel to its own Mac chips. It's highly unlikely Apple has anything in the pipeline that comes even near an i9-class iMac configuration; more likely the next Mac Pro will sport the same Bridge2,1 ARM (A10-class) coprocessor found in the iMac Pro. Also, the new Mac Pro is at least a 2019 product and will be "shaped by workflows."

    The Bridge chipsets allow for some truly unexciting features, like keeping "Hey Siri" always on even when the computer is shut down and/or managing graphical keyboards like the Touch Bar found in the MacBook Pros.

    My gut feeling is that if the iMac Pro is any sort of indicator, the next Mac Pro will be absurdly expensive, and my guess is it'll sport less upgradability than the 2006-2012 "cheese grater" Mac Pros but more than the abysmal 2013 "trash can" Mac Pro. The floating rumors around ARM CPUs seem a step away from modularity and a step closer to iOS-ifying the Mac: annual upgrades, the end of the Hackintosh community, and users locked out of OS upgrades after 5 years. I am not optimistic about the future of the Mac Pro or the Macintosh.

    The Mac Pro has been a bit of an outlier. I used a 2008 Mac Pro for 10 years. When I bought it, I was still on a 3-year upgrade cycle, going from G3 -> G4 -> G5. I used my Mac Pro 2008 longer than all three of those computers combined, and only recently did I replace it with a 2010 Mac Pro. Engineering a computer that can be used viably for 10 years means a significant reduction in computer sales for Apple, and I worry they understand that too well. All for the cash, man...

    For now, Mac users have only three choices: eGPUs, old Mac Pros, and the elusive Hackintosh. Any path will get you serious gains. My guess is the 1000 series is likely the last stop for most cheese grater users, as we're at a crossroads: Thunderbolt is almost fast enough for GPUs (and PCIe enclosures are becoming more popular), and Apple may yet give us a modular computer.

    4/2/18 Update

    Some minor proofing and added in a lot more benchmarks. Kids love benchmarks.

    4/5/18 Update

    Final Thoughts ended up long-winded.


    Mac nVidia Web Drivers fail to update or cannot remove Kext files

    With an nVidia graphics card in a Mac Pro (for those of us who refuse to let go) or one of its PCIe Thunderbolt-enclosure brethren, you're probably used to updating the drivers with every OS X version by now. However, sometimes updating the nVidia drivers will report an installation failure after initially appearing to install correctly, ending with a generic "contact manufacturer" error. This error isn't exactly telling the full story: OS X post-10.10 has a feature called System Integrity Protection, which protects certain system files from being modified even by the root user, stopping malicious installers/rootkits from tampering with macOS. The same protection can also affect files that are no longer used, such as items placed in the "incompatible items" folder; when the user tries to delete them, they'll receive a "can't be modified or deleted because it's required by macOS" error message.

    It's very important to understand that you should only do this for installers from a valid source, such as drivers downloaded directly from nVidia (with its certificate check), or to remove offending drivers or files. After performing the necessary changes, re-enable System Integrity Protection.

    Step 1

    First, to make sure you have System Integrity Protection enabled, go to the Terminal and run

        csrutil status
      

    This should return a status of enabled.

    Step 2

    Restart your Mac and hold down Command-R during startup. This should boot your computer into Recovery mode (alternately, you may be able to hold Option and select the recovery partition). This may take a few minutes to boot.

    Step 3

    Ignore the installer prompt, select Terminal from the Utilities menu, and run:

        csrutil disable
      

    Step 4

    Reboot normally and perform the necessary changes (update the drivers or remove the offending files). Then boot back into Recovery mode as before and run:

        csrutil enable
      

    Reboot. You can now run csrutil status again to verify that System Integrity Protection is re-enabled.
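
    As a rough example of the kind of change Step 4 refers to, deleting a stuck driver bundle from the Terminal (with System Integrity Protection disabled) looks something like this; the kext name here is purely illustrative, so use whichever file the installer actually complained about:

        sudo rm -rf /Library/Extensions/NVDAStartupWeb.kext   # illustrative name only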


    Kite - The Game Release On Steam

    Long-time friend James Treneman published his first game on Steam, Kite. I saw it in its earliest stages; it's a labor of love, a one-man operation, and it's now a full game. It's damn impressive that one person could make a game by himself, and more impressive that it's a full-fledged game harkening back to Smash TV and Zombies Ate My Neighbors, mixing in RPG elements, missions, and pixel art.


    New Old Beginning

    I did something today for the first time in a decade: I ordered a Mac desktop. I've been using my Mac Pro 2008 for one decade, a feat I never realized would be feasible.

    What am I replacing my 2008 Mac Pro with? After evaluating the options, the iMac Pro was just too expensive for my blood given its likely shelf life, and the regular iMac just isn't as beefy as I'd like, especially in the GPU department. I ended up ordering a used 2010 Westmere Mac Pro, 12-core 2.93 GHz. I don't expect to get the same use out of it as my 2008; just a year or two, until we see whether Apple does replace the Mac Pro with a modular computer.

    By the numbers, the 8-year-old Mac Pro 2010 I'll be receiving bests my 2015 2.5 GHz Retina MacBook Pro in most Geekbench scores. It even bests the current round of iMacs (excluding the iMac Pro) in CPU performance. It'll be performant enough to be a media PC/server should I choose to replace it in the coming years. It still strikes me as absurd that 12-core Mac Pros still hover around the $900-1800 mark depending on configuration. If that doesn't show demand, I don't know what does. Apple needs a modular computer for a certain class of users.

    I've spent a fair amount of time blogging about the Mac Pro. The 2006-2012 Mac Pros remain the high-water mark of desktops, the most elegantly designed towers, a refined mix of modularity, ease of access, and raw power. Opening up the guts reveals a (nearly) wire-free world, with an (almost) screwdriver-free experience that made cracking open a Mac Pro easier than even the famed "folding door" design of the G3/G4 tower era. It's this painstaking attention to detail that makes one appreciate the industrial design chops of Apple at its best: features that are only touched a few times over the life of the computer are designed to be pleasant if not downright beautiful. The rare PC case today has a locking door that doesn't require screws; rarer still are cases with sleds for storage. Then there are things that remain unique to the Mac Pro: to this day, PC cases still do not have handles or raised feet, chambered cooling, trays for the CPU/RAM, or cable-free designs. That's not even touching the aesthetics of the garish and utterly unsightly PC cases that still plague (if not make up the entirety of) the market.

    The end of the Mac Pro wasn't a surprise. You could see the tide receding with the rather modest and unimpressive 2012 update, which failed to bring USB 3.0, SATA 3, and Thunderbolt to the desktop arena. The last embers of hope for the mythical creative professional could be seen smoldering with the release of Final Cut Pro X. Laptops have crept into the lives of even the most hell-or-high-water desktop users as they caught up in performance to aging, out-of-date desktops. Perhaps that's what killed the Mac Pro: engineering a computer that could last a decade.


    Bootstrap 4 isn't quite what it's cracked up to be...

    Love it or hate it, bootstrap has been a mainstay of front-end development since 2011. I've watched it grow and now, dare I say, flounder.

    Rather than recount the ups and downs of each generation: Bootstrap 3 was wonderful for its simple flexibility. Most of the time, I whittled Bootstrap down to the bare minimum, often using only its grid (modified with my own breakpoints) and in-name-only classes like .btn, as they're part of the Bootstrap lexicon. On any project, I could rely on Bootstrap-like markup and classes even if the project was largely not Bootstrap. Bootstrap 3's Sass logic was simple and easy, but Bootstrap 4's is silly.

    • Bootstrap 4 now uses Sass includes for breakpoints. Why? I cannot fathom a realistic reason. This is counter-intuitive; everything is include hell (see the sketch after this list).
    • Most of the generative sass logic has been abstracted into mixin hell. It's starting to resemble the clusterfuck that is Foundation.
    • The cross-dependency of Sass isn't predictable. Example: If you comment out forms, it will break nav functionality. There's a lot of senseless overhead.
    • The JS is starting to suffer from bloat. Collapse.js is now 375 lines, up from 212. Unminified, the JavaScript has ballooned from 69k to 163k.
    • Light and dark themes are written into the code in such a way that they're not easily abstracted out.
    • While small, some of the icons are inlined SVG images, which means removing them if custom icons are used; more senseless payload.
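
    To illustrate the first point above, here's roughly what the breakpoint change looks like in practice (a sketch from memory, so treat the exact variable and mixin names as approximate, and the .sidebar rule as a made-up example):

        // Bootstrap 3: a plain media query built on variables
        @media (min-width: $screen-md-min) {
          .sidebar { display: block; }
        }

        // Bootstrap 4: the same thing routed through a mixin include
        @include media-breakpoint-up(md) {
          .sidebar { display: block; }
        }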

    Bootstrap 3 was the right mix of complexity to return on investment, but Bootstrap 4? I'm starting to think otherwise. So far, there's not enough that's compelling about Bootstrap 4. The conversion to REM units is nice, as is the opt-in flexbox. Dropping IE8 is a good move. Glyphicons needed to go for accessibility. The overall CSS is smaller; I like that. The hackability, though? Less so.


    Bandwidth throttling / simulation in macOS (OS X)

    Often as a developer, you want to simulate the experience of limited bandwidth for people with slower internet connections. Chrome and Firefox have throttling built into the browser, but it only affects the browser, doesn't provide robust parameters for latency, and doesn't affect the rest of the experience. Safari doesn't have this built in, in part because Apple provides the Network Link Conditioner utility as an additional tool.

    To install the Network Link Conditioner, you'll need the following:

    • Apple Developer account (no paid licensing is required)
    • Xcode installed

    Next, go to downloads for Apple Developers and sign in. The Network Link Conditioner utility is packaged in with other utilities. Search for Additional Tools or use one of the links below.

    Network Line Conditioner Pane

    Open up the DMG and install Network Link Conditioner.prefPane by double-clicking it. (Note: in Additional Tools, it'll likely be in the hardware folder)

    Using Network Link Conditioner

    Network Line Conditioner in system prefs

    Open System Preferences on your computer, click Network Link Conditioner, use the ON/OFF toggle to turn it on, and use the drop-down to pick a preset. You can create your own with Manage Profiles.

    Congrats, now you can enjoy slow internet.


    Integrating Node KSS with Gulp

    First off, I highly recommend reading CSS-Tricks' Build a Style Guide Straight from Sass; it's a game changer for automatic style guide generation. That said, I assume if you're on this page you're already a convert.

    I'm going to assume the following:

    • node-kss is installed in the same directory as your gulpfile
    • node-kss has been set up and is generating a style guide.
    • you have at least very rudimentary understanding of gulp

    If either of the first two is untrue, please go to the CSS-Tricks link, as it's a wonderful guide and will get you to a working spot. Node-KSS has a gulp repository, but it's woefully out of date; I recommend not using it. Fortunately, chaining it yourself is pretty easy. First, we need to install gulp-shell in our gulp project.

        npm install --save-dev gulp-shell
      

    Next, we need to require gulp-shell in our gulpfile. This can vary based on your setup: it may be var or const depending on whether you're using ES6, or it may be part of a larger declaration:

    ES6

        const shell = require('gulp-shell')
      

    ES5

        var shell = require('gulp-shell')
      

    Next, we create a task in our gulpfile that executes the command to run node-kss (note you can run variations of this command if your configuration is different; kss is not required to be installed in the same place as gulp):

    gulp.task('kss', shell.task(['./node_modules/.bin/kss --config kss-config.json']));

    Lastly, we need to reference this task in another task. Below is an example of how I'm using it: I created a watch task called "styleguide", a slightly modified version of my default task. Your task will differ from mine.

    gulp.task('styleguide', ['serve'], function() {
      // Watch .scss files
      gulp.watch(appDefaults.styleDirectory + '**/*.scss', function(event) {
        console.log('File ' + event.path + ' was ' + event.type + ', running tasks...');
        gulp.run('sass');
        gulp.run('kss');
      });
      // Watch JS files
      gulp.watch(appDefaults.myJavascriptDirectory, function(event) {
        console.log('File ' + event.path + ' was ' + event.type + ', running tasks...');
        gulp.run('scripts');
        gulp.run('compress');
      });
      gulp.watch(appDefaults.watchJavascript).on('change', browserSync.reload);
      gulp.watch(appDefaults.watchHTML).on('change', browserSync.reload);
    });
      

    Note that I run gulp.run('kss'); after my Sass task has run; this is what generates the style guide. Since the style guide generates new HTML on every save, my gulp.watch(appDefaults.watchHTML).on('change', browserSync.reload); is triggered because of my project's directory structure. This is why I created a separate task named "styleguide": I do not always need my kss task to run, and I don't want it to interfere with live CSS injection via browserSync. Your needs will vary.
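
    With the tasks in place, kicking off the style guide watch is just a matter of running the task from the project directory (a sketch assuming gulp is installed locally, mirroring the ./node_modules/.bin pattern used for kss above):

        ./node_modules/.bin/gulp styleguide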


    Gulp Boilerplate

    Every now and again, I remember I have a GitHub account and throw something simple up there. I made a Grunt Boilerplate years ago and finally got around to making one for Gulp. There are a few features I still need to stick in, but I like to have a starting point rather than re-inventing my tasks every project.

    Gulp-Sass-JS-BrowserSync-Boilerplate

    Features all the greatest hits:

    • Sass processing
    • CSS Browser auto-prefixing
    • CSS minification
    • JS Uglify (minification)
    • BrowserSync (Inject CSS changes + follow, reload on JS change)

    This is mostly for my own benefit, but if anyone finds it useful, I'm glad. You can nab it here Gulp-Sass-JS-BrowserSync-Boilerplate


    When Node-Sass fails Installing

    So you're here because bash is outputting some big mess that looks like the following when you tried to install gulp-sass or node-sass via npm. You've probably updated Node and npm, switched versions in nvm or Homebrew, and are beating your head against the wall while node-sass still isn't installing. The issue is likely not the Node or npm version but the package.json.

      > node-sass@0.8.6 install /Users/<path-to-project>/_gulp/node_modules/gulp-sass/node_modules/node-sass
    > node build.js
    
    (node:43004) [DEP0006] DeprecationWarning: child_process: options.customFds option is deprecated. Use options.stdio instead.
      CXX(target) Release/obj.target/binding/binding.o
    In file included from ../binding.cpp:1:
    ../../nan/nan.h:339:13: error: no member named 'New' in 'v8::String'
        return  _NAN_ERROR(v8::Exception::Error, errmsg);
                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ../../nan/nan.h:319:50: note: expanded from macro '_NAN_ERROR'
    # define _NAN_ERROR(fun, errmsg) fun(v8::String::New(errmsg))
                                         ~~~~~~~~~~~~^
    ../../nan/nan.h:343:5: error: no member named 'ThrowException' in namespace 'v8'
        _NAN_THROW_ERROR(v8::Exception::Error, errmsg);
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ../../nan/nan.h:324:11: note: expanded from macro '_NAN_THROW_ERROR'
          v8::ThrowException(_NAN_ERROR(fun, errmsg));                             \
          ~~~~^
    ../../nan/nan.h:343:5: error: no member named 'New' in 'v8::String'
        _NAN_THROW_ERROR(v8::Exception::Error, errmsg);
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ../../nan/nan.h:324:26: note: expanded from macro '_NAN_THROW_ERROR'
          v8::ThrowException(_NAN_ERROR(fun, errmsg));                             \
                             ^~~~~~~~~~~~~~~~~~~~~~~
    ../../nan/nan.h:319:50: note: expanded from macro '_NAN_ERROR'
    # define _NAN_ERROR(fun, errmsg) fun(v8::String::New(errmsg))
                                         ~~~~~~~~~~~~^
    ../../nan/nan.h:348:9: error: no type named 'ThrowException' in namespace 'v8'
        v8::ThrowException(error);
        ~~~~^
    ../../nan/nan.h:355:65: error: no member named 'New' in 'v8::String'
        v8::Local<v8::Value> err = v8::Exception::Error(v8::String::New(msg));
                                                        ~~~~~~~~~~~~^
    ../../nan/nan.h:356:50: error: expected '(' for function-style cast or type construction
        v8::Local<v8::Object> obj = err.As<v8::Object>();
                                           ~~~~~~~~~~^
    ../../nan/nan.h:356:52: error: expected expression
        v8::Local<v8::Object> obj = err.As<v8::Object>();
                                                       ^
    ../../nan/nan.h:357:65: error: too few arguments to function call, expected 2, have 1
        obj->Set(v8::String::New("code"), v8::Int32::New(errorNumber));
                                          ~~~~~~~~~~~~~~            ^
    /Users/<user>/.node-gyp/8.1.2/include/node/v8.h:2764:3: note: 'New' declared here
      static Local<Integer> New(Isolate* isolate, int32_t value);
      ^
    In file included from ../binding.cpp:1:
    ../../nan/nan.h:357:26: error: no member named 'New' in 'v8::String'
        obj->Set(v8::String::New("code"), v8::Int32::New(errorNumber));
                 ~~~~~~~~~~~~^
    ../../nan/nan.h:369:12: error: no member named 'New' in 'v8::String'
        return _NAN_ERROR(v8::Exception::TypeError, errmsg);
               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ../../nan/nan.h:319:50: note: expanded from macro '_NAN_ERROR'
    # define _NAN_ERROR(fun, errmsg) fun(v8::String::New(errmsg))
                                         ~~~~~~~~~~~~^
    ../../nan/nan.h:373:5: error: no member named 'ThrowException' in namespace 'v8'
        _NAN_THROW_ERROR(v8::Exception::TypeError, errmsg);
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ../../nan/nan.h:324:11: note: expanded from macro '_NAN_THROW_ERROR'
          v8::ThrowException(_NAN_ERROR(fun, errmsg));                             \
          ~~~~^
    ../../nan/nan.h:373:5: error: no member named 'New' in 'v8::String'
        _NAN_THROW_ERROR(v8::Exception::TypeError, errmsg);
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ../../nan/nan.h:324:26: note: expanded from macro '_NAN_THROW_ERROR'
          v8::ThrowException(_NAN_ERROR(fun, errmsg));                             \
                             ^~~~~~~~~~~~~~~~~~~~~~~
    ../../nan/nan.h:319:50: note: expanded from macro '_NAN_ERROR'
    # define _NAN_ERROR(fun, errmsg) fun(v8::String::New(errmsg))
                                         ~~~~~~~~~~~~^
    ../../nan/nan.h:377:12: error: no member named 'New' in 'v8::String'
        return _NAN_ERROR(v8::Exception::RangeError, errmsg);
               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ../../nan/nan.h:319:50: note: expanded from macro '_NAN_ERROR'
    # define _NAN_ERROR(fun, errmsg) fun(v8::String::New(errmsg))
                                         ~~~~~~~~~~~~^
    ../../nan/nan.h:381:5: error: no member named 'ThrowException' in namespace 'v8'
        _NAN_THROW_ERROR(v8::Exception::RangeError, errmsg);
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ../../nan/nan.h:324:11: note: expanded from macro '_NAN_THROW_ERROR'
          v8::ThrowException(_NAN_ERROR(fun, errmsg));                             \
          ~~~~^
    ../../nan/nan.h:381:5: error: no member named 'New' in 'v8::String'
        _NAN_THROW_ERROR(v8::Exception::RangeError, errmsg);
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ../../nan/nan.h:324:26: note: expanded from macro '_NAN_THROW_ERROR'
          v8::ThrowException(_NAN_ERROR(fun, errmsg));                             \
                             ^~~~~~~~~~~~~~~~~~~~~~~
    ../../nan/nan.h:319:50: note: expanded from macro '_NAN_ERROR'
    # define _NAN_ERROR(fun, errmsg) fun(v8::String::New(errmsg))
                                         ~~~~~~~~~~~~^
    ../../nan/nan.h:406:13: error: no member named 'smalloc' in namespace 'node'
        , node::smalloc::FreeCallback callback
          ~~~~~~^
    ../../nan/nan.h:141:71: note: expanded from macro 'NAN_INLINE'
    # define NAN_INLINE(declarator) inline __attribute__((always_inline)) declarator
                                                                          ^~~~~~~~~~
    ../../nan/nan.h:416:12: error: no matching function for call to 'New'
        return node::Buffer::New(data, size);
               ^~~~~~~~~~~~~~~~~
    /Users/<user>/.node-gyp/8.1.2/include/node/node_buffer.h:52:40: note: candidate function not viable: no known conversion from 'char *' to 'v8::Isolate *' for 1st argument
    NODE_EXTERN v8::MaybeLocal<v8::Object> New(v8::Isolate* isolate, size_t length);
                                           ^
    /Users/<user>/.node-gyp/8.1.2/include/node/node_buffer.h:55:40: note: candidate function not viable: no known conversion from 'char *' to 'v8::Isolate *' for 1st argument
    NODE_EXTERN v8::MaybeLocal<v8::Object> New(v8::Isolate* isolate,
                                           ^
    /Users/<user>/.node-gyp/8.1.2/include/node/node_buffer.h:67:40: note: candidate function not viable: requires 3 arguments, but 2 were provided
    NODE_EXTERN v8::MaybeLocal<v8::Object> New(v8::Isolate* isolate,
                                           ^
    /Users/<user>/.node-gyp/8.1.2/include/node/node_buffer.h:60:40: note: candidate function not viable: requires 5 arguments, but 2 were provided
    NODE_EXTERN v8::MaybeLocal<v8::Object> New(v8::Isolate* isolate,
                                           ^
    In file included from ../binding.cpp:1:
    ../../nan/nan.h:420:12: error: no matching function for call to 'New'
        return node::Buffer::New(size);
               ^~~~~~~~~~~~~~~~~
    /Users/<user>/.node-gyp/8.1.2/include/node/node_buffer.h:52:40: note: candidate function not viable: requires 2 arguments, but 1 was provided
    NODE_EXTERN v8::MaybeLocal<v8::Object> New(v8::Isolate* isolate, size_t length);
                                           ^
    /Users/<user>/.node-gyp/8.1.2/include/node/node_buffer.h:55:40: note: candidate function not viable: requires at least 2 arguments, but 1 was provided
    NODE_EXTERN v8::MaybeLocal<v8::Object> New(v8::Isolate* isolate,
                                           ^
    /Users/<user>/.node-gyp/8.1.2/include/node/node_buffer.h:67:40: note: candidate function not viable: requires 3 arguments, but 1 was provided
    NODE_EXTERN v8::MaybeLocal<v8::Object> New(v8::Isolate* isolate,
                                           ^
    /Users/<user>/.node-gyp/8.1.2/include/node/node_buffer.h:60:40: note: candidate function not viable: requires 5 arguments, but 1 was provided
    NODE_EXTERN v8::MaybeLocal<v8::Object> New(v8::Isolate* isolate,
                                           ^
    In file included from ../binding.cpp:1:
    ../../nan/nan.h:427:26: error: no member named 'Use' in namespace 'node::Buffer'
        return node::Buffer::Use(data, size);
               ~~~~~~~~~~~~~~^
    fatal error: too many errors emitted, stopping now [-ferror-limit=]
    20 errors generated.
    make: *** [Release/obj.target/binding/binding.o] Error 1
    gyp ERR! build error 
    gyp ERR! stack Error: `make` failed with exit code: 2
    gyp ERR! stack     at ChildProcess.onExit (/Users/<user>/.nvm/versions/node/v8.1.2/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:258:23)
    gyp ERR! stack     at emitTwo (events.js:125:13)
    gyp ERR! stack     at ChildProcess.emit (events.js:213:7)
    gyp ERR! stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:197:12)
    gyp ERR! System Darwin 16.7.0
    gyp ERR! command "/Users/<user>/.nvm/versions/node/v8.1.2/bin/node" "/Users/<user>/.nvm/versions/node/v8.1.2/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
    gyp ERR! cwd /Users/<path-to-project>/assets/_gulp/node_modules/gulp-sass/node_modules/node-sass
    gyp ERR! node -v v8.1.2
    gyp ERR! node-gyp -v v3.6.2
    gyp ERR! not ok 
      

    Go to package.json and look at the versions. Most likely the version is locked to a very old release of node-sass or gulp-sass in your project (or the project you're using); switch its version to something recent (as of writing, "gulp-sass": "^3.0.0" or "node-sass": "^4.7.2"). Congrats, it'll now install!
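
    For clarity, the relevant corner of package.json ends up looking something like this (a minimal sketch; the gulp entry is a placeholder for whatever your project already uses):

        "devDependencies": {
          "gulp": "^3.9.1",
          "gulp-sass": "^3.0.0",
          "node-sass": "^4.7.2"
        }

    After changing the versions, it's usually worth deleting node_modules and running npm install again so no stale nested copies linger.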


    Safari's Autofill needs to be redesigned

    All major browsers have built-in login managers that save and automatically fill in username and password data to make the login experience more seamless. The set of heuristics used to determine which login forms will be autofilled varies by browser, but the basic requirement is that a username and password field be available.

    Login form autofilling in general doesn’t require user interaction; all of the major browsers will autofill the username (often an email address) immediately, regardless of the visibility of the form. Chrome doesn’t autofill the password field until the user clicks or touches anywhere on the page. Other browsers we tested [2] don’t require user interaction to autofill password fields.

    Thus, third-party javascript can retrieve the saved credentials by creating a form with the username and password fields, which will then be autofilled by the login manager.

    Source: freedom-to-tinker.com

    Ironically, before the holidays I had to deal with this from the opposite end, as Safari's auto-form filling was filling out hidden fields.

    Consider the following

    • Safari's autofill can fill out more than just username/password.
    • Safari's autofill does not give you the ability to view the stored information in its local database other than site entries.
    • Safari's autofill will fill out fields hidden with visibility: hidden and display: none.
    • Safari's autofill does not trigger a DOM event when it fills visibility: hidden and display: none fields. Safari does allow you to query for input:-webkit-autofill, but testing for this means super hacky setTimeout and setInterval hacks (see the sketch after this list).
    • Safari does (mostly) respect the HTML5 autocomplete convention but will ignore autocomplete="off" on username or password fields.
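
    For what it's worth, the hacky detection mentioned above looks roughly like this (a sketch; the polling interval and what you do with a filled field are entirely up to you, and the selector only works in WebKit browsers):

        // Safari fires no event when it autofills, so poll for the
        // :-webkit-autofill pseudo-class instead (WebKit-only selector).
        var autofillPoll = setInterval(function () {
          var filled = document.querySelectorAll('input:-webkit-autofill');
          if (filled.length > 0) {
            clearInterval(autofillPoll);
            for (var i = 0; i < filled.length; i++) {
              console.log('Autofilled:', filled[i].name);
            }
          }
        }, 250);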

    This leads to a bizarre world where Safari is egregiously handing out info that can't be vetted.

    Safari Autofill Manager

    Pictured: Safari's autofill manager for non-username/password data (Other) doesn't allow you to see what information it's autofilling or edit the values. I found some surprising entries in my Safari autofill manager.

    I ran into a problem where a donation form was failing our API validation because Safari's autofill was completing hidden form elements without triggering change events, creating scenarios we hadn't previously considered. It took error logging to figure out Safari was the culprit, and a heavy dose of intuition to figure out that it was autofill.

    The solution was to add autocomplete and disabled attributes, but it led me to wonder about the potential abuses of autofill. Apparently, I wasn't the only one.


    ImageOptim vs Squash 2 - Comparing PNG optimization - A Squash 2 review

    For years I've leaned on ImageOptim as my go-to for image optimization. I tend to be a little obsessive, using modern formats (WebP, JPEG 2000) and testing out avant-garde projects like Guetzli by Google. I recently decided to finally try out Squash by Realmac Software.

    Over the years, codecs have improved remarkably, especially in the realm of video. For example: H.261 (1984-1988) -> MPEG-1 (1988-1991) -> MPEG-2 aka H.262 (1996-2015) -> MPEG-4 aka H.264 (1999-current) -> High Efficiency Video Coding (HEVC) aka H.265 (2015-current). Each iteration has the ultimate goal of improving video quality at lower bit rates. That doesn't even cover the other formats - VP8, VP9, Ogg Theora, DivX, 3ivx, Sorenson, RealMedia, and the many others of the past 30 years that have had varying degrees of mainstream success. Audio has had a similar trajectory, from IMA 4:1, MPEG, MP2, MP3, AAC, Ogg, AC3, and DTS, to name a few.

    However, static images haven't had the same wide range of codecs (most formats are lossless proprietary files used by various image editors) and have been almost entirely relegated to five formats for distribution: SVG, BMP, PNG, JPEG, and GIF. You may occasionally see PSDs or EPS files, or photography formats like DNG or the standards-free RAW, but those fall into the same category as video codecs like ProRes, DNxHD, and Cineform: intermediate formats that require specialized software to view/edit and get converted when distributed beyond professional circles (EPS aside).

    We're starting to see future image formats - Google with WebP, Apple with JPEG 2000 and HEIC, and Safari allowing inline MP4s to be treated as images - but for the past 10 years, much of the action in image compression has been trying to squeeze every last byte out of the existing formats, almost entirely JPEG and PNG (and SVG, but that's a different story). A lot of the slow movement of web formats has to do with the W3C. It took Cisco licensing and distributing an H.264 implementation for free to make MP4 the accepted video format for Microsoft, Apple, Google, and Mozilla. It may take a similar act of corporate benevolence to bring a successor to JPEG.

    Interestingly, though, there's been a concerted effort to squeeze every bit of optimization out of the existing formats: JPEG has MozJPEG, Guetzli, JPEGOptim and jpegtran; PNG has Zopfli, PNGOUT, OptiPNG, AdvPNG and pngcrush. These all differ, as some are encoders and some are strictly optimizers, but the end game is to extract the most out of the formats, which often involves trickery to exploit the compression. Both ImageOptim and Squash are GUI front ends that use these tools to create the best JPEG or PNG per kilobyte possible. These libraries do not come without a penalty, that being CPU cycles. They can take minutes to execute on larger images, the longest being Guetzli: an 8 MP image can take around 40 minutes to encode, even on a 5th-generation Core i7. We're probably quickly approaching the point of diminishing returns. If you're using Guetzli, I'd argue it's easier to provide alternative image formats (WebP / JPEG 2000) than to burn hours encoding a handful of images, as you'll get better results for the people who can see them (Safari and Chrome users). The rest, however, are still viable.
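
    For a sense of what these GUI front ends are wrapping, here's roughly what the equivalent command-line runs look like (the flags shown are common ones and are illustrative, not necessarily what either app passes):

      # Zopfli-based recompression, typically the biggest lossless PNG win
      zopflipng -m input.png output.png

      # Brute-force pngcrush pass, stripping ancillary chunks except transparency
      pngcrush -brute -rem alla input.png crushed.png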


    PNG Compression tests

    Settings used: ImageOptim (default)

    • Zopfli
    • PNGOUT
    • OptiPNG
    • AdvPNG
    • Strip PNG meta data
    • Lossless
    • Optimization Level: Insane

    Squash 2

    • More Compressed (slower)

    Test 1: Complex Webpage screenshot

    Kaleidoscope Show differences results: No differences

    Winner: Squash
    Squash Savings over ImageOptim: 21,939 bytes (21.9K), 1.3%

    Test 2: Simple Webpage screenshot

    Winner: Tie

    Kaleidoscope Show differences results: No differences

    Neither of these results is terribly surprising: Squash uses libpng and Zopfli, which are open-source PNG optimizers, though I'm a little surprised Squash shaved off a few more K. To make sure this wasn't a fluke, I tested another screenshot, 2.9 MB (2,880,886 bytes); again Squash 2 won, 1,116,796 bytes (1.1 MB) to 1,140,793 bytes (1.1 MB), for a savings of 23,997 bytes (24K). On very large PNGs, Squash 2 has the advantage. I also checked pngcrush, which brought it down to 1,126,420 bytes.

    Test 3: Large Photograph

    Kaleidoscope Show differences results: No Differences

    Winner: Squash

    This last test weighs most heavily in Squash's favor: 330,665 bytes is significant, even if it's only a 6% difference.

    The Results...

    While hardly the epitome of comprehensive testing, this shows Squash provides slightly better PNG compression. That said, ImageOptim is quite good for the sticker price of free; Squash 2 is part of the Setapp collection or $15 standalone. Squash isn't as accomplished at JPEG optimization as ImageOptim, but it seems to be the best PNG GUI utility for OS X. That's surprising, as ImageOptim offers more optimization options and uses the same optimization libraries. You can't really go wrong with either utility.

    Mini Review of Squash

    Squash is essentially a drag-and-drop no-brainer utility: drag images in and Squash does the rest. If you've used ImageOptim, then you're familiar with it. The big differences between ImageOptim and Squash are mostly cosmetic, as both do the same operation. Squash appears to be no faster than ImageOptim, nor does it have as many options. The UI does provide a goofy animation and an annoying sound (I killed the sound effects immediately).

    Where Squash won at PNGs, it lost out on lossless JPEG compression. Tests routinely showed that ImageOptim shaved an average of about 5% more off JPEGs, although individual tests differed wildly.

    Squash 2 is a minimalist utility through and through: drag images in and it outputs compressed ones. Quite possibly the best thing Squash offers over ImageOptim is also one of the simplest: it can write new versions of each file with a suffix appended, whereas ImageOptim overwrites images in place, which can be undesirable.


    Detecting Content Blockers is a losing battle, but you can be smart and ethical when doing so...

    There's been a bit of a cat-and-mouse game between ad blockers/content blockers and advertisers/analytics/trackers. The short answer is you aren't going to defeat them single-handedly. Many of the libraries designed to detect them will fail, as they're inevitably blocked once a content blocker is updated to detect them. As someone who once ran a website that hit 150,000 unique visitors a month, funded by advertising, I'm sympathetic to the publisher's plight. As a content writer, I value analytics; I use Google Analytics on this site as it helps me understand what content resonates, what channels people use to find my content and how they consume it. As a developer with a touch of UX, logging and error tracking are extremely helpful. A service like Loggly can help me find errors, design for edge cases that aren't on the "happy path" and make data-driven decisions about a product. However, the advertising industry has perniciously proven it is not to be trusted. There's a reason why, as a user, I surf with Ghostery/1blocker, block cross-origin cookies (on my desktop, I kill all cookies), use a VPN, and disabled Flash long before most people did to dodge the dreaded forever Flash cookie. Privacy matters.

    This is my attempt to create an ethical framework around content blocking from the perspective of a developer/content creator/publisher.

    A quick list of observations

    I've assembled a list of facts/observations about content blockers.

    • Adblock/Adblock Plus focus on advertising but not analytics. This could change in the future.
    • 1blocker and Ghostery are particularly good content blockers. Both will block <script> tags from loading, and you can't rely on an onerror handler at the src level to detect it.
    • Content blockers are not fooled by appending <script> tags via JavaScript to the DOM.
    • Blocked <script> tags are not removed from the DOM by 1blocker or Ghostery, so any check for the tag's existence will still return true.
    • 1blocker and Ghostery can detect popular anti-blocker scripts and block those as well.
    • Browsers are pushing privacy settings more aggressively, with Firefox leading the charge and Safari not far behind.
    • If your website fails to work when one of the popular content blockers is enabled, you are cutting out 20% of your audience.

    But I'm a special snowflake!
    Using powers for good

    So as a developer/UX designer you're suddenly faced with a problem. Your website or web app has features that break when content blockers are enabled. You've already made sure that your core functionality isn't tied to anything that will be blocked by content blockers.

    Likely your client or manager will ask "can't you just go around the content blocker?".

    The short answer is "No". You will not forcibly defeat content blockers, and if you try, you're signing up for an unwinnable, all-consuming cat-and-mouse game. However, you can detect content blockers rather than defeat them. With a service like Loggly, for example, you can easily check whether the _LTracker variable has loaded.

      if (typeof _LTracker === 'undefined' || _LTracker === null) {
        //execute code
      }
      

    Suddenly we're at the ethical precipice as we can do a number of things with this information. I've assembled a list of the ethical paths.

    Ethics of content blocking code

    Most Ethical:

    The website/web app's core features work without any warnings until the user reaches an ancillary feature that may be broken. The user is able to complete core functions (consume content, use navigation, submit forms).

    Example: Videos still work. User is able to place orders but 3rd party chat tech support may be broken. User is informed.

      if (typeof _LTracker === 'undefined' || _LTracker === null) {
        //If and only if function on page requires service
        //inform user.
      }
      

    Fairly Ethical:

    User receives warnings on every page, encouraging them to whitelist the site regardless of whether functionality is affected.

    Example: User is pestered with a whitelist-the-site message but is still able to perform operations. Videos still work. User is able to place orders. 3rd party live chat tech support may be broken. User is informed.

      if (typeof _LTracker === 'undefined' || _LTracker === null) {
        //display global message.
        //Inform user that analytics are helpful for improving the service
      }
      

    Least Ethical:

    User is blocked from consuming content until the site is whitelisted, regardless of whether functionality is affected.

      if (typeof _LTracker === 'undefined' || _LTracker === null) {
        //display global message.
        //obfuscate content/block content/disable features when error is present.
      }
      

    No Ethical Stance: Site does not attempt to detect any blocked content. Site either functions or does not. This is the majority of websites.

    This model isn't free of problems; it's almost entirely from the lens of a non-advertisement-supported website, like a campaign site / company site / ecommerce / SaaS. While these sites may contain advertising and tracking, all of the aforementioned generate revenue either from sales (SaaS/ecommerce) or lead generation (campaign/company). Websites that are dependent on ad revenue adhere to a different set of ethics and variables.

    Other methods for checking whether a script has loaded

    Checking for a variable's existence is the most fail-safe method to see if a script has loaded. While onerror will not work on an individual <script> tag, you can write scripts into the head with the following code. This comes at a mild expense of code execution and may not work in all scenarios.
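
    A minimal sketch of that approach (the URL and handler bodies here are illustrative assumptions):

      // Inject the tag from JavaScript so onload/onerror handlers can be attached to it.
      var tracker = document.createElement('script');
      tracker.src = 'https://example.com/vendor/tracker.js'; // hypothetical third-party script
      tracker.async = true;
      tracker.onload = function () {
        // Script loaded; its globals are safe to use.
      };
      tracker.onerror = function () {
        // Request failed or was blocked by a content blocker.
      };
      document.head.appendChild(tracker);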


    Google PageSpeed Insights lacks common sense and is becoming irrelevant

    This has irked me for a while now, and I haven't unloaded a good development rant in some time. Yesterday I wrote about image bloat and decided to add a few negligible optimizations I'd been meaning to do for a year or two, which resulted in an 8-10k reduction per page. After I enabled HTML and CSS minification on my blog, I skated over to PageSpeed, plugged my URL in and frowned. My newly optimized blog post scored a whopping 70/100. My page is 84.5k (or 68.5k without Google Analytics).

    Google Pagespeed bein' hyper-judgemental about a 90k page

    For reference, wired.com scores 73 out of 100 on mobile with the total page weighing 5.1 megabytes, and Newsweek.com scores a god damned 84 out of 100 while loading 7.1 MB! This is utterly and completely stupid bullshit.

    Herein lies the rub: Google PageSpeed has always had a "reach for the stars" mentality, but it is woefully out of touch when judging a page's real-world performance. A 300k page, even a poorly optimized one, is going to beat a 3 MB page (the average page size of major websites) in load times. In the era of smartphone data plans, a customer could load 30 poorly optimized pages for one bloated, highly optimized 3 MB beast of a page. It's telling that Google stopped developing its PageSpeed tool for Chrome and has since relegated it to an annoying web-only interface. It's become a tool that would-be SEO gurus/experts/snake-oil salespeople, when hired by clients, use to hold over developers and provide "recommendations" on CMS websites that do not provide easy vectors for the more avant-garde optimizations like HTML minification (which incidentally tends to save less data than CSS/JS optimization, or than using HTTP compression).

    PageSpeed says nothing about image formats beyond image scaling (and seems mostly tone-deaf to responsive images) within reasonable margins of error. You can plug in a 500k PNG that could be served as a 40k JPEG and the PageSpeed score won't even budge. It won't even blink if you're making an effort to support avant-garde image formats like WebP and JPEG 2000 to provide more bang per kilobyte.

    PageSpeed is also frighteningly JavaScript-unaware. "Oh, you have a Bitcoin-mining JavaScript file? Is it minified? Is it uglified? Is it GZ compressed? Yes? THUMBS UP, BUDDY! Also, good job on the 'Your Flash is out of date' malware JavaScript pop-up." If you're tricky and write an obfuscated script that appends, say, the 460k uncompressed D3 library, Google PageSpeed won't even bother to check.

    Other poor detection revolves around iframes to popular services like YouTube / Vimeo / SoundCloud / CodePen, where it suggests optimizations based on the iframe's content, anathema to the entire principle of CORS.

    There's also zero comment on the total number of requests on the page, other than suggesting you concatenate files and create image sprites. It'll ding you hard for having multiple CSS imports for Google Fonts, but doesn't give a royal damn if you're making several hundred HTTP requests. (Note: most browsers are limited to 6 requests at a time per domain, and usually cap out at around 17 simultaneous requests. Each request must be fulfilled or 403/404ed before another can open. This says nothing about the limitations of the server, either, for max clients; more requests = more server stress.)

    Want to measure rendering performance? Forget it. There's no discernible metric for time to paint or continuous painting. Feel free to go nuts with CSS filters and bring a lesser device to its knees; PageSpeed doesn't care as long as your CSS is minified.

    Lastly, it can be wildly inaccurate. My pages are minified HTML, and yet PageSpeed's wonderful insight is that I should minify my HTML. Wat. View source on any page on this blog if you don't believe me...

    There's probably a reason why I didn't notice that PageSpeed Insights had been removed from Chrome: it's mostly useless to a savvy front-end dev beyond a sanity check. You could argue Google PageSpeed isn't a metric of your site vs. other websites but rather you vs. yourself. Even that rationale falls apart, as it doesn't weigh its recommendations against real-world factors, nor does it pass any judgement on data use. Google clearly cares about data use, as its questionable Accelerated Mobile Pages (AMP) project exists. PageSpeed Insights was a tool of genius, but now it feels like it's past its prime and/or in need of some TLC. Really, what I'm asking for is perspective, and Google PageSpeed Insights doesn't have it.


    This article does not contain any images

    At some point in the past several years, the millions of different possibilities of turning individual pixels into a website coalesced around a singularly recognizable and repeatable form: logo and menu, massive image, and page text distractingly split across columns or separated by even more images, subscription forms, or prompts to read more articles. The web has rapidly become a wholly unpleasant place to read. It isn’t the fault of any singular website, but a sort of collective failing to prioritize readers.

    I don’t know about you, but I’ve become numb to the web’s noise. I know that I need to wait for every article I read to load fully before I click anywhere, lest anything move around as ads are pulled in through very slow scripts from ten different networks. I know that I need to wait a few seconds to cancel the autoplaying video at the top of the page, and a few more seconds to close the request for me to enter my email and receive spam. And I know that I’ll need to scroll down past that gigantic header image to read anything, especially on my phone, where that image probably cost me more to download than anything else on the page.

    Nick Heer, Pixel Envy (pxlnv.com)

    This blog post is a bit of a meta-reaction, seeing as it's a response to Not Every Article Needs A Picture, but it's pretty rare to see any blog or news source post an article without an image, and the blame lies squarely on the cult of the "hero" image. The hero image was a late Web 2.0 design trend, a celebration of bandwidth and the exploding opportunity in web design, and now it feels trite and stale, only exacerbated by Medium.com, the Kinjas and every news site imaginable.

    Even the print guys fail this test; newspapers like the NY Times do not even follow their own print standards and wedge photos into all their articles. As Wired famously wrote, "The Average Webpage Is Now the Size of the Original Doom" (ironically, on a page that surpasses the 2.3 MB mark, weighing in at 3 MB*), do we really need to tax users more? I feel bad cheating my favorite publishers out of ad revenue, but even whitelisting sites has me running back to Ghostery as I watch my mid-2015 MacBook slow down and go into leaf-blower mode just to surf the web. On my phone, I have 1blocker but find myself mostly using RSS to this day, as it's fast and cuts through the unnecessary pictures. Admittedly, my blog index pages fail the Doom test, but they're also loading 20 articles at a time (this article viewed by itself is 103k), so perhaps I may yet sneak in another feature.

    *With Ghostery Enabled, Wired.com's article is a much more palatable 937K.
    *With Ghostery Enabled, this article is 97k instead of 102k.

    Installing Composer, Drush 8 and Drupal Console globally via composer on macOS (OS X)

    Install Composer

    Before we install Drush, we need to install Composer globally. Composer is a PHP package manager akin to NPM or Bower.

    curl -sS https://getcomposer.org/installer | php
    mv composer.phar /usr/local/bin/composer

    Next we want to edit our .bash_profile. Go to your home folder.

    cd ~/

    Create a new .bash_profile (don't worry: if you already have one, touch won't overwrite it). We need to add a global entry for Composer's bin directory.

    touch .bash_profile
    nano .bash_profile

    Add the following to your .bash_profile

    export PATH="$HOME/.composer/vendor/bin:$PATH"
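
    After saving, reload your profile so the new PATH takes effect in the current terminal (or just open a new window):

    source ~/.bash_profile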

    Install Drush

    Now that we have composer installed globally, we can install Drush via composer.

    composer global require drush/drush:dev-master

    Alternatively, we can require a specific version instead of dev-master. For Drupal 8, we want Drush 8.

    composer global require "drush/drush:8.*"
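
    To confirm Drush is on your PATH and check which version was installed:

    drush --version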
