admin's blog

Slow downloads for Apple App Store updates? Try this.

Sun, 2016-07-03 18:47 -- admin

I discovered a nifty trick for those excruciatingly slow Apple update downloads. My friend left his broken 2008 Mac Pro with me last week. It turned out he had just neglected to plug in the display's power supply. The computer had been down for a year(!) because of that.

Since the machine hadn't run since last summer, he was still on Yosemite, so I decided to do his free upgrade to El Capitan because that was obviously going to be over his head too.

I started the update and System Update reported that the download would take 3 days, 2 hours. WTF?? I did the same update on my Mac many months ago and remember it taking like 15 minutes. I have a 100Mb connection here.

I kept pausing/restarting the download and the download time didn't budge. Obviously Apple's content delivery network was bricked.

Then it occurred to me that CDNs are usually regional. A user in Australia will be sent to a different CDN node than a user in Italy. Maybe it was just the US node that was farkled.

So I surfed for public DNS servers and found this list: http://www.bestdns.org/.

I went to System Preferences -> Network -> Advanced and deleted all the DNS servers so there would be no chance of a timeout failover to the next DNS server on the list. Then I added one for the UK that I got from the list above: 217.174.248.125. I wanted an English-speaking country just in case Apple defaults the update to the local language too. My Cantonese ain't so good.

My three day download took 11 minutes.

Remember to revert your DNS servers back to what you had before you actually run the update. Or just use Google's: 8.8.8.8 and 8.8.4.4.
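
If you'd rather do it from Terminal, networksetup can make (and undo) the same change. "Wi-Fi" below stands in for whatever your active network service is actually called:

networksetup -listallnetworkservices               # find the service name
networksetup -setdnsservers Wi-Fi 217.174.248.125  # use the UK resolver for the download
networksetup -setdnsservers Wi-Fi 8.8.8.8 8.8.4.4  # put it back afterwards ("Empty" clears the list)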

Setting up a hybrid Google Apps mail account

Tue, 2015-12-29 13:43 -- admin

I've run my own mail server since, well, the UUCP days. I used to host a lot of mailing lists, so over the past 20+ years I've run Sendmail, Exim, Qmail and Postfix. They're all different but they have one thing in common: unless running mail servers is your hobby, they're not fire-and-forget applications, especially in today's high-spam, high-malware, post-Snowden environment.  Maintaining a mail server is a chore. You walk a fine line between being buried in UCE and blackholing your Uncle Rich.

I stopped running mailing lists several years ago and since then I've wanted to outsource my mail to a reliable third-party host.  When I first experienced Google Apps I knew that's where I wanted to be. I don't understand what Google is doing but it's the best mail handler I've used.  I rarely get spam in my Gmail and I've never had a false positive that I'm aware of.  On Gmail, you don't have to muck with Bayesian filter settings or configure RBLs.  It just works.

The problem is that I create a different email address for every web site I use so I have well over 700 aliases and Google Apps only supports 30 per user with no options to increase that number.  One alternative is to use Google Groups for aliases but that presents its own set of problems. Then a friend of mine, Jesse, told me about yet another alternative.

What Jesse does is keep the MX for his domain and run his own mail server. But all his local server does is act as an alias forwarding agent.  When mail arrives for jesse@jessedomain.com, it consults its alias database and forwards the mail on to his Google Apps account and one of the restricted number of Google aliases.  Google doesn't hold the MX for his domain, but his Google Apps account is set up to send mail as XX@jessedomain.com.
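
I don't actually know which MTA Jesse runs, but in Postfix terms the forwarding half is just a virtual alias map. The addresses below are made up for illustration; the right-hand side is whichever of the handful of allowed Google Apps aliases you want each local address to land in:

# /etc/postfix/virtual
amazon@jessedomain.com       shopping@jesse-gapps-account.com
newsletter@jessedomain.com   lists@jesse-gapps-account.com
jesse@jessedomain.com        jesse@jesse-gapps-account.com

# compile the map and point main.cf at it:
#   postmap /etc/postfix/virtual
#   virtual_alias_maps = hash:/etc/postfix/virtual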

This is exactly what I wanted for myself, and in fact I tried a couple of times to get it to work.  It failed because I had handed my primary MX to Google Apps. The first part of the trick is not to do that.  Keep your MX, or use Google only as a fallback MX.
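
In zone-file terms that looks something like this (my own sketch -- the hostnames are illustrative, and Google's real inbound MX hosts are listed in the Google Apps setup docs):

; my own box stays the primary MX; Google is only a lower-priority fallback
somedomain.com.    IN  MX  10  mail.somedomain.com.
somedomain.com.    IN  MX  20  aspmx.l.google.com.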

The question is: if mail arrives at my server as steve@somedomain.com, how can I forward it to a different steve@somedomain.com on Google Apps?  That's the second trick.

SMB+SSH: Ubuntu server and OSX client

Fri, 2014-06-13 01:04 -- admin

The title above is pretty close to the Google search query I used in vain to find a recipe for tunneling an OSX Samba client to an Ubuntu 14.04 server. Hopefully this post will save someone the hours I spent trying to set this up.

In the end, like so many Unix projects, the answer turned out to be simple. All that's needed is a configured and functioning Unix/Linux Samba and SSH server.  Everything else is on the client side.
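
To give a feel for the shape of it, here's roughly what the client side looks like -- my own sketch, with placeholder hostnames and share names. The Mac's SMB client wants to talk to port 445, which is a privileged port, so the tunnel gets bound to a second loopback address:

# give the Mac a second loopback address so the tunnel can own port 445
sudo ifconfig lo0 alias 127.0.0.2

# forward that address's SMB port to the Samba server over SSH
sudo ssh -f -N -L 127.0.0.2:445:127.0.0.1:445 you@server.example.com

# then mount it: Finder's Go -> Connect to Server with smb://127.0.0.2/projects,
# or from Terminal:
mkdir -p ~/mnt/projects
mount_smbfs //you@127.0.0.2/projects ~/mnt/projects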

I'm not unfamiliar with Samba. I ran it for years between a FreeBSD Unix server and a Windows XP workstation. It had its quirks, and still does. When I dumped Windows for a shiny, new Mac Pro in 2009, I switched to NFS. But with each successive OSX upgrade, NFS got flakier to the point where it became useless, so I returned to Samba.  But Samba is inherently insecure outside of a trusted LAN, so for out-of-office occasions I started using SSHFS. Unfortunately, SSHFS relies on deprecated, third-party software on the OSX side and it was s..l..o..w.  My PhpStorm IDE was grinding through directory refreshes after Git checkouts.

With the release of OSX Mavericks 10.9, Apple announced that it was dumping yet another networking protocol -- its own greybeard AFP. To replace it, they embraced SMB2. Or... ta da... Samba. Technically, SMB2 isn't officially Samba; however, OSX has unofficially supported Samba clients for several operating system releases.  Samba(tm) (the Unix server) is actually a product of the open source team at Samba.org.  SMB is an acronym for Server Message Block, which is a proprietary Microsoft protocol. Samba is built to the published white paper spec for SMB.

Optimizing a Result Set Pager

Sat, 2014-01-04 11:15 -- admin

It's ubiquitous on data-driven web sites: the result set pager.  We've all used them, whether we built them from scratch or used one provided by the framework.

[Image: pager navigation controls with numbered page links]
Pagers are by nature performance suckers because we're asking the database to re-run the same query for each "page", slicing off just one set of contiguous rows for each page. If your result set is 10,000 rows long but you're only paging through them 10 rows at a time, that's potentially 1,000 database requests to view the entire set.

But it's worse than that because in order to provide those nifty pager controls, like those in the image above, the software has to know how many rows are in the larger result set so it can do the math to populate the navigation for those page numbers.  In other words, using the above example the software needs to know that there is a Page 14 to jump to.

A little background first.  Internally, garden variety pagers are pretty much the same.  They request a fixed number of rows to display per page, like 10.  That becomes your LIMIT filter in the database query:

SELECT * FROM people LIMIT 10;

To create the page navigation you need to do some math to generate the OFFSET.  For instance, using a page size of 10, the query for a Page 3 display would look like this:

SELECT * FROM people LIMIT 10 OFFSET 20;
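
That second piece -- the total row count -- usually means yet another query, or, on databases with window functions, it can piggyback on the page query itself. A sketch of both (the ORDER BY column is just an assumption, and this isn't necessarily the optimization the rest of the post arrives at):

-- the extra round trip most pagers make to size the navigation
SELECT COUNT(*) FROM people;

-- or fold the count into the page query with a window function
SELECT *, COUNT(*) OVER () AS total_rows
FROM people
ORDER BY last_name
LIMIT 10 OFFSET 20;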

Remote SSH Filesystems on OSX

Tue, 2013-10-29 14:33 -- admin

Developers, particularly web developers, have a need to work on external computers, often not within their local networks.  Over the years I've employed everything from FTP to SFTP/SCP to Samba to NFS to VPNs to cranky Novell networks.  All have their downsides, particularly with regard to security.

I have a Mac Pro and originally ran NFS to connect to machines on my LAN.  But as Apple released new versions of OSX it became more hostile to NFS, to the point where it became unusable with my Ubuntu-hosted web server.  I retreated back to Samba but it was always a PITA because every time I rebooted the Mac I had to manually remount those network shares.   Half the time they wouldn't appear in Finder so I'd have to do it again.

When I got my new MacBook I decided to spend some extracurricular time sorting out this problem.  My research led me to OSXFUSE.   OSXFUSE is a library that allows foreign filesystems to integrate with OSX's own.  One of those is SSHFS, a GitHub project that allows remote filesystems to be mounted over an SSH connection.  This sounded exactly like what I wanted.  There was virtually no setup required on the host other than a functioning SSH account.   While I doubted that it would be a particularly fast filesystem, I'm not streaming media with it, mostly just pushing files through my programming editor, which unfortunately lacks SFTP support of its own.
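
For the impatient, the end result is simple: once OSXFUSE and the sshfs binary are installed, mounting a remote tree is a one-liner (the hostnames and paths here are placeholders):

mkdir -p ~/mnt/webserver
sshfs steve@web.example.com:/var/www ~/mnt/webserver -o volname=webserver

# detach it later with
umount ~/mnt/webserver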

Holiday fun: Designing a stained glass Christmas tree with GlassEye 2000

Sat, 2013-01-12 17:07 -- admin

One of my hobbies is constructing stained glass, which is something I got into out of necessity while restoring an old house in Brooklyn.  The cost for replacing or, worse, restoring old stained glass panels was frightening enough that I took some classes to learn how to do it myself.  Fortunately, I learned that working with stained glass is somewhat the same as woodwork joinery so the transition wasn't too difficult once I learned the tools and their tricky techniques.

However, Rembrandt I ain't. I can visualize things pretty well but there's a bridge out somewhere between my left and right brain. With woodworking, I usually wind up head jamming the fabrication. It works 90% of the time. The other 10% is handled by my hard-won skills in making dumb mistakes look like I meant to do that. But this ad hoc process doesn't work for stained glass construction, where you need to have a completed design and pieces cut before you start soldering things together.

Database Meets Revision Control

Thu, 2011-12-01 14:27 -- admin

Any developer who has worked with HIPAA compliance knows that the law is murky at best and the feds don't publish a programmers' guide to make your life any easier.  However, one of the cardinal rules is the requirement to keep track of who sees HIPAA data, who modifies it and when this was done.  Another is that if you delete or update patient data you need to log what was deleted or updated in order to provide an audit trail, if only for the lawyers.  Failure to do so can subject a company to some pretty draconian penalties.

This creates a challenge on the database side because SQL UPDATE obliterates a record's history.  There are a few potential solutions, such as maintaining a changelog to which such updates are written via table triggers.   I've done this, but the log of atomic changes can grow immense.  It's also difficult to reconstruct a large record from potentially dozens or even hundreds of changes to records which must be retained for up to six years. That's how a traditional RCS would handle rollbacks, but it's not practical inside the confines of a database.
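
To make that concrete, the trigger flavor looks roughly like this (PostgreSQL syntax; the table and columns are invented for illustration):

-- an audit table that receives one row per changed column
CREATE TABLE patient_audit (
    changed_at   TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    changed_by   TEXT      NOT NULL,
    patient_id   INT       NOT NULL,
    column_name  TEXT      NOT NULL,
    old_value    TEXT
);

CREATE OR REPLACE FUNCTION log_patient_change() RETURNS trigger AS $$
BEGIN
    IF NEW.last_name IS DISTINCT FROM OLD.last_name THEN
        INSERT INTO patient_audit (changed_by, patient_id, column_name, old_value)
        VALUES (current_user, OLD.id, 'last_name', OLD.last_name);
    END IF;
    -- ...and so on for every other audited column...
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER patient_update_audit
    BEFORE UPDATE ON patient
    FOR EACH ROW EXECUTE PROCEDURE log_patient_change();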

Nevertheless, a revision control system (RCS) approach is what's needed, where a SQL UPDATE would maintain a copy of the pre-updated record and freeze it from further changes.  RCS does its work by storing just the changes, or diffs, made to a document.  While it would be technically possible to do this with a database record -- for instance, using a BLOB in a sibling table -- there's a simpler and more practical method that also maintains relational integrity.
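
Here's a sketch of what that can look like in the schema -- again, invented table and columns, and not necessarily the exact method this post settles on. Each logical record keeps every version as its own frozen row, and an "update" becomes an insert:

-- every version of a record is its own row; old versions are never touched again
CREATE TABLE patient_note (
    note_id      INT       NOT NULL,  -- logical record id shared by all versions
    version      INT       NOT NULL,  -- 1, 2, 3, ...
    is_current   BOOLEAN   NOT NULL DEFAULT TRUE,
    body         TEXT,
    modified_by  TEXT,
    modified_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (note_id, version)
);

-- an "update" retires the current row and inserts the next version
UPDATE patient_note SET is_current = FALSE
 WHERE note_id = 42 AND is_current;

INSERT INTO patient_note (note_id, version, body, modified_by)
SELECT note_id, MAX(version) + 1, 'corrected text', 'steve'
  FROM patient_note
 WHERE note_id = 42
 GROUP BY note_id;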

Finding duplicate records in a database: the SQL HAVING clause

Sat, 2011-10-01 15:55 -- admin

One issue I run across occasionally is a table with duplicate entries, such as two entries for the same company in an accounts payable system.  This can create embarrassing problems with billing if ACME Inc #1 is 90 days overdue because someone posted a payment, and now a credit, to ACME Inc #2.
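
The HAVING clause makes finding them a short query (the table and column names here are hypothetical):

-- company names that appear more than once in the vendor table
SELECT company_name, COUNT(*) AS copies
  FROM vendors
 GROUP BY company_name
HAVING COUNT(*) > 1
 ORDER BY copies DESC;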
