I’m sure there are a bunch of really excellent use cases for the new Focus mode in iOS, but the one I created yesterday is a total game-changer for me. On so many occasions I’ve been playing guitar or recording something in GarageBand and had a phone call or text interrupt the session, and it’s a real challenge to remember to turn on Do Not Disturb each time I start a recording session in GarageBand.
Enter Focus mode! I created a Focus mode called “Recording” that turns off all alerts and interruptions, and set it to engage every time I launch GarageBand. Focus is pretty intelligent and automatically puts all of my iOS devices into “Recording” mode as soon as I launch GarageBand. This is awesome and I highly recommend it!
This is still in its infancy, but it’s fantastic to see AirPlay 2 support making its way into Shairport on Linux. I haven’t tested it yet, as I’ve been buying old AirPort Express devices (amazingly, they support AirPlay 2) to replace my Raspberry Pis, but it’ll be fun to pull out a Raspberry Pi and try to get synchronized audio playing out on my patio speakers.
I love my iPad Pro for recording. My current workflow involves laying tracks down in GarageBand and then either exporting the stems to Google Drive, where one of my bandmates adds them to our band’s Reaper projects, or dumping the project to my Mac and playing around with the mix in Logic.
But the best part about the iPad Pro as the nucleus of my recording setup is this:
This kind of flexibility can’t be beat. I’ve made and recorded more music over the past two weeks with this setup than I’ve made in years. Some of the motivation to record surely comes from Mirror Sound and just thumbing through those pages gave me all sorts of setup ideas.
Being able to use just a USB dongle and a MIDI keyboard/headphone rig makes for a very lightweight, portable setup for moving from room to room. I am also very happy with my Akai MPK mini, even if I don’t totally know how to use it to its full potential.
With the iPad Pro, I can also plug in an audio interface and do the full microphone recording setup in my office. I’ve got a good template set up in GarageBand for the two-mic multitrack setup, so as soon as I have something to record I can just pop down on my stool, start a new project from the template, and hit record. It takes about a minute to get set up and it sounds really damn good.
That said, I’m hoping Santa brings the new audio interface I’ve asked for so I can stop using my Zoom H4n, as that minute of setup time is due almost solely to how long it takes the Zoom to connect as an audio interface. That, and I still haven’t found a really stable USB hub that lets me run the interface, power, and my MIDI keyboard concurrently on my iPad. But them’s first-world problems.
Signed up. It’s a no-brainer for my family as we were already on the 2TB cloud storage plan.
It’s got Ted Lasso. Enough said.
kidding. Still AppleTV+ has some terrific stuff on it. And Ted Lasso.
I’ve hated Apple News ever since it started asking me if I wanted to open RSS feeds in it, but it seems to have stopped doing that, maybe? And I’m curious to see how the audio version of News works. So, the verdict is out on News.
Maybe the worst music service ever. But I sync my Spotify lists over to Apple Music and can play them on my HomePods, so there’s that. We had Spotify Family. I think I’m the only one who won’t fully make the jump to Apple Music, so I kept the individual Spotify plan for myself and we’ll see how the rest of the family makes the transition. I don’t think they’ll miss Spotify like I would.
I don’t know what it is and am not likely going to find out.
Kinda excited to see how this pans out. I use Zwift right now but everything about Zwift’s bluetooth/hardware integration feels janky. Hoping Apple can do this better.
For the past several months I’ve been using the Roon audio server to handle my home hi-fi listening. It’s a slightly pricey subscription for what I use it for, and I’m actively looking for an alternative. Basically, I just want a box that holds a bunch of lossless audio files and serves them up to my Raspberry Pi hi-fi with a DAC on it.
Being able to control playback through an iOS app is a must. So I’m on Roon for now and re-ripping a bunch of my CDs to lossless (an upside of quarantine/working from home).
Besides listening to losslessly ripped CDs, I am also really, really enjoying matrix recordings of Grateful Dead shows. Occasionally (well, frequently) when I download matrix recordings, the .flac files are missing good metadata. Applying metadata to FLAC tracks can be a bit tricky, so I thought I’d detail my process below.
Once you’ve downloaded a show, you’ll have a folder with a bunch of .flac files and usually a .txt file that contains the show information.
You’re going to want to “tag” your FLAC files using the information contained in that .txt file. There are a handful of Mac apps that do meta-tagging on audio files, but I use one called xACT.
X Audio Compression Toolkit does a zillion things but the one thing it does that nothing else seems to do is take a text file of song information and apply it sequentially to a bunch of audio files.
So, if your great-sounding FLAC matrix recording files are missing metadata, here’s how you fix that problem, easily, in xACT.
Open the app and hit the “tags” tab.
Load the FLAC files into the listing on the left side of the screen.
Next, open the .txt file that accompanied the FLAC files and you’ll find a listing of the songs like this:
You’ll want to edit this list to get rid of any line breaks, extra info, etc. I use TextMate to do this and it takes about 2 seconds to create this:
The key here is that you want exactly as many lines in the file as there are tracks in the xACT window; it will apply each line to the files sequentially. Brilliant. So, highlight the track listing (remember: no blank lines!), then in xACT click the small “Auto-name” box next to the “Title” tag field. This will pull up a window into which you can paste your sequential track names.
Click OK and then “Write Tags” in xACT. Bammo!! There you go.
I also like to add album art, the venue, etc., and then click “Write Tags” again before uploading the tracks to my Roon audio player, so when I’m done it looks like:
Note that as long as all of the tracks are highlighted on the left, you won’t actually see the track names displayed. You want them all selected when applying the Artist, etc. You can click an individual track to confirm that the track name was applied.
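If you’d rather skip the GUI entirely, the same sequential-titles trick can be approximated on the command line with metaflac (from the flac package). This is just a sketch, not my actual workflow: the filenames and titles below are made up for illustration, filenames are assumed to have no spaces, and the echo keeps it a dry run until you trust the pairing.

```shell
# Made-up track listing, one title per line (in real life this comes
# from the cleaned-up .txt file that accompanied the show).
printf 'Jack Straw\nTennessee Jed\n' > tracks.txt

# Stand-ins for the real flac files; ls sort order must match the listing.
touch d1t01.flac d1t02.flac

# Pair line N of tracks.txt with the Nth flac file and print the
# metaflac command that would tag it (dry run).
i=1
for f in $(ls *.flac | sort); do
  title=$(sed -n "${i}p" tracks.txt)
  echo metaflac --set-tag="TITLE=$title" "$f"
  i=$((i+1))
done
```

Drop the echo once the printed commands look right and metaflac will write the tags in place.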
Once I import that show into Roon it looks like this:
My new favorite Keyboard Maestro shortcut lets me toggle between audio outputs (my Built-in Output device and my USB output). The built-in output drives my iMac speakers; the USB output drives my amp. Combined with Rogue Amoeba’s superb SoundSource, I can easily bounce back and forth between controlling my music volume (Roon, Spotify, archive.org) and my Zoom volume using my keyboard’s volume controls. This sounds like it would be easy, and it is, sort of, by using switchaudio-osx (installable from the command line with brew install switchaudio-osx).
Anyway, this is a huge problem solver for me. Stoked. Click pic below for larger version if you want to see the actions.
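If you want the same toggle without Keyboard Maestro, the core is a few lines of shell. This is a minimal sketch assuming switchaudio-osx is installed; the device names are examples from my setup, so list yours with SwitchAudioSource -a and substitute.

```shell
#!/bin/sh
# Toggle between two output devices using switchaudio-osx
# (brew install switchaudio-osx). Device names are examples.
BUILTIN="Built-in Output"
USB="USB Audio Device"

# Given the current output device name, print the one to switch to.
next_output() {
  if [ "$1" = "$BUILTIN" ]; then
    echo "$USB"
  else
    echo "$BUILTIN"
  fi
}

# On macOS, wire it together like this:
#   SwitchAudioSource -s "$(next_output "$(SwitchAudioSource -c)")"
next_output "$BUILTIN"   # prints: USB Audio Device
```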
On Facebook the other day, a friend asked for some Grateful Dead live show recording suggestions.
I can’t imagine a show that I wouldn’t listen to, but I list some recent favorites below.
There is always gold to be mined in every show, somewhere.
That said, for the past year I’ve been really enjoying matrix recordings of live shows, where an audio engineer takes a really good Soundboard recording and one or two really good audience recordings and layers them on top of one another.
I go through phases where I just like audience recordings, and phases where I like the clarity of a really good SBD, but matrix recordings are another world entirely: sort of the best of both worlds, but greater than the sum of the parts, if you know what I mean.
Generally I categorize shows into a handful of eras: early shows with Pigpen and just Billy on drums (my favorites), good Wall of Sound recordings circa ’74, good ’77 shows when Jerry was at one of his peaks of creativity and dexterity, and then the later stuff, mid ’80s onward into the ’90s, which probably has about three sub-categories (e.g., with Bruce), though all of these categories are generally meaningless.
Sometimes in the ’80s Jerry plays like he did in the early ’70s. He was really a magical guitar player whose catalog of ideas was like lines of poetry that he would go back and mine again and again and again. But anyway. I have been listening to about 20 or so different matrix recordings this year. Some of them stand out, but they’re all standouts to me, and I would have loved to have been at any one of these shows. So here’s a little sample of some matrix recordings worth listening to:
If you’ve spent any time at all on YouTube watching videos of guitarists, you know that they range in quality from quick-and-dirty iPhone videos to more elaborate multi-screen, multi-track presentations.
Until this week, being quarantined and all with the coronavirus pandemic, I had never really given much thought to posting my own videos or how one might even go about doing so.
Over the past few days, I’ve hit on a pretty good middle-ground between the quick and dirty iPhone video and the more elaborate, high-production quality videos and figured I’d share how I do it.
Here’s an example of a video I recently recorded of Bob Dylan’s “Don’t Think Twice, It’s All Right” using the setup detailed below.
I’m going to break this HowTo into 3 sections:
Part 1, recording guitar and vocal into GarageBand on iPad using 2 external microphones
Part 2, recording the video component
Part 3, synchronizing the audio and video component and publishing the final product.
Recording guitar and vocal into GarageBand on iPad using 2 external microphones
I record my audio using GarageBand. By design I don’t do a lot of tweaking to the default settings: I use the preset “lead vocal” setting for the vocals and the default “nice room” setting for the guitar. I don’t fiddle with the EQ.
Recording into an iPad Pro with external microphones requires some kind of Audio Interface to convert the XLR or 1/4″ inputs of your microphones into USB for the iPad.
I have an older Zoom H4n Pro that does double duty as a field recorder and an audio interface (here’s how to set it up). I got lucky: I had no idea it had the audio interface feature when I bought it, but it does.
So I didn’t need to buy an interface. If you need one, for under $200, the Focusrite Scarlett 2i2 is referenced on a lot of websites/videos as being a good safe bet.
Once you have your microphones and XLR cables, here’s a diagram detailing how I get them into the iPad.
Once you have all the hardware hooked up, you can just record into GarageBand.
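For reference, here is the signal chain in text form. This reflects my gear as described in this post; swap in your own mics and interface.

```
SM58 (vocal)  --XLR--+
                     +--> Zoom H4n Pro (USB audio interface mode) --USB--> USB dongle --> iPad Pro --> GarageBand
e609 (guitar) --XLR--+
```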
You can, of course, go down some really deep rabbit holes of which microphones to use, mic placement, EQ, etc. Feel free. I’m using a Shure SM58 and a Sennheiser e609 that I usually use for mic’ing up my amplifier, but it seems to work just fine for vocals. It is very easy to start chasing marginal gains with this kind of stuff. My advice: get it good enough and start recording.
There are a couple of minor settings changes you need to make in GarageBand to do simultaneous multitrack recordings. This guy has a fantastic video. If you’re not super-familiar with GarageBand and don’t want to throw your iPad out the window trying to sort out multitrack recording on your own, I highly encourage you to sit through his tutorial.
Ok, so that’s the audio part! I’ll write up the video and synchronization stuff as soon as I can.
We are on mandatory lock-down here in NJ to help flatten the curve. Yesterday, late in the afternoon I made the mistake of looking at Facebook. I should have known better.
Facebook just seems to bring out the worst in people. So I figured I might try to tilt the scales a little bit more to the positive and record a song and post it. I am going to try to do more of these while we are locked down at home.
I’ve seen a couple of blogs (Kirkville, BirchTree) recently opining on stereo HomePod configurations and comparing them to a pair of Sonos speakers. I don’t have a pair of Sonos to compare my stereo HomePod configuration to, but my experience with a HomePod stereo pair may be useful for some, so I am sharing it here.
A few months ago we re-arranged the furniture in several rooms in our house. The net effect was that my Vandersteens (and, as such, my serious listening space) were relocated out of our living room and into a smaller room that has become my now dedicated listening area.
We spend a lot of time in our living room and, as we have large families, oftentimes with a lot of people. I needed a music solution to replace my traditional hi-fi and Vandersteen towers that didn’t take up nearly as much space. (Note: I have in-ceiling speakers in the living room, but they just don’t sound as good as regular speakers and really don’t fill up the room without creating two very loud areas underneath that make it impossible to carry on a conversation, so we never use them in that room.)
So when the opportunity came to pick up a second HomePod at a discount (I got mine refurbished from the Apple Store, but they show up new for $200 on sale on occasion), I decided to try a stereo pair of HomePods in the living room.
For this situation they are absolutely perfect. And by this situation I mean: a large living room area with seating all over the place where you want the music to sound good no matter where you are sitting. The HomePods are amazing at delivering good sound in this environment and I would argue that they are way better than my Vandersteens for this situation.
Sure, where the Vandersteens (or a pair of Sonos) might give you good sound with great stereo imaging and a convincing soundstage, the eight speakers in the HomePods give you a really diffuse stereo field instead.
Yes, you give up a single sweet spot with vivid imaging. That said, about 85% of the seating options in my living room get a really full stereo sound field where you hear a balanced representation of both the right and left speakers.
The HomePods are strange in this way in that you can be sitting very close to one of the pair but still not sure if what you’re hearing is predominantly coming from the speaker closest to you or the one on the other side of the room.
Moreover, as you move further and further away from the HomePods, the volume of the music does not seem to fall off quite so rapidly, meaning it’s easier to have a conversation in the room while music is playing, and the music volume always seems just about right no matter where you are sitting.
In this way, the HomePods remind me a lot of the Bose 901s. Say what you want about Bose, but it is near impossible to beat the experience a pair of 901s delivers to a roomful of people listening to music outside of the dead-center stereo imaging position that most speaker pairs mandate.
The HomePods, like the 901s before them, are for social music listening (as opposed to the lone experience of sitting dead center between a pair of towers) and they do a terrific job at that.
1.) This volume roll-off is similar to the effect that our Bose L1, with its array of 24 speakers, has in our live performances, where the music seems to be at a pretty constant volume no matter how near or far you are from the tower. It’s uncanny.
Setting up my new iMac, I’ve found a nice combination of apps to make listening to radio at my desk a real pleasure.
[update Jan 8, 2020: this setup still works great but I have also purchased the subscription to Triode and also use the Mac desktop version of the app when at my desk]
First up is Triode on my iPhone, a terrific iOS application from The Iconfactory, a long-time Apple software developer. On my drive out to Princeton, I lose reception to WBGO* (an NPR jazz radio station out of Newark, NJ), so being able to stream the station over the internet is necessary. Triode makes the whole internet-radio experience much better by looking up track info, making artwork available, etc.
Paired with Airfoil Satellite (also from a long-time Apple developer, Rogue Amoeba) installed on my iMac, I can send the audio from my iPhone to my iMac as if it were just another AirPlay device. I use some combo of Airfoil, Airfoil Satellite, and the Airfoil iOS remote all around my house on Mac hardware of various vintages. It is a fantastic audio utility. My only grudge is that it doesn’t send stereo output to my pair of HomePods, but I think that is an Apple limitation more than an Airfoil limitation.
A few notes:
My iMac sends audio output via 1/8” connector to one of these cool Tripath solid state mini-amps. I love this thing and have a few of them around the house. Sounds great. Easily powers a pair of bookshelf speakers.
I can’t get the WQXR Holiday Station to stream on Triode. The app, thoughtfully, has a way to add a new station manually using the stream’s URL but after spending a while inspecting WNYC’s page code/resources, I can’t find the URL. Would love it if anyone can share the actual URL.
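One generic trick that sometimes works for this: many stations publish a .pls or .m3u playlist file, and the raw stream URL sits inside it on a File= line. Here is a sketch with entirely made-up playlist contents (this is not WQXR’s real playlist, which I still haven’t found):

```shell
# Fake .pls contents for illustration; a real one would come from the
# station's site or from watching network requests as playback starts.
printf '[playlist]\nFile1=https://example.com/holiday-stream.aac\nTitle1=Example Station\n' > station.pls

# Pull the stream URL(s) out of the File= lines.
grep -i '^File' station.pls | cut -d= -f2-
```

Paste whatever URL falls out into Triode’s manual station field.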
Upgrading my MacBook Pro, the Catalina installer hung on “Setting up computer…,” but as I had already looked into that issue for a buddy of mine the other day, I knew it was a widespread problem, so I just rebooted and everything came up fine.
It was a bit of a bummer that Scrivener 2 didn’t make the cut for Catalina. As it’s a 32-bit app and I’m not paying to upgrade, I went through and exported all of my Scrivener projects as text files before the upgrade. This is part of a larger plan to narrow down the number of buckets/apps I use for writing, notes, etc. Exporting ten or so projects was a bit tedious, but I discovered that I have written A TON of stuff over the past six years or so. Way more than I thought I had. And that’s not counting journal writing, which lives in Day One.
Other than the loss of Scrivener, I’m noting mostly positives since upgrading:
apps launch so much more quickly under Catalina.
Sidecar doesn’t work with the last, best MacBook Pro Apple ever made. Not sure it would have changed my life, but it would have been cool.
the Photos app is really, really good at picking out your best photos. It’s uncanny. I wish there were some way to say “find all my lousy photos so I can just batch delete them.” But maybe that’ll come. For now, it’s great just to scroll through the days or months view and see what Photos thinks are my best shots.
the Music app is better, though since moving my library to the cloud with iTunes Match, I’m noticing some wonkiness with my album covers getting lost. I need to carve out some downtime to clean up my album covers; it’ll make browsing for something to listen to much more engaging.
iCloud account info under system preferences seems to be much better organized now, especially around Family Sharing type information.
I’ve been seeing references to the latest beta of Spotify on reddit where users report that Siri support now works with Spotify. This will be great if it works well!
I have been loving my stereo HomePods but am itching for better automation/Shortcuts and Siri integration. I have been using my iTunes library to play most of the music, but it’ll be great to have access to my Spotify playlists.
I’m running stereo HomePods in our living room. Boy what a difference a second HomePod makes!
They don’t sound as good as my beloved Vandersteens, which got relocated during a recent furniture rearrangement, and frankly the HomePods don’t make for great “active listening” because the imaging is way too fuzzy and nebulous, but for just hanging out and getting things done, the sound of stereo HomePods is really excellent.
So naturally I wanted to be able to yell at the HomePods and ask Siri to “play the album Babylon by Bus.” But I am cheap and already subscribe to Spotify, so I didn’t want to subscribe to Apple Music on top of it. I dug around a bit and ended up springing for the $25/year iTunes Match service, which lets me take the 30k songs in my iTunes library and upload them to iCloud so they’re available on all my devices (iPhone and HomePods in this case).
It took a lot of BS and fiddling to get this solution to work but ultimately it did and now I can ask Siri to play anything that is in my music library without having to airplay it from my Mac or iPhone.
First there was the difficulty of just getting my 30k songs into the cloud. That took about 3 days and 3 attempts at telling iTunes to “update iCloud Music Library.” But after about 3 days, all of the tracks had either “matched” or “uploaded” next to them; I presume the latter is when the iTunes Store doesn’t have access to the track.
After that, I was able to see all of my music on my iPhone, which was pretty cool (tons of live Dead shows now!). But the HomePods would still say “sorry, I can’t find that” when I asked them to play something from my library. So I removed them from the Home app on my phone and then restarted them, at which point they became aware of all of the music in my library.
Very cool to be able to say “hey Siri, shuffle some Antonio Carlos Jobim” and have it just work.
A big part of the sound of Kül d’Sack (one of the bands I play in) is our Bose L1 paired with the Bose Tonematch mixer.
The prebaked digital modeling settings for Shure microphones are just great on these Bose devices. The sound quality is uncannily good.
Unfortunately a few gigs back one of the channels on the mixer started exhibiting some static noise. I thought it might have been one of the guitar rigs but over a few weeks the channel noise made it clear that the issue was inside the mixer.
I called Bose, explained the problem and within 90 seconds the customer service rep said he wanted to send me a new unit. I was worried about having to pack mine up and have it repaired. A new unit is much better.
Super-pleased with this whole process; it’s a joy that something that works and sounds so good also has a good company standing behind it.
I’ve been using Spotify since July 2011 (when it first became available in the US). It is my go-to streaming service. We’ve had the family plan for years. I use it to work on collaborative playlists with the other musicians I play with, and I use it when I am learning new songs; being able to hear multiple versions of a song, including other artists’ versions, is super helpful.
But primarily I use Spotify to discover new music. Spotify’s discovery features are without equal. I’ve become aware of and a fan of more new musicians on Spotify than all the radio or record stores in the world could have ever turned me on to.
From Spotify’s weekly Discover playlist which has an uncanny knack for presenting me with artists I’ve never heard of (though occasionally, too, it is way off base) to its “related” functions that allow you to do really deep dives into obscure genres, Spotify does an amazing job at preventing stagnation in your listening habits.
What this means is that I’m regularly listening to artists who I would have never listened to otherwise. The problem is that Spotify (and, frankly all of the other streaming services) pay these artists squat. That streaming royalties are too low is a given.
But now that Apple seems willing to pay artists more than Spotify does, the question is whether an unfairly low royalty payment is better than no royalty payment at all. Meaning: if I hadn’t discovered the artist on Spotify, I would never have listened to them at all. I mean, 1% of $1.00 is better than 0% of $10, right?
At issue is the Copyright Royalty Board’s 2018 decision to raise the rate paid to songwriters by 44% over the next five years. Spotify, along with three other streaming services — Amazon, Google and SiriusXM/Pandora — is appealing that decision to the board, a move that has no direct precedent. The four companies have been shellacked with criticism by artists for their action…
Apple, which would also benefit if the rate increase is nullified, is not part of the appeal…
As a sign of how badly the PR war is going, many songwriters are canceling Spotify subscriptions and doing so publicly on social media, where they make sure to note their subscription fees will now be going to Apple Music.
I understand why musicians would want to publicly cancel their Spotify accounts. They are trapped working in an industry that is and always has been horrifically unfair to musicians.
That said, I’ve been dreading the day Apple takes off its gloves and reaches into its bottomless pockets in its war with Spotify. I love a lot of Apple’s stuff but, man, Apple Music absolutely sucks. Its interface is shit. Its discovery features are abysmal. I want Spotify to stay around, stay viable, and, importantly, keep finding new music for me to listen to.
As a musician, I’m torn here: go with the company that helps listeners find new music but doesn’t pay those musicians well, or go with Apple, which pays more but in the end probably pays a smaller universe of musicians because it pushes the same limited pool of performers to everyone.
For now, I’m sticking with Spotify but will keep exporting my playlists to Apple Music for when Apple drives them out of business.
I like to keep notes about the gigs I play with my various bands. Sometimes I log very detailed entries about changes we need to make to our gear or sound settings for the next gig, other times it’s just a few quick words so I can remember who came out to see us or what riff I need to work on in a given song for the next gig.
Naturally I use Day One to record this information. Last year I started using an iOS Shortcut that I wrote that prompts me for the type of information I want to record about each gig. The shortcut presented me with a list of questions and then combined all of my responses to those questions into a nicely-formatted Day One journal entry.
The problem is that I am not a great Shortcuts writer. I’m lazy, so I didn’t add any flow control to save my responses to the prompt questions as I went along. Meaning, after answering 3 or 4 questions and typing the answers out on my iPhone (which is, needless to say, tedious), I would occasionally forget about my lame programming skills and try to pause the shortcut while I went over to Facebook or somewhere to download a photo from the gig to add to the entry. Nine times out of 10 I would hit “Done” in the Shortcuts app to do this, and in the process I would lose all of the responses I had already typed. Frustrating.
This morning I did just that. Again. I hit Done in Shortcuts while answering the gig prompts in order to go get a photo from Facebook, and lost all of the details I’d already written about last night’s gig. Let me be clear: this isn’t Shortcuts’ fault or Day One’s.
Then I realized I was totally overthinking this whole need-to-be-prompted bit, so I trashed my old shortcut and just wrote up this little gem, which works just fine and doesn’t carry the risk of me screwing it up and losing text. Moral of the story: don’t overthink it! Maybe instead of using Shortcuts to prompt you with a long list of questions, just create a template entry in Shortcuts instead.
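For what it’s worth, the replacement barely counts as programming: the shortcut just creates a Day One entry from a fixed block of template text, and I fill in the blanks after the fact. The headings below are illustrative, based on the kinds of things I like to log about each gig:

```
Gig Log
Venue:
Who came out:
Gear/sound changes for next time:
Riffs/songs to work on:
```

No prompts, no flow control, nothing to lose when you bounce out to grab a photo.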