• X-E2S Shots

    I’ve been trying to remember to bring my camera out with me more often, employing little tricks like keeping my X100F on my kitchen counter near my keys. But even when I remember to bring it, I haven’t been shooting at all.

    But for Christmas this year I received a 7artisans 25mm/1.8 lens. This lens is notable for a few reasons:

    1.) My X100F has a fixed lens, so in order to use the 7artisans I needed to pull out my older X-E2S, which had been gathering dust on my shelf.
    2.) It is a 25mm, which on the Fuji sensor makes it much closer to a 35mm, my favorite focal length.
    3.) It is a fully manual lens: it doesn’t autofocus or automatically adjust the aperture.
    4.) It is a very inexpensive lens, especially compared with the Fuji line, but it has some character to it.

    This X-E2S is older and has an older sensor (X-Trans II) than the X100F (X-Trans III), but I think I prefer it over the X100F. I just enjoy shooting with it more. I’m not sure I can explain why it feels different despite such similar bodies and dials, but I definitely like my X-E2S more.

    Anyway, I brought it out last night when we went out to dinner and captured a few snaps in the restaurant:

    I’ve been struggling to find a film simulation that works indoors and has an ok white balance. I think this Kodak Chrome simulation from Ritchie Roesch does the trick:

    Classic Chrome
    Dynamic Range: DR200
    Highlight: -1 (Medium-Soft)
    Shadow: 0 (Standard)
    Color: +1 (Medium-High)
    Sharpness: 0 (Standard)
    Noise Reduction: -2 (Low)
    White Balance: Auto, +2 Red & -2 Blue
    ISO: Auto up to ISO 3200

    Exposure Compensation: 0 to +2/3 (typically)

    I’m going to keep using this one for a while as I’d really like to settle into a single film sim and really learn it.

    I also grabbed a shot as we stopped for gas. I’d intended to use the Cinestill 800 simulation because of its suitability for nighttime shooting, but because I couldn’t quite remember which presets I had assigned, I ended up using more of a Kodak Negative-type sim. It still looks cool though:

    Anyway, hope this is the start of me bringing my camera out with me more and remembering to actually shoot with it.


  • Fender TSA Flight Case Parts

    Updated! See below

    The latches on my Fender TSA flight case are all shot. Rather than toss the case, I want to try to fix it. It took me a while to find a number for Fender. (At the time of this post the number for consumer relations at Fender is 1-800-856-9801).

    After a while on hold I spoke with a rep who told me that Fender doesn’t have parts for the case, but he referred me to https://www.skbcases.com, which has a few replacement latch options (though not exactly what I need). I reached out to SKB Cases via email with details of what I need replaced and will update this post when I hear whether they’re able to help.

    Update: I heard back from customerservice@skbcases.com, who asked me for some photos of the case and latches, and they’re shipping out the parts needed to repair the case. A really great outcome; I definitely recommend SKB for their great customer service.


  • What is Nature?

    Over dinner recently we have been talking about how we are living in a world that our bodies and minds are not designed for. This has been a recurring theme for a variety of reasons. I then serendipitously came across this quote:

    “The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.”

    Edward O. Wilson

    I’m not sure how applicable that quote is but, maybe? It does feel like things are moving so quickly. It does feel like perhaps in our current state we are not equipped to handle the pace at which we are expected to engage with the world. It does feel like we are eating things that our bodies were never made to eat and working in ways that our bodies were not designed to sustain in 8-10 hour stretches.

    But none of that is to say these things aren’t natural.

    I disagree with the proposition that computers and other technology and processed foods and the pace at which we are living are unnatural.

    Everything is nature.

    We are nature.

    The things we build are nature. I’m not sure there is much wisdom in arbitrarily distinguishing between natural and unnatural.

    By way of example, I was having a conversation with someone who said that processed foods were unnatural. My first response was that, well, they are nature, but that doesn’t mean that processed foods are good for you. This in turn got me thinking about the Tao. Specifically, the Tao does not distinguish.

    Instead, what I realized is that processed foods (or AI, or cars, or whatever) are natural and not outside of the Tao. But just because something is not separate from nature doesn’t mean that it serves us.

    Whether something is natural or unnatural is irrelevant. What is important is: does X serve us? Answering that question seems to be why the Buddha articulated the Eightfold Path. Not so that we could make false distinctions, but so that we could have a tool set to look at something (a thought, a technology, a practice) and determine: does this serve me?

    And ultimately, if taken to its logical conclusion in Buddhism, the question “does this serve me?” really means “does this serve everyone?”

    We should be able to look at something and ask that question, recognizing that objects, technologies, foods, and thoughts are always natural; the important question is whether the way we use them serves us.


  • Read: The Inside Story of Microsoft’s Partnership With OpenAI

    This article offers a comprehensive exploration of the collaboration between Microsoft and OpenAI, delving into the factors driving this partnership and its implications. Key insights include:

    • Kevin Scott, Microsoft’s CTO, views AI as a tool that empowers non-programmers to code, a perspective shaped by his upbringing in rural poverty. This highlights AI’s potential to democratize technology.
    • How Copilot got its name and why it’s such a fitting name
    • Copilot is the result of Microsoft and OpenAI’s partnership
    • That Microsoft is pinning its future on Copilot and, in turn, OpenAI
    • That the primarily academic, non-business makeup of the OpenAI non-profit board means one of two things: either the board was 100% correct in firing Altman and we should all be petrified of what is happening there, or they were totally out of their depth in running the board. Only time will tell which is true.
    • The challenge of getting users to understand that Copilot isn’t perfect and isn’t always right, but that this doesn’t mean it can’t be very helpful

    I couldn’t put this piece down, in part because it helped me crystallize my understanding of MS’s vision for Copilot. Copilot will be both an enormous shift in how we work day to day and so integrated and unobtrusive in our workflow that we won’t really feel how much it has changed the way we work together until we look back at pre-Copilot days.

    It would have been good to get some alternate viewpoints to the “generative AI is amazing and is going to make the world a better place” narrative, along with some specific discussion of the dangers. But the challenge there is captured in the article itself:

    Scott, though, believed in a more optimistic story. At one point, he told me, about seventy per cent of Americans worked in agriculture. Technological advances reduced those labor needs, and today just 1.2 per cent of the workforce farms. But that doesn’t mean there are millions of out-of-work farmers: many such people became truck drivers, or returned to school and became accountants, or found other paths. “Perhaps to a greater extent than any technological revolution preceding it, A.I. could be used to revitalize the American Dream,” Scott has written.

    Scott wanted A.I. to empower the kind of resourceful but digitally unschooled people he’d grown up among. This was a striking argument—one that some technologists would consider willfully naïve, given widespread concerns about A.I.-assisted automation eliminating jobs such as the grocery-store cashier, the factory worker, or the movie extra.

    GitHub employees brainstormed names for the product: Coding Autopilot, Automated Pair Programmer, Programarama Automat. Friedman was an amateur pilot, and he and others felt these names wrongly implied that the tool would do all the work. The tool was more like a co-pilot—someone who joins you in the cockpit and makes suggestions, while occasionally proposing something off base. Usually you listen to a co-pilot; sometimes you ignore him. When Scott heard Friedman’s favored choice for a name—GitHub Copilot—he loved it. “It trains you how to think about it,” he told me. “It perfectly conveys its strengths and weaknesses.”

    Nine years later, the company created Tay, an A.I. chatbot designed to mimic the inflections and preoccupations of a teen-age girl. The chatbot was set up to interact with Twitter users, and almost immediately Tay began posting racist, sexist, and homophobic content, including the statement “Hitler was right.”

    Kevin Scott believes that the discourse around A.I. has been strangely focussed on dystopian scenarios, and has largely ignored its potential to “level the playing field” for people who know what they want computers to do but lack the training to make it happen.

    Then, to add yet another layer of protection, Microsoft started running GPT-4 on hundreds of computers and set them to converse with one another—millions of exchanges apiece—with instructions to get other machines to say something untoward. Each time a new lapse was generated, the meta-prompts and other customizations were adjusted accordingly. Then the process began anew. After months of honing, the result was a version of GPT-4 unique to Microsoft’s needs and attitudes, which invisibly added dozens, sometimes hundreds, of instructions to each user inquiry.

    The Copilot designers also concluded that they needed to encourage users to essentially become hackers—to devise tricks and workarounds to overcome A.I.’s limitations and even unlock some uncanny capacities.

    The dismissed board members, meanwhile, insist that their actions were wise.

    “A.I. is one of the most powerful things humans have ever invented for improving the quality of life of everyone,” Scott said. “But it will take time. It should take time.” He added, “We’ve always tackled super-challenging problems through technology. And so we can either tell ourselves a good story about the future or a bad story about the future—and, whichever one we choose, that’s probably the one that’ll come true.”

    Source: The Inside Story of Microsoft’s Partnership With OpenAI – Charles Duhigg


  • ChatGPT- Home Recording Studio Assistant

    I’m rewiring my recording setup. I have a pretty good idea of what I want to accomplish but decided to give ChatGPT a chance to document my setup and it did surprisingly well. I also wanted to relocate where my stereo compressor sits in the chain so that I could use it as an outboard compressor in Logic. It also did really well with that. It totally fell down on generating a wiring diagram, but these instructions are pretty good:

    ChatGPT’s wiring details for the Audient iD22, Behringer ADA8200, an outboard stereo compressor, and my iMac:

    1. iMac to Audient iD22:
    • Connection Type: USB
    • Details: Connect the Audient iD22 to your iMac using a USB cable. This connection is used for audio data transfer between the iD22 and your iMac, as well as for powering the iD22 if it does not have an external power source. Note: this is incorrect; the iD22 does have external power.
    2. Audient iD22 to Behringer ADA8200:
    • Connection Type: ADAT Optical
    • Details: Use an ADAT optical cable to connect the ADAT Out of the ADA8200 to the ADAT In of the iD22. This connection is used to expand the number of inputs available on your iD22 with the preamps from the ADA8200.
    • Note: I’m using the ADA8200 as master clock at 48 kHz. The iD22 has a setting to use the digital input as master.
    3. Outboard Stereo Compressor to Audient iD22:
    • Connection Type: 1/4″ TRS cables
    • Details:
      • From iD22 to Compressor: Connect one of the line outputs (e.g., Line Output 1) of the iD22 to the left input of the stereo compressor, and another line output (e.g., Line Output 2) to the right input. This setup is used to send audio from the iD22 to the compressor for processing.
      • From Compressor to iD22: Connect the left output of the stereo compressor to one of the line inputs (e.g., Line Input 1) on the iD22, and the right output to another line input (e.g., Line Input 2). This setup is used to return the processed audio from the compressor back into the iD22.
    4. Microphones and Instruments:
    • Connection Type: Various (XLR or 1/4″ TS cables)
    • Details: Connect your microphones and/or instruments to the available inputs on both the iD22 and the ADA8200, depending on where you need them.
    5. Monitors/Headphones:
    • Connection Type: Various (1/4″ TRS, XLR, or RCA, depending on your monitors)
    • Details: Connect your studio monitors and/or headphones to the appropriate outputs on the iD22 for monitoring.

    As a note to self here, I couldn’t get all 8 of the ADA8200’s inputs to appear in the Audient software mixer until I launched Logic, then they appeared. Which was weird.

    And here’s how to use the Outboard Stereo Compressor in Logic:

    To use a stereo outboard compressor during mixing or mastering in Logic Pro, you will need to set up an external effects loop. This setup allows you to route audio from Logic through your outboard compressor and back into Logic for further processing. Here are the specific steps:

    1. Connect the Compressor to Your Audio Interface:
      • Use two 1/4″ TRS cables to connect two line outputs from your Audient iD22 to the stereo inputs of your compressor.
      • Use another two 1/4″ TRS cables to connect the stereo outputs of the compressor back to two line inputs on the iD22.
    2. Configure Logic Pro for External Processing:
      • Open Logic Pro and load your project.
      • Go to the ‘Mix’ menu and choose ‘I/O Labels’. Here, you can label the inputs and outputs you are using for the compressor. This step is optional but helps keep things organized.
    3. Create an Aux Channel for the Compressor:
      • Create a new Aux channel strip in your Logic project.
      • Set the input of this Aux channel to correspond to the line inputs where your compressor is returning its signal. For example, if you’ve connected the compressor’s outputs to inputs 3 and 4 on the iD22, set the Aux input to Input 3-4.
      • Set the output of this Aux channel to the line outputs you’re using to send audio to the compressor. For instance, if you’re sending audio out through outputs 3 and 4, set the Aux output to Output 3-4.
    4. Send Audio to the Compressor:
      • Use the sends on your audio tracks or buses to send audio to the new Aux channel. This routes the audio through the compressor.
      • Adjust the send level to control how much signal is being processed by the compressor.
    5. Monitoring and Adjusting Levels:
      • Monitor the signal level going into and coming out of the compressor to avoid clipping.
      • Use the Aux channel’s fader to control the level of the signal returning from the compressor.
    6. Latency Considerations:
      • Be aware of any latency introduced by this setup. In some cases, you might need to adjust for latency manually in Logic Pro.
      • Logic Pro X offers a feature called ‘Low Latency Mode’ which can be helpful, but you may still need to adjust the timing of the tracks manually if there is noticeable delay.
    7. Mixing with the Compressor:
      • Use the compressor on individual tracks, buses, or the master bus as needed during mixing or mastering.
      • Adjust the compressor settings as you would with a plugin, but remember that these settings are ‘printed’ to the audio in real-time.
    8. Bounce (Export) Your Project:
      • Once you are satisfied with the sound, bounce (export) the mix or mastered track, including the processing from your outboard compressor.

  • Crazy Fingers

    I’ve been ruminating over how the chord changes to Crazy Fingers ever came about. The changes don’t feel like something you’d arrive at organically outside of some melody.

    Last night my son turned me on to Distorto from the ’75 studio rehearsals, and I now think that Jerry had this really interesting melody in mind that he then layered chords over. Give it a listen if you’re not familiar: https://youtu.be/05M5sjlP2YA

    The ladder-rung-like guitar melody on Distorto keeps climbing and descending over and over again (anticipating the eventual lyric melody) and you can make out Bobby’s chords and hear the song taking shape. Very cool.

    I’m wondering if those ladder melodies are really just the arpeggiated notes of the chords? My ear isn’t good enough to tell without my guitar in hand. Must investigate!

    And look, let’s not ignore just how crazy it is to hear JG playing with that tone!


  • dowpy – WordPress Posts to Day One

    I write private journal entries and public journal entries. The public ones get posted here. I had been looking for a way to get those public entries into Day One so then when I’m reviewing the “On this date…” feature of the app, I see not only my private Day One journal entries but also my WordPress posts from that date. First step was to bulk import all of my old WordPress posts (20+ years, crazy).

    Next up was to find some way to make sure that new WordPress posts somehow got auto-imported into my Day One journal. Enter dowpy: a Python script that pulls new WordPress entries into Day One.

    I’ve had this script up and running for a few weeks now without error so figure it’s time to share it in case anyone else has a similar use case.

    I run this via cron on my Mac. It grabs the ATOM feed of my website and pulls in new entries into a Day One Journal called, surprise, “WordPress Entries.”
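    A minimal sketch of a crontab entry for this; the script path and schedule here are placeholders, so adjust to taste:

    ```shell
    # run dowpy once a day at 9:00 and keep a log for debugging
    0 9 * * * /usr/bin/python3 "$HOME/bin/dowpy.py" >> "$HOME/Library/Logs/dowpy.log" 2>&1
    ```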

    https://github.com/sjimwillis/dowpy
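    If you're curious about the shape of it, here's a rough, simplified sketch of the approach (not the actual dowpy code): fetch the Atom feed, skip entries we've already imported, and hand new ones to Day One. The feed URL and file paths are placeholders, and it assumes Day One's `dayone2` command-line tool is installed.

    ```python
    #!/usr/bin/env python3
    """Sketch of a dowpy-style importer: Atom feed -> Day One CLI."""
    import json
    import subprocess
    import urllib.request
    import xml.etree.ElementTree as ET
    from pathlib import Path

    FEED_URL = "https://example.com/feed/atom/"   # your WordPress Atom feed (placeholder)
    JOURNAL = "WordPress Entries"                 # target Day One journal
    SEEN_FILE = Path.home() / ".dowpy_seen.json"  # remembers already-imported entry IDs

    ATOM = "{http://www.w3.org/2005/Atom}"

    def parse_atom(xml_text):
        """Return a list of (id, title, content) tuples from an Atom feed."""
        root = ET.fromstring(xml_text)
        return [
            (
                entry.findtext(ATOM + "id", ""),
                entry.findtext(ATOM + "title", ""),
                entry.findtext(ATOM + "content", ""),
            )
            for entry in root.findall(ATOM + "entry")
        ]

    def new_entries(entries, seen_ids):
        """Filter out entries whose IDs we've already imported."""
        return [e for e in entries if e[0] not in seen_ids]

    def main():
        seen = set(json.loads(SEEN_FILE.read_text())) if SEEN_FILE.exists() else set()
        with urllib.request.urlopen(FEED_URL) as resp:
            entries = parse_atom(resp.read())
        for eid, title, content in new_entries(entries, seen):
            # dayone2's `new` command creates an entry from the given text
            subprocess.run(
                ["dayone2", "-j", JOURNAL, "new", f"{title}\n\n{content}"],
                check=True,
            )
            seen.add(eid)
        SEEN_FILE.write_text(json.dumps(sorted(seen)))

    # main()  # uncomment to run against a live feed (cron/launchd would call this)
    ```

    The real script also deals with links and images; this just shows the shape of the fetch-dedupe-import loop.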

    For it to work you’ll probably want to:

    • make sure it runs daily
    • make sure your feed contains full entries and not just excerpts
    • note that it only pulls in what’s in your feed (so if you want pages as well as posts, you’ll need to make some changes to the stock WP feed)
    • note that, because it only pulls in what’s in your feed, backfilling older posts (I pulled in about 20 years of mine) requires the approach in How to Bulk Import Posts into Day One

    It does a good enough job at handling links and images, though it’s not perfect. I should also note that Apple really doesn’t want you to use cron; they want you to use launchd instead. I will spin up a launchd config file when I get some time, since launchd will run a job it missed while the machine was asleep once it wakes, which would be nice.
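    For the launchd route, here's a sketch of what the LaunchAgent plist might look like (the label and paths are placeholders). It would live in ~/Library/LaunchAgents/ and get loaded with launchctl:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
      "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Label</key>
        <string>com.example.dowpy</string>
        <key>ProgramArguments</key>
        <array>
            <string>/usr/bin/python3</string>
            <string>/Users/you/bin/dowpy.py</string>
        </array>
        <!-- fire once a day at 9:00; launchd runs a missed interval at next wake -->
        <key>StartCalendarInterval</key>
        <dict>
            <key>Hour</key>
            <integer>9</integer>
            <key>Minute</key>
            <integer>0</integer>
        </dict>
    </dict>
    </plist>
    ```

    Then something like launchctl load ~/Library/LaunchAgents/com.example.dowpy.plist would register it.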


  • Why Federate?

    Over at Dave Winer’s Scripting News, Dave makes the case that federation and interop of social media is maybe the wrong move. I think in some cases it may be. Thinking a lot about why and what I write here, it’s rarely because I want to engage in conversation but because I want to share (knowledge, inspiration, whatever). That’s different than conversation.

    I have built so much on top of RSS and so much of that is due to Dave Winer. I do see ActivityPub as something like a two-way RSS feed, but maybe not every internet destination needs a two-way flow?

    For a news or community site, well, social media interoperability and conversation may be just the ticket. Anyway, Dave’s post is a good one:

    http://scripting.com/2023/12/13.html#a135218


  • Re-listening

    In just a few weeks of self-hosting a Navidrome server that I’ve also made accessible from outside my house, I’ve noticed a significant shift in my listening habits. I’m doing so much more re-listening to albums in my collection than I did with Apple Music.

    Apple Music often leaves me with a sense of FOMO – there’s always something else or new to listen to, making it rare to listen to the same album twice. Contrast that to my Navidrome box which has about 150 CDs or so. The collection is growing, slowly.

    Each week, I’m enjoying hunting for cheap used CDs to rip to fill holes in my collection. I’m finding that after I rip a CD and load it into Navidrome, it appears under the “Recent Albums” section that I default to. And so I end up listening to those albums a few times a week. It’s a completely different experience to listen to the same few albums over and over throughout the week. I can’t remember ever doing that with Apple Music.

    I’ve still got my Apple Music account to support listening on our house full of HomePods, but between Navidrome and Play:Sub, I’m really enjoying digging into a collection of music that I own. Some of these albums I’ve had for ages and it’s great to revisit them like old friends. (I’ve got on a Cannonball Adderley album that’s been in my collection for probably 30 years at this point—it’s amazing how quickly I can recall his solos on certain cuts).

    A quick tip for my future self and other users: If Navidrome stops syncing with last.fm, simply navigate to the profile->personal link in the upper right corner to troubleshoot.


  • ChatGPT and “Humanities Types”

    In his latest issue of Galaxy Brain, Charlie Warzel dismisses the value of ChatGPT, in part because he’s unable to see its value or potential: the ability to control or derive value from the tool, he suggests, is outside the grasp of most humanities types.

    A good ChatGPT whisperer understands how to sequence commands in order to get a machine to do its bidding. That’s a genuine skill, but one that eludes me as well as some other humanities types I know. The best ChatGPT prompters I know tend to be good systems thinkers or at least well-organized people—the kind who might create a series of automated protocols and smart-home integrations to turn their lights on and off. I’m the guy who sees romance in wandering around in the dark, bumping into a coffee table, to find the switch.

    I would argue that “humanities types” are some of the best positioned to exploit the value of ChatGPT and other large language models. Humanities types, as he refers to them, understand language and the power of precise language more than most. The ability to construct exactly the right language or prompt is one of the key skills needed to extract value from ChatGPT.



Reading Notes

  • Unable, then, to see the world because I have forgotten the way of being in the world that enables vision in the deepest sense, I […]
  • Suppose Bob writes an email to Sue, who has no existing business relationship with Bob, asking her to draw a picture of a polar bear […]
  • The large majority of the world’s decaffeination still happens through chemical-based processes that use things like methylene chloride or ethyl acetate. I don’t know what […]
  • All the forces at play within us and without seem to be centrifugal forces, pulling us apart. I remain interested in understanding the nature of […]
  • FWIW, my Emacs of the moment is emacs-plus@29 installed by Homebrew: brew install emacs-plus@29 --with-mailutils --with-xwidgets \ --with-imagemagick --with-native-comp Source: Browsing in Emacs – Volume […]
