55: The solution is always to write more tests
Just before we started recording, I sprung this on you: to figure out, or rather to run a test to see, how long our unit tests take to run on your machine.
Right. And it took me way too long, because I didn't realize that. Is that a new thing with Swift Testing, that it no longer crashes, or not crashes, but no longer fails, when the database is not running?
No, I think that's due to the way I changed the withApp thing, you know, the way it changed in the transition to Swift Testing. We can't use the normal setup anymore, and I think the new setup is different in that it hangs rather than failing. I forgot to add a timeout. That's about to change anyway, so that's an unfortunate side effect of the new setup right now. By the way, is this the show?
This is the show. We are doing a live reveal of improvements.
It was just that the way you talked about the withApp thing was with no context, so nobody will know what that is. Just imagine there's some huge, complex thing that's happening, and it's happening differently. To give a very, very brief bit of context: as we talked about last episode, we've recently switched the project across to Swift Testing. And it used to be the case that if the Docker container with the Postgres database that we use to run our database tests was not running, a test would fail. And you'd know straight away, well, the first thing you would do is spot that it was a database connection error, and you'd go launch the database. But that is no longer the case due to, as Sven was saying, the new way that we're setting up the tests there.
the test there. So the very first time that I ran it, it took about 20 minutes because the database
container wasn't running. - After how many minutes did you think something was wrong?
- Well, I was doing other things at the same time. So I came back to check on it every now and again.
But yes, with Xcode 16.2 and the --no-parallel flag passed to the tests, it was 40.6 seconds.
- So the reason I'm asking is, as we talked about last time, one of the reasons for switching to Swift Testing was that it would allow us to run the tests in parallel, more easily, I should say. As I said last time, it would always have been possible with XCTest; it's just a lot harder to do there. So we used the opportunity of the transition to Swift Testing as a prerequisite for the parallel testing. And the reason I'm asking how long it takes for you is to give you a really nice surprise in how fast it can run with the new branch that we have in the repository, where parallel testing is active and working. - Fantastic.
- And you'll be delighted to hear that on, so I think you run this on a MacBook Pro, don't you?
- That's correct, yeah. An M1 Max. - Yeah. So on my M1 Max MacBook Pro,
the tests now run in under eight seconds. - Wow.
- And that's by running against eight databases in parallel. And there's really very, very little additional performance to be gained if you increase that number; with 16 databases you're effectively at around seven seconds. I mean, you might get half a second faster, but there's really nothing to be gained there. It's slightly faster still on the Studio, but more databases don't help a lot there either: on the Studio I can get it to run in under six seconds, and with 16 databases, under five.
I actually, first time it ran that fast, I thought, well, is it skipping tests? But
it's not skipping tests. - Right.
- And I'm actually surprised that it takes 40 seconds for you running normally, because, oh yeah, actually there's a huge spread when you run it normally against a single database. 35 seconds is the fastest I have, but I've also had it run in 51 seconds. So the jitter is really quite large when you run the tests normally, consecutively, serially. What I found really interesting is that the parallel runs are super consistent: I get 5.6, 5.7, 5.7. It's like they don't jitter; there's a jitter of less than half a second between iterations.
I guess that comes down to the machine being really well utilized.
- Interesting. - And the way this is set up is that we spin up a fixed number of databases at the start of the test run, and in this case N is eight. And then the database tests, so not all tests have to go through this, but the ones that need a database, will check with the database pool whether a database is available. If one is, a test will retain it, sort of grab it from the pool, run, and after it's finished, return it to the pool. And if there's no database available at all, a test will just sleep for a hundred milliseconds and then check again. So it is a sort of polling mechanism, but it works well enough, and they're not churning a lot.
And that's the way that works. - And is there a lot of complexity in the code for that? - Actually there isn't, no. It's a single database pool actor that I added. And the withDatabase helper, the thing we had to introduce to make the database tests work with Swift Testing, really just asks the pool: give me a database. And the pool will just block and only return once a database is there. And that makes it really easy to swap in with the existing test runner.
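As a rough sketch of what a pool like that could look like (the polling interval comes from the description above, but the names and code here are illustrative, not the actual Swift Package Index implementation):

```swift
import Foundation

/// Hands out a fixed set of databases (identified here by name) to tests.
actor DatabasePool {
    private var available: [String]

    init(databases: [String]) {
        self.available = databases
    }

    /// Blocks, by polling, until a database is free, then hands it out.
    func acquire() async throws -> String {
        while true {
            if let database = available.popLast() {
                return database
            }
            // Nothing free right now: sleep 100 ms and check again.
            try await Task.sleep(for: .milliseconds(100))
        }
    }

    /// Returns a database to the pool once a test is done with it.
    func release(_ database: String) {
        available.append(database)
    }
}

/// Sketch of a helper a database test could run its body through.
func withPooledDatabase<T>(
    _ pool: DatabasePool,
    _ body: (String) async throws -> T
) async throws -> T {
    let database = try await pool.acquire()
    do {
        let result = try await body(database)
        await pool.release(database)
        return result
    } catch {
        await pool.release(database)
        throw error
    }
}
```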
- The test itself doesn't need to be aware of that happening?
- No, there's no change within the tests at all. The only thing that's changed is in our all-tests suite: at the start we set up the pool, and we optionally tear down the pool afterwards. So that's the only change; the tests themselves have not changed at all. I mean, obviously the only thing you do is let them run in parallel, right? Because they can't currently run serially with that system. It could probably be made to work serially, but I didn't bother figuring out why it isn't working. So the --parallel or --no-parallel flag is really how you control whether you run against the pool or not.
It's been fiddly, but it works really well and it's really stable. I think the biggest problem was that, as you can imagine, when you have a test suite of almost 800 tests that have always run sequentially, you will find stuff that inadvertently relies on running serially. So there were a couple of cases where we, for instance, had metrics, and those had always run serially, so no crosstalk was possible. Metrics is our system of counters, you know, like request counters and stuff like that, and we have some tests that check that request counters are incremented properly. When tests run serially, you can just take the delta and you're fine. Now, suddenly, with parallel testing, it would happen that different tests would increment counters, and the tests that actually look at the counters saw wrong values. So there were some parts of the tests where I had to make sure that we keep things separate, you know, make sure that they're safe to run concurrently, and adapt that.
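To make that crosstalk concrete, here's a minimal sketch, with a hypothetical counter and test rather than the project's real metrics code, of the kind of delta-based assertion that is fine serially but can fail under parallel execution:

```swift
import Testing

// Hypothetical shared counter standing in for a metrics system.
actor RequestCounter {
    static let shared = RequestCounter()
    private(set) var value = 0
    func increment() { value += 1 }
}

@Test func requestCounterIncrements() async {
    // Delta-based check: fine when tests run one after another...
    let before = await RequestCounter.shared.value
    await RequestCounter.shared.increment() // stand-in for "handle a request"
    let after = await RequestCounter.shared.value

    // ...but under parallel execution another test may also have incremented
    // the shared counter in between, so `after - before` can be greater
    // than 1. The fix is to keep that state separate per test.
    #expect(after - before == 1)
}
```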
That was pretty much the work that had to be done to make this work reliably. And other than that,
it's gone well. The only remaining issue is, as you can imagine, stuff like that is always harder
in CI. I found out some weird stuff about bringing up additional databases in CI. They weren't discoverable as I thought they would be; the ports weren't mapped as I thought they would be. And there's a bit of work left to be done to make that work, because when you run it locally, I use Docker commands to actually find the databases. It just runs docker ps, looks at the output, and because we have a certain naming scheme for the databases it can automatically discover which databases are there. Now, our CI doesn't have Docker, so it can't use that mechanism. So you have to sort of prepare a set of databases where you know the names in advance, and then it's a bit more fiddly. I completely forgot that we don't have Docker there. We probably could make Docker available, but I don't really want to go down that rabbit hole; I'll probably just use hard-coded names and make it work that way. But it's even debatable
whether in CI we gain all that much, because the build takes 12 minutes in CI and then the tests take another eight. We can probably get the eight down to maybe one or two minutes in CI, but you can see immediately that's roughly 14 minutes instead of 20 overall. The gains aren't as big as locally, where running the tests in seven seconds rather than 40 is a huge difference; that's going to make a big difference to how often we actually run them all. I think that's the main benefit we get out of this for our day-to-day work, really. Well, you know what this means, don't you? No. We need to
write more tests. Oh, we always need to write more tests. I was bracing for something terrible,
but I'm totally on board with that. No, no, no. Yeah, we need literally double the tests, because we need something to keep us busy. What are we going to do all day if we can't run the tests? No more excuses for skipping tests because they use the database and might be slow.
Exactly. Yeah, exactly. That's great. I'm happy to hear that. Yeah, it's a shame that the CI
won't be quicker because that is so slow that it's a problem, but it's not an enormous problem,
but I certainly don't ever wait for CI to run. I immediately go and start something else.
Yeah, that's the pain. And I think swift-syntax becoming part of the toolchain will probably make the biggest difference there; that should shave a good five minutes off the build for the tests, I think. So I'm hopeful that we'll gain something, and then we hopefully end up
under 10 minutes. That's my hope in the near term. And I'm going to slightly change the subject now,
but that change to swift-syntax is what's needed to make macros also quicker to build and run, right? Yeah, exactly. Right now, as soon as you use a macro, or use a package that uses macros, even if you don't use the macros yourself, you end up building swift-syntax as part of your build. And that adds a lot to your normal builds and to your release builds as well. But that's a separate pipeline for us. So yeah,
good savings there on the horizon. So one thing that struck me as I was
searching for packages for this episode was I'm seeing more and more, and I've seen this for a
little while now, but it really struck me today: I'm seeing more and more packages that say Swift Package Manager only, with no installation instructions for CocoaPods or Carthage or anything like that. And I mean, that's no huge surprise, because
Swift Package Manager has now been around, and been usable within iOS and macOS projects, for several years, well, more than five years now. And so you'd expect adoption of it to be at the point where people might not bother with CocoaPods. But as I had this thought, I figured it might be worth reminding people. I know we did talk about this around the middle of last year, when CocoaPods made an announcement of what their support and maintenance plans were, but I thought it would be useful just to recap that. And actually,
since we last talked about it, the CocoaPods team have put together a timeline
for the shutdown of CocoaPods. So it's going to be no surprise to most people that CocoaPods will
shut down at this point. I think the writing has been on the wall for quite a while that Swift Package Manager is going to be the way to include dependencies in any type of Swift project, whether it's server-side or iOS or macOS or whatever. And it's always good to know that
timeline. So the basic timeline is this: in January of this year, an email went out to everyone who had contributed a podspec, so any package authors, basically, and they were informed that the CocoaPods repository was going to switch to read-only, with a link to the blog post. Nothing now happens until almost the end of next year.
In September or October 2026, another email will be sent out to everyone who has contributed a podspec, saying that they basically have a month before a test run of going read-only. Then, in November 2026, there's going to be that test run, giving automation a chance to break early. So Orta Therox, who's the person who wrote this post, gets a chance to test for everything that might break, and then the repository will be made writable again. And then finally, in December next year, CocoaPods will become read-only and effectively shut down at that point. It will still
be available. So none of your projects that use CocoaPods, if you have any left, are going to
break, because the pods will still be readable, but you won't get any updates. You won't get any new packages or anything like that. So it's always worth being aware of this far in advance. And
you've got at least 18 months if you do still use CocoaPods to basically make that switch to
Swift Package Manager. Right. Remind me, I'm always unsure: you still need it on iOS, right? I mean, if you want any sort of package management, other than pulling in libraries? You need CocoaPods? Yeah... but on iOS, SwiftPM can't build an iOS application from a package manifest. Ah, SwiftPM as a build tool can't build it. No, that is correct. But Swift Package Manager through Xcode or xcodebuild can. Right. Okay. Yeah. Got you. The package management part it can do, but it can't build the actual final product. It can't do the actual build. No. You can't express it in a Swift package manifest, but Xcode obviously has a project file that can. Okay. Yeah.
Exactly. And again, this is the kind of thing that will go away with the new build system,
which is coming, because it's a confusing situation that we're in at the moment, where we have two build tools and two different package management tools. It will be great to get all of this onto a single,
happy path through. If you would like to make an application, here is the build chain and
dependency chain that you can use to do that. So I'm very much looking forward to that problem being
solved. Yeah. And to have a one-to-one mapping: this is the command line you would use to do the exact same thing. Right now it kind of depends; there are two ways of doing it. Well, on iOS there isn't one, but on macOS there kind of is, depending on what kind of thing you're building. It's complicated. It is. And it really shouldn't be complicated.
This bit should be the easy bit. Yeah.
And I'm confident that we said this last time, but I'll reiterate it. I think the tremendous
effort that went into CocoaPods over the years absolutely has to be acknowledged as an outstanding
piece of open source work. They took what was an extremely difficult task. I mean, we think it's
difficult now. Before CocoaPods, this was Git sub-modules and source code, and they took that
process and made it just, well, possible for most people to do. And I think the way that project was
run was always exemplary and the support that they gave. And I just can't say enough good things about
the work that everybody who contributed to CocoaPods over the years did. Yeah. And quite
a challenging thing also because the upstream can change under you, right? You're always sort of
chasing new Xcode releases and stuff that might change in certain ways. And I find that super hard
to have to track that. And I think that consistency with how they're maintaining this has
gone right through to the shutdown plan, which is two years notice of a shutdown plan, which is
fantastic. Yeah, absolutely. There's commercial products that get shut down faster, right?
Exactly. I was going to say there can be so many products in the commercial world that just
disappear one day. Maybe they lose their funding or they decide that it's no longer worth it. And
then the next day they stop. Exactly. So, okay. So that's it: if you still have any projects that are on CocoaPods, make a plan to switch across to Swift Package Manager. Yeah.
Now's a good time. Right. There's another bit of news I saw on the social networks, a post on
Mastodon about the value of open source software. I'm not sure if you've seen this one. It's a
Harvard Business School and University of Toronto study to quantify the value of open source
software. Did you catch that one by any chance? I didn't. No, that's news to me. Yeah.
It's, I mean, this is, as you can imagine, an interesting question, but also a very difficult
question to answer, right? What's the value of open source software? This study is attempting
to quantify that. I've read the paper. It's as messy as you can imagine. There's a couple of
different data sources that have been used to try and get a feel for how to answer those questions.
I'm not sure how much to go into the details. It's an interesting read. I think it's probably
best to focus on the takeaways. So their approximation is that there's a $4.2 billion value in the initial creation of the open source software, and the value to the companies actually using it is much, much higher: that's estimated at $8.8 trillion. Right. Of course. And the way they actually determined the created value is by doing a labor cost replacement. That was going to be my next question. Yeah.
Yeah. So what they're effectively doing is looking at the lines of code in packages, and there's another study that has sort of established a formula. That's a big name for multiplying a few numbers together, but there's a formula to get from a number of lines of code to a value. And what that tries to do is estimate: well, if you had to reimplement that, how much would it cost? And that's the $4.2 billion. And it's worth noting that this is pretty much
a lower limit because they only looked at a smaller section and they left some open source software
out of the picture entirely. For instance, open source operating systems, which are not insignificant, were not part of the study. So you can already see, you know, this is in fact extremely significant. Yeah, kind of. Right. I found it curious why that isn't part of it, but you have to start somewhere, I suppose, and at least do the bits that you know you can do. I guess that was perhaps the approach there.
So they arrived at that. And then, I don't know exactly how they arrived at the multiplier, but effectively there are two ways of looking at the value created for the companies. One is: if you lost all open source software and it was then recreated in an open source fashion, that would be just the $4.2 billion and everything would be as before. But the multiplier comes in if every company had to actually re-implement all the open source software they're using internally. Right. And then you can see, obviously, N companies doing the same thing will multiply that by N; that's how you arrive at such a high multiplier, from $4.2 billion to $8.8 trillion. So that's that. Yeah. I thought it was
interesting, also interesting to see, and maybe an incentive for companies to look at that and be yet again reminded of the value they're reaping from open source software. And that's the problem with any story like this: it just doesn't get enough publicity. So nobody who would have the ability to pay any of that money will ever read that story. Yeah. I mean, absolutely. Right. Because it's free, right. It's so hard to compete with free, and it just doesn't even get discussed in that context. It doesn't enter the conversation in that context.
Yeah. It seems like the paper hints in a couple of places that countries or economic blocs like the EU are getting increasingly aware that open source is kind of infrastructure, and they are starting to fund open source and see it as something worth funding, because it's like building roads, virtual roads. Just like roads help your companies move goods, open source software helps companies move data and do their stuff, right? So there is an incentive for economic blocs to help fund open source software, and perhaps at some point in the future to look at getting specific, you know, diverting taxes towards that. You can imagine all sorts of things where revenues or income in certain areas get diverted, or put into pots of funding, for these sorts of programs. Because there are other interesting stats that came out of this, like 5% of open source authors create 96% of the value, at least of the parts that they looked at, which is extraordinary, right?
That's an amazing figure. Of course. Yeah. And I mean, on the other hand, it's very tempting to shoot down a paper like that and say, well, how could that ever be correct? But I'm not even sure the point is to be super correct, right? You need a baseline to make a point. Yeah. And you're effectively underestimating, and I think that's really important: it is wrong, but it's wrong in a good way. You're not overestimating the value, you're clearly underestimating it. So you're sort of setting a lower bound for what the value is, and that's already a tremendous value. So it gives you every incentive to help fund it further, right? To push this forward. So, because there was a formula in there that sort of converted lines of code into effort, I of course took it as gospel and took our 50,000 lines of Swift that we have in the Swift Package Index. And apparently there are 18 person-years of effort in it. Now, given that the Swift Package Index is five years old and we're two people, that's 10 person-years. That basically means we're 2x programmers, right? We've beaten the algorithm there.
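For the curious, here's a back-of-the-envelope sketch of that kind of lines-of-code-to-effort conversion, assuming a classic COCOMO-style model with its textbook coefficients. Whether the paper uses exactly this formula is an assumption; the point is just the shape of the calculation:

```swift
import Foundation

/// Basic COCOMO-style estimate: person-months = a * (KLOC ^ b),
/// converted to person-years. Coefficients are the textbook values.
func estimatedPersonYears(linesOfCode: Double, a: Double, b: Double) -> Double {
    let kloc = linesOfCode / 1_000
    let personMonths = a * pow(kloc, b)
    return personMonths / 12
}

let loc = 50_000.0 // roughly the Swift Package Index's Swift line count

// "Organic" project:       a = 2.4, b = 1.05
// "Semi-detached" project: a = 3.0, b = 1.12
print(estimatedPersonYears(linesOfCode: loc, a: 2.4, b: 1.05)) // ≈ 12.2
print(estimatedPersonYears(linesOfCode: loc, a: 3.0, b: 1.12)) // ≈ 20.0
```

Either way, it lands in the same ballpark as the 18 person-years mentioned above.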
Well, you've made the perfect ending there, but what I was going to wrap it up with was: if we write more tests, we're worth more money, right? Oh, absolutely. That's the other thing. And also, now that you have a formula that converts lines of code into effort, you know how you're often asked, well, how long is this going to take? Now you don't have to answer how long it's going to take. All you need to figure out is how many lines of code you will need, and then you can convert back. That's amazing. Estimating will become so much easier. It's amazing. Easy. Yeah. Easy. Fantastic. Shall we move on to some packages?
Let's do it. Do you want to go first? I shall do so. My first package is called just Codable. I mean, Codable, not 'just Codable', but you know what I mean. It's by Andrey Chernenko, and I've seen a few of these: it's using macros to help you with a Codable implementation, and I think we've even talked about a couple of these in the past. The reason I wanted to mention this one in particular is that I saw it has a @MemberwiseInitializable macro as well. And I thought, that's not something I'd seen before. It probably exists elsewhere; I just hadn't noticed it. But what that does is it allows you to tack it onto a struct, and then it will auto-generate the initializer. And the reason that's nice is because you can then just tack public in front of it, and then you don't have to generate it yourself and, in particular, maintain it. Right.
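To make that concrete, here's a minimal sketch of the boilerplate such a macro saves you from writing by hand. The type and property names below are hypothetical; this is just the hand-written version the macro would generate for you:

```swift
// Without a memberwise-initializer macro, a public struct needs this
// boilerplate written, and kept in sync, by hand:
public struct UserProfile: Codable {
    public let id: Int
    public let name: String
    public let email: String?

    // The implicit memberwise initializer is internal, so a public one
    // has to be spelled out manually and updated whenever a property is
    // added, removed, or reordered.
    public init(id: Int, name: String, email: String?) {
        self.id = id
        self.name = name
        self.email = email
    }
}
```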
You know how often it's really tedious to have a struct that suddenly needs to be public, because then you need that public initializer. And that is fiddly because, yes, you can auto-generate it once with the Xcode refactor tool, but that does it only once: if you add properties, then you need to go in and add them to the initializer as well. And this is nice because you can just tack this on and you don't have to do anything else; it'll always be in sync with your properties. So I think that alone would be nice to have as a standalone package. And I also wanted to mention, while talking about this, that there's currently a
Swift forums thread about the future of serialization and deserialization, which talks specifically about using macros to do effectively what this package is doing. So this is on the horizon to become a proper Foundation, or at least Swift standard library, thing at some point. So the Codable part will sort of transition at some point; all these packages will, I guess, be Sherlocked by the standard library. But this memberwise-initializable thing is definitely a nice feature that might then survive in a separate package.
- Sure. So do you know how well I know you? I spotted that package, I thought that sounds interesting, and then I thought: Sven's probably going to pick that. So I put it on my backup list.
- Thank you.
- You didn't let me down. My first one is a package called Aesthetic Text by Kyle Bashour. And I talked about this a little bit in iOS Dev Weekly, but I thought it was worth mentioning here too, because there are a couple of interesting things about it. First of all,
I think the package itself is great. What it does is: it's a SwiftUI package, a view modifier that you can add to any SwiftUI view. You just tack the Aesthetic Text view modifier onto the end of any text view, and what it will do is very cleverly work out where the line breaks should go in any text that was passed to it, so that each line of the text is approximately the same length. So you never get one line that's very long followed by a line that's very short, which is a common UI problem, especially when it comes to text that is centered in the middle of a view. If you have an empty state view or something that has some centered text in the middle of the view, having different length lines looks really awkward. So that's what the package does. And first of all, I think that's just a really great little tool that you can use to add polish to your apps before you
release them. But also, and in relation to my next package, which we'll talk about in a minute, this is a great example to me of a UI-based package. A lot of people are wary of adding UI-based packages, but it's the kind of package where, if it ever did get abandoned, or if anything happened to it so that you couldn't use it anymore, removing it would be super easy, because all that would do is put the text back onto uneven lines, which is slightly less polished but not the end of the world. And I thought it was worth mentioning for both those reasons.
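I don't know how the package implements its balancing, but for the curious, here's a hedged sketch of one common approach: keep narrowing the available width as long as the text still fits in the same number of lines, which evens the lines out. The measuring closure is a stand-in for real text layout, so this is purely illustrative:

```swift
/// Finds the narrowest width that keeps the text on the same number of
/// lines it needs at full width, which balances the line lengths.
/// `lineCount` is a placeholder for real text measurement.
func balancedWidth(
    fullWidth: Double,
    lineCount: (_ width: Double) -> Int
) -> Double {
    let targetLines = lineCount(fullWidth)
    var low = 0.0
    var high = fullWidth
    // Binary search: below the answer the text wraps onto extra lines.
    for _ in 0..<20 {
        let mid = (low + high) / 2
        if lineCount(mid) > targetLines {
            low = mid   // too narrow, text needs more lines
        } else {
            high = mid  // still fits, try narrower
        }
    }
    return high
}
```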
- I think where this is also great is in a localization context, because you might do the proper line breaks in your primary language and then forget about all the other languages, which potentially have quite different word lengths and sentence structure, and end up with quite different layouts.
- Yeah, of course. That's a great point, a really great point. So yeah, that's Aesthetic Text by Kyle Bashour, and it looks great. There's also a great screenshot in the readme. I love people who put a good screenshot or even a video in their readme files; it's always good to see. Quite often, when I'm looking through package readmes preparing for
this podcast, if a UI library or something related to the UI doesn't have any kind of screenshot of
it, I'm afraid, rather than pursue it, I just close the tab.
- Well, you'll be happy to know that this is the fourth on my list and I put it fourth because I
kind of expected you to talk about it.
- Because you knew I would pick it.
Well, I'm glad we're both living up to our reputations.
- Yeah, there we go. We have nicely segmented ourselves into certain kinds of package picks.
- Exactly, yeah.
- Although we might have a conflict coming up. Let's see. I sort of suspect we have one that we
both...
- Setting up a suspenseful finish. Yeah, exactly.
- I don't think my second one falls in that category. It's called RTSan Standalone Swift by Joseph Kawa. So this one is a bit difficult to talk about. I think this is actually one of Dave's kinds of picks: in all likelihood, you won't need this package. Very few people will. It's a fascinating package though. So this is a package that uses a
feature of Clang, and the feature is the real-time sanitizer. Now, most people will probably be aware of the sanitizers in Clang that we use commonly, like TSan, which is the thread sanitizer, and ASan, the address sanitizer. The thread sanitizer highlights cases where you might have race conditions and stuff like that, concurrent access to shared data structures, and ASan catches cases where you write over memory you shouldn't, stuff like that. And this is sort of similar. It uses the real-time sanitizer feature of Clang, where methods or functions can be annotated with a non-blocking attribute. And what that means is that they have a deterministic execution time guarantee: they will return deterministically, they won't just hang. This package comes with a binary library and a macro, which you can add to a function of yours. So you can add non-blocking as an annotation, and what you're expressing there is that you don't want this function to call any blocking API. And blocking can be stuff you wouldn't expect. I mean, in the example I saw, print is actually, in that context, a potentially blocking API, because under the hood it might hang because certain buffers are full or something. There's no guarantee on the operating system level that print will return deterministically. And there's malloc and lots of functions like that. So if you have code that needs certain real-time guarantees, that needs to finish in a deterministic execution time, you can tack this on. It'll give you runtime checks, and you can configure what it does: it'll either trap and give you a printout, or it'll run and, at the end, give you a summary of cases where you've actually called API that didn't meet that guarantee.
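As a purely illustrative sketch, here's the kind of function the real-time sanitizer is meant to police. The function below is hypothetical, and the package's actual macro or attribute is only described in the comments, since I'm not reproducing its API here:

```swift
import Foundation

// Hypothetical audio-style render function that is supposed to have a
// deterministic, bounded execution time. With the package above you would
// annotate it with its non-blocking macro; the real-time sanitizer would
// then flag the calls marked below at runtime.
func renderAudioBuffer(into buffer: inout [Float], phase: inout Double) {
    // Allocation: this calls malloc under the hood, which has no
    // deterministic-time guarantee.
    var scratch = [Float](repeating: 0, count: buffer.count)

    for i in buffer.indices {
        scratch[i] = Float(sin(phase))
        phase += 0.01
    }
    buffer = scratch

    // I/O: print can block if output buffers are full, so it also has no
    // guarantee of returning in deterministic time.
    print("rendered \(buffer.count) samples")
}
```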
I found that really fascinating that you can expose that sort of stuff on the Swift level
and then use it at such a high level and get the feedback from Clang. And I found that really
interesting. And I guess it's also a look into the future, what might be possible with other
instrumentation like that, that you can then expose in macros, give you really powerful tools
to say, "Right, this is a critical section in my app. I can't have this block." I can think of lots
of industries where that's super interesting. There's obviously other programming languages
that have that as a feature. There's real-time operating systems that have that as a feature.
I mean, this is a part of that story, Swift as a really powerful and safe language. And
you can sort of, I guess, in certain ways, get part of that feature set and pull it into
a Swift application via that route. Super interesting, I thought.
- Yeah, absolutely. Fascinating. My next package is in contrast to my previous package, which was the one I said is the kind of thing that's easy to remove if you ever need to. This one is a package that I really like the look of. It's called Swift File by Pelagornis. And it's basically a refinement of the API in Swift for accessing, creating, writing to, and reading from files on disk. And I've always found those APIs a little awkward to use, both in Swift and in Objective-C, because they're ultimately calling the same functions. And I know they have got much better since the adoption of Swift, but I still find them a little clunky to use in certain cases.
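For contrast, here's a minimal sketch of what the stock Foundation calls look like today. This is the standard API, not the package's; its own API is more concise:

```swift
import Foundation

// Writing and reading a small text file with plain Foundation.
let directory = FileManager.default.temporaryDirectory
let fileURL = directory.appendingPathComponent("notes.txt")

do {
    // Write: encoding and atomicity have to be spelled out every time.
    try "Hello, file!".write(to: fileURL, atomically: true, encoding: .utf8)

    // Read it back.
    let contents = try String(contentsOf: fileURL, encoding: .utf8)
    print(contents)

    // Clean up.
    try FileManager.default.removeItem(at: fileURL)
} catch {
    print("File operation failed: \(error)")
}
```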
And what this package does is it gives a really streamlined, super concise API to create files. So you can, for example, call a create-file function, you can call file.write, and path.temporary will give you the temporary folder, things like that. So you can really streamline that file-handling code, and I really like that. And I hope that one day Swift and the standard library tweak those APIs, as we all hope that all APIs get easier to use, right? There's no pressing problem there, but it's the kind of thing that could of course be improved.
But this package, I would be really hesitant to place at the core of any app, because if you're
doing a lot of file management stuff, it's likely that that code is going to seep into your
application in a way that might be difficult to remove in the future. So I really like this
package. I think it's kind of interesting to see what this API might look like, but it's one of my
traditional 'it's good, but I maybe wouldn't recommend it' packages.
Actually, yeah, we didn't have a conflict after all. You didn't pick the one I thought you might pick. I've still got one more. I've still got one more. Oh, you still have one. All right. Oh, right, I started this time around, so I might still. But I'm fairly confident that my last one is not going to be it.
Okay. Well, let's see. So my third and final pick is RenderMeThis by Aether. That's the username associated with the package. It was on my shortlist, and my next package is by the same author.
Oh, there you go. Fantastic. Nice. So this is a really interesting package. Do you know how SwiftUI
is magical in that it detects what has changed in a view model and then updates the UI bits that
are affected? Absolutely. Yeah. And you just see it happen, right? You see the UI update and you
think, oh, that's fantastic. The problem is it's quite easy to accidentally oversubscribe and then
trigger way more changes than required. And that's not something you necessarily easily see because
it might just re-render and it doesn't look that different. And this package is a great tool because
it provides both a view builder wrapper as well as a view modifier to render check a view hierarchy.
And by render-check, I mean it will flash the parts of the UI that have just updated in red, and the flash then fades to translucent. It's a really nice effect. If you
look at the ReadMe, there's a video showing it in action. It's really great. I mean, if you look at
the ReadMe, you know exactly what it does immediately just by looking at it. And it's so
easy to add, right? You just tack on a view modifier or just wrap it in a view builder if you
prefer to do it that way. And then it just works. It does its thing and shows you what actually is
happening. And you can still interact with your app, right? So in the example, there's someone moving a slider, and you see a label changing and obviously the slider itself changing, but you also see all the parts of the app that might be affected by the slider at a glance. So I
thought that was really great. Yeah. Well, I can say it was on my shortlist. And I think both of
the packages, because my next package is by the same author, as I said, have great ReadMe files.
Both of them jumped out at me in terms of potential to talk about. And so that's always a good sign.
The package I'm going to talk about though is called GlowGetter,
G-L-O-W-Getter. And it is also a SwiftUI package. But it does something really interesting. It
kind of enables HDR mode on your phone, or on your Mac if you have an HDR-compatible display, and allows views in your SwiftUI view hierarchy to become brighter than
they would normally be able to with standard APIs. So normally the UI that you create can only become
as bright as the screen is currently set to. But if you're doing an HDR image, like if you go into
the Photos app, you'll see your phone screen quite visibly brighten when you show a picture that was
taken with HDR information in it. And what this allows you to do is apply that to any view with a .glow view modifier, and the strength parameter that you pass to that modifier determines how much it's going to glow and kind of stand out. And I've never seen anything
like that in terms of SwiftUI view modifiers, but I have seen an app which is kind of similar.
And in fact, at the bottom of the README file for GlowGetter, there is an acknowledgement to Jordi Bruin and Ben Harraway, who wrote an app called Vivid, which allows you to take your MacBook's HDR display and push the brightness beyond what is standard for the screen. And it does that by enabling HDR mode and then using that to increase the brightness. And it says that the GlowGetter repository adapts some code that was built for Vivid, the code that lets you use your MacBook at the maximum brightness. So I thought it was kind of an
interesting package in terms of I've never seen anybody bring this into an environment where it
would be this easy to apply to something. - Nice. Beautiful icon too in the README, the GlowGetter icon. - Yeah, really. Yeah.
So I think it's kind of unheard of actually that we have both picked a package by the same author
in the same episode. And back-to-back packages by the same author. So Aether is truly the star of
this episode. - Absolutely.
- Great. Well, let's wrap it up there then, shall we? This is, I think this is actually the first
one in a little while that's been bang on schedule. So we are running exactly to our
self-imposed clock. - Back on track.
- Unlike normal. Back on track. Yeah. So you should expect, here's me confidently saying,
you should expect another episode in three weeks. Let's be confident about it. But until then,
we will say goodbye for now. And I will see you next time.
- See you next time. Bye-bye. - All right. Bye-bye.