Episode 18

News about AWS Lambda, Graviton, Google SEO, Swift 5.6/5.7 compatibility, as well as package highlights from the last couple of weeks of the Swift Package Index.

I think we should start by saying Happy New Year, shouldn't we?

I think that's a good idea, yeah.

This is the first episode we have recorded in 2023,

so Happy New Year to everybody listening,

and of course, Happy New Year to you as well, Sven.

Yeah, Happy New Year.

We had an extra week off there due to illness on my part,

but we are back now, all healthy again.

And, well, I guess the extra week helped a bit,

because the news was light over the holiday break.

But I think we've got a couple of things

to talk about anyway, don't we?

- We do, yeah, and certainly I enjoyed taking a little time

away from work over the holidays.

It was nice to take a break away from everything.

So I think you feel the same way about that as well.

Although obviously your illness towards the end of it

is less than ideal.

- The terrible thing is you lose any sort of fitness

so quickly, it's really harsh.

I mean, you can hardly walk up a flight of stairs

after five days in bed.

It's terrible.

But yeah, there we go.

Yeah, so what did we do actually?

What's new in Swift Package Indexing land?

For my part, I had a bit of fun working with AWS Lambda.

That's perhaps worth talking about for a moment because I

found that quite interesting.

Actually, that was the first time I actually played around

with it.

And the reason we're looking into this is we are having some

trouble uploading our doc sets, the DocC archives, to S3, where we're hosting them,

when they're very large. And we do have a couple of cases where they're really large.

And in particular, it's not just the size itself, it's the number of files. One of them has

more than 100,000 files and gigabytes worth of data. And the problem is our builders who

are doing the upload sort of, you know, they have like a 15 minute budget to do all their

work and that's just not long enough to do that. So the idea was to zip it all up, ship

the zip file which is fast and then have a lambda deal with the upload part of it. And

that's pretty much set up. I fought a bit with AWS and its myriad of settings. It's

like always daunting when you fiddle around with AWS because everything comes in dozens,

you know, there's dozens of permissions on everything, there's dozens of services.

I really often don't understand where to even start. But I had a lot of help here, especially

from Fabian Fett and Matt Massicotte, who were really helpful in getting me set up and

having a system in place to actually deploy easily. And deploying is actually quite interesting.

The first thing I did was set up deployment into a Lambda like you normally would, with a GitHub action on a Linux runner, an x86 Linux runner,

that then builds, well, not a Docker image, it builds a zip file with an x86 binary.

It uses a Docker image to build this, but then extracts the binary, and that bundled-up zip gets uploaded to AWS for deployment.

So all of this is x86 based.
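As an aside, the Lambda code itself is only a small amount of Swift. A rough sketch of the shape, using the closure-based entry point from swift-aws-lambda-runtime's 0.x releases (the newer async handler API looks a little different); the event struct and the unzip/upload body here are placeholders rather than our actual code:

```swift
import AWSLambdaRuntime

// Hypothetical event shape: where the zipped DocC archive lives and where it should go.
struct DocUploadRequest: Codable {
    let sourceBucket: String
    let sourceKey: String
    let targetBucket: String
    let targetPrefix: String
}

// Closure-based entry point from the 0.x releases of swift-aws-lambda-runtime.
Lambda.run { (context, request: DocUploadRequest, callback: @escaping (Result<String, Error>) -> Void) in
    // The real work would happen here: fetch the zip from S3, unzip it, and upload
    // the individual files, for example with an S3 client library such as Soto.
    context.logger.info("Unzipping \(request.sourceKey) into \(request.targetPrefix)")
    callback(.success("done"))
}
```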

And obviously when you're working on an M1 Mac as I am,

you really want to improve your turnaround,

rather than have to trigger a GitHub Action that then does it on GitHub's side

and then triggers the upload.

You really want to have that running locally and then push directly.

Yeah, like that process of having to invoke a whole mechanism

of CI and uploading into AWS, I can't imagine actually being productive in that kind of

environment. So local development is an absolute must, I agree.

Yeah.

Yeah.

But this isn't even as much about local development, because the testing I did still

has to be pretty much in the Lambda.

And that's mainly because of my network connection.

I can't actually try any of this stuff locally because, you know, I'd be sitting there for ages.

My network connection is terrible.

but I still wanted to improve the turnaround

of not having to go through a GitHub action

because the runners obviously aren't as fast

as an M1 Mac building the whole thing

and then uploading that.

And the interesting bit is that you can actually build

an ARM64 binary using the ARM Linux Docker images.

Amazon Linux has ARM64 Docker images,

and you can use those to build natively on an ARM Mac,

bundle up an ARM binary, and upload that as a Lambda,

because AWS have Graviton CPUs,

which are ARM-based CPUs, and they run it.

And they run it perfectly fine.

- Oh, interesting.

- And they run it at the same speed.

So in my testing, I was unable to tell

which architecture was actually running it.

And on top of that, the Graviton CPUs

are actually 20% cheaper than the x86 ones.

So I think when we're going to deploy this,

I'm going to spend a little time, not a whole lot, but a little time trying to actually

change our GitHub action setup to not build on x86 Linux, but instead build the whole

thing on an ARM, on a Mac, if I can.

I'm not sure if GitHub actually have ARM Macs at their disposal or if there's a way, an

easy way to actually maybe self-host.

That's something I need to think about, because a 20% cost saving is significant.

That's an amount you probably want to spend a little time on if you can.

So I found that really interesting how seamless you can deploy to a Lambda with stuff that you build on an ARM Mac.

I think your real reason for doing that was your New Year's resolution to never again use an Intel CPU, right?

Well, where it gets really interesting is that you then again develop on the same architecture as you ship.

That hasn't been a problem for us for the vast majority of time.

I mean, most of the time we actually don't notice that at all.

It makes no difference for us.

I mean, we don't even develop on the same architecture.

We don't even develop on the same OS.

So what we actually do is we develop on ARM Macs with Xcode

and run stuff on macOS, but we actually deploy on x86 Linux.

So we're actually crossing the boundaries twice.

And the only time that actually has been a problem

was when there were, I don't recall exactly,

I think it was some issues with our tests and concurrency,

where I really wanted to run our actual Docker image

to see what was going on, because the only errors we got,

the only failures we got were in CI.

I was never able to reproduce it locally on macOS.

And that was the time when I really thought,

I wish I could just run the same thing locally,

because what I had to do is run the whole thing

on one of our Linux VMs, on our hosted VMs,

through remote VS Code into a Docker container.

It was a whole thing to try and get this running

and to try and reproduce this issue

on the actual target platform with the target OS.

And being able to take one of those out of the equation

would be really convenient,

I think, at least in those cases it would be.

- Interesting.

Because if you think about it, you could host not just Lambdas but any service,

any Docker-run service, on a Graviton EC2 node, and then you're only crossing the boundary

in terms of your OS, not in terms of your architecture, and you get a 20% saving.

That's actually quite significant and really interesting.

And these, I think, are Graviton 2 nodes, and I think they even have Graviton 3 now, which is

promising to be more performant.

So I think that's really interesting

what's happening there in the server side space with ARM.

- I've not come across this Graviton thing.

Is that the brand?

Is that a manufacturer?

- That's what Amazon call their ARM CPUs, yeah.

- Oh, okay, right.

I haven't done a lot with AWS at all actually.

And so I guess that's the first time

I've come across that term.

The other thing that's worth briefly talking about with this feature is that,

as I understand it, at least when we ship this, we're going to ship

it with effectively two code paths.

So every documentation archive which is under a threshold, I believe it's

500 megabytes, is going to still use the old process, at least initially.

And then only documentation archives which are over that threshold

will use this zip up and Lambda extract onto S3 process.

But as I understand it, that's really there to be

a little bit of a, let's not throw all our eggs

into one basket all at the same time.

And over time, I think if the Lambda version is successful

and if it works as we expect it to,

is the plan to move across to always using that,

or are we always gonna have two code paths, do you think?

I think we'll lower the threshold bit by bit.

I mean, right now, by setting the threshold

where we start doing this at 500 megabytes,

we're actually not changing anything

because we have a hard limit of 500 megabytes right now.

- Right, yeah.

- Where we just don't host

doc sets larger than that.

There's one exception, and that's the one

that triggered this whole exploration.

But the nice thing is, we're actually,

even if this goes horribly, we're no worse off

because right now we're not even accepting them.

Right.

And then we're starting with the worst case.

So, you know, once these work, we can just lower the threshold

and onboard more and more packages into this code path and retire the old one.

It does, you know, have the downside of a bit of extra complexity

by adding another step to the chain.

But if we're honest, having the builder do two things isn't great for other reasons either,

because if a build succeeded, it doesn't necessarily mean

that the doc upload succeeded, and it's sometimes hard to tell.

So we're sort of blurring the lines a bit

in the success state of that stage.

And it's better to actually have a separate stage

and then subsequently also have that stage report

its success and failure separately.

- I believe this is also going to introduce a,

it'll be very difficult to notice in the real world,

but it's also going to introduce a very slight delay

between when we claim documentation to be available and when documentation will

actually be available. Because the way that it works at the moment is the

documentation gets uploaded as part of that build process. And so we go the

whole way through the build process, then the whole way through the

documentation generation, and only when the whole thing has been uploaded to S3

and we're all completed do we ping back to our server and say the build is done,

you can now mark this package as having a new set of documentation.

And unless you've done something that I'm not aware of,

we're now going to be in a state where we'll do that ping back to the server

at the point that we kick off the Lambda execution.

And so for however long that takes,

the server will think that there is documentation there,

but it won't yet be there. Is that fair?

Yeah, the worst case is if I can get the runtime down a bit,

which is the last bit to fix here, it might be a couple of minutes gap.

But that's the thing that we can ultimately address by having the documentation stage report its result back.

And then we can gate the showing of documentation on the success of that stage rather than the build stage.

And that will actually then get everything back into alignment again.

Right, that's good. Yeah.

And like a couple of minutes here and there,

if somebody clicks a link to a version

that has literally just immediately become available,

and they get a 404, and they hit back,

and they wait a couple of minutes, and they hit it again,

it's not the end of the world.

It's not a terrible situation,

although it's obviously not ideal.

- Well, the thing is with a normal package,

like less than 100 megabytes,

that isn't going to be minutes.

It's going to be 10, 20 seconds, perhaps.

I mean, in my testing with a small package,

the Lambda trigger is near instant

and the unzipping and uploading takes like 10 seconds.

So I don't think in practice

for the vast majority of packages,

you will actually be able to hit that.

So I think we should be fine.

And in cases of these large packages,

again, we're probably no worse off

than not being able to ship them at all right now.

- That sounds like you're setting me a challenge

to find a package and get the 404,

I will report it as a GitHub issue as soon as I do.

- I will angrily close it immediately.

(both laughing)

- Well, that's interesting.

And certainly, I think, you know,

obviously we've been working on documentation now

for quite a long period.

It has been a significant, really significant feature.

And at every step, you know,

we've shipped a version of this feature and we're iterating on it over time now.

And I feel like if we can host really large documentation sets, like

we're attempting to do here, that is a great step forward. And it's also

the kind of thing that we wouldn't be putting all this effort

into enhancing if people weren't using it. You know, we've seen

great adoption of package authors opting into documentation generation.

And that's what makes us want to make sure that we're doing the best possible job we can of hosting that documentation without any hassle for the package author.

Yeah, exactly. The big doc sets are the interesting ones, right? They have lots of documentation. So you want to be able to host them and make them available. So yeah, definitely.

Right. So those were my travails with AWS Lambda.

I think we talked briefly about some Google timestamp or Google SEO stuff last time.

We did talk about it last time. So we talked about the canonical URL issue, which is where,

and I won't go over the whole thing again, but if you listened last time,

it's basically every time we do upload a set of documentation and we're indexing that or allowing

Google to index that documentation, it's seeing lots of copies of either

identical or very, very similar sets of web pages and it's actually penalizing

our search rankings, and in fact it's not even penalizing, it's excluding them

from the Google index because it thinks it's spam, basically. It's just

repeated content, just some random website trying to get SEO juice,

and that's what it thinks we're doing.

And we talked a little bit last time about the fact that we're going to have to put a

canonical URL on the latest set of documentation and have all the older sets point to that.

And Google will, at that point, know that those are archived sets

and this is the correct version.

And hopefully it'll then start indexing this better, or indexing it at all in some cases.

But we've also been fighting with Google a little bit on some other metadata, which we

added into the index a few months ago.

And this is an issue that has been open and closed too many times, I would say.

So the idea originally was Google has this thing where if you tell it the date

that this page was last updated, it will put that date on the search results in Google.

So you might see a package, and what we were intending was that the

date it would display was the date of the last release of the package.

So you can, from Google search results, start to see, is this something I actually

want to click on or should I be looking for a different link?

And the first attempt that we had at solving this was we used something called

JSON-LD, which is in a webpage.

So it's not a separate API request.

It's some JSON that you embed inside a webpage that Google and other things can look for

that basically gives structured data to bots that are reading the page.

It uses the schema.org definitions for describing various bits of

data.

So we describe every package as a SoftwareSourceCode and we supply various bits of

information, like for example, the last date it was updated and the language it

was written in and, you know, various other bits of information.
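To give a rough idea of the shape of that structured data, here's a sketch of the kind of fields involved, written as a Swift dictionary purely for illustration; the field values are made up and it's not our exact markup, which is embedded in the page as JSON-LD:

```swift
// Roughly the kind of schema.org metadata a package page embeds as JSON-LD.
// Values here are illustrative, not taken from a real package page.
let structuredData: [String: Any] = [
    "@context": "https://schema.org",
    "@type": "SoftwareSourceCode",
    "name": "SomePackage",
    "programmingLanguage": "Swift",
    "codeRepository": "https://github.com/example/SomePackage",
    "dateModified": "2023-01-05"
]
print(structuredData)
```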

That's been there for a while now, and Google should be using the

dates that are in that JSON.

But if you read the Google documentation, it says it might not,

because anything that's not visible to humans, Google doesn't like to trust,

because you can try and trick Google by putting a whole load of stuff in

secret or hidden tags that humans will never see.

And a lot of people use that for bad purposes, you know, of course.

And so what they suggest you do is put, in text on your page somewhere,

"last updated at" and then a date.

And we did that a few months ago, and I was checking on it every few weeks to see if it

was updating.

And sure enough, a lot of packages started showing dates on their listings,

but not all packages.

And it was very consistent.

It wasn't that some were coming and some were going.

It was just refusing to do it for some packages.

And so I tried all sorts of stuff in this issue.

I tried moving where that date was placed on the page, because originally I'd put it

right down at the bottom of the page.

And I'm thinking, well, maybe Google is looking at where it's placed, maybe it knows it's

right down at the bottom and is saying, well, you should make it

more prominent.

So I moved it up onto the sidebar and it was there for a little while.

And then of course, every time you make any change, you have to wait a month or so for

Google to re-index; you know, it doesn't instantly make these changes.

And so it's been a bit of an ongoing process.

Anyway, just before the end of last year, I had, um, one last idea, uh, as a result

of a conversation with a friend of mine.

And the idea was to put the date in a time tag instead of a p tag or

a span or whatever, and if you use a time tag,

there's an attribute on it called datetime, and you can put a machine-readable date in that

as well. And I think it was you, Sven, who checked this afternoon whether that had

any results, and one of the two test packages now has a date on it as a result of

that change. So that's very promising. And I think over the next few weeks I'll

watch that and see if that filters through to more packages that didn't

have a date on them. So you wrestled the Google machine into accepting

hard dates? Yeah, I mean you can never really tell. And this is quite in the

weeds. Like this doesn't affect anything that's actually on our package page. This

is only inside the Google search results that you'll see, you know, if you do

a search for something and the Swift Package Index happens to be a search result that it

is going to suggest to you, it's at that point that we're trying to put the date there.

So it's kind of minor really, but I'm glad that this is finally looking like it's potentially

going to work.

Is there no way in the Search Console to see what it actually saw?

Like also the JSON-LD stuff you mentioned earlier, is that actually exposed somewhere,

what it actually crawls out of a page?

Not in Search Console, but there is a tool that Google have where you can give it a URL

and it will tell you what it sees in terms of JSON-LD.

Right.

Like an internal tool or something that you or we could use to see what it actually sees?

Yeah, no, and that's been working fine.

So the JSON-LD, there's actually been no changes to that.

And I checked that it could see all the JSON-LD, and it's very happy with the JSON-LD.

That's certainly giving us points

with Google for it to think that we're

a good citizen, you know. But what it then

says is, well, that's great and

you should definitely do that, but you

should also put the date on the page

somewhere. Well, slowly but surely we

get the machine to work in our favor.

So shall we do some package

recommendations? Yeah, maybe one last thing to mention: we recorded an episode on Leo Dion's

Empower Apps podcast last night. That was a really fun conversation and it should be coming out later

this month, I believe. So keep your podcast players primed and have a look into Leo's feed,

not just for us, but in general. I think this is maybe a good time to say that I've been listening

to Leo's podcast for quite a while and I'm always impressed by the people he finds, often voices I'd

never heard from before, and it's often really interesting topics. A while back he had someone

talk about App Clips, which was really interesting because I hadn't heard anyone talk about them in

a while. It's a really great podcast. Give it a listen. Yeah, it was fun to

record with him yesterday. All right so let's do some package recommendations. Well I'm going to

stop you yet again.

I'm really keen to get to the package recommendations.

Yes, I know, I know. I'm going to keep you just a moment longer.

I have a one-question little quiz.

I remember a while back we were wondering,

we were talking about our Swift version support

and the question came up, well, how many packages

are there actually that support 5.6 but not 5.7?

- Hmm, interesting, yeah, I remember.

- And I had a little dig into our database,

wrote some horrible SQL to get the answer to that.

What do you think, how many packages

support 5.6 but not 5.7?

- I feel like most 5.6 packages,

'cause 5.7, I don't, my memory may be failing me here,

But I think 5.7 was a fairly mild release

in terms of the possibility to break anything.

I think it added some stuff.

So I'm gonna say that it's really quite high.

I'm gonna say it's 90%.

In fact, no, I'm gonna say it's 95%.

- Right, so you're saying 5% of packages

are incompatible with 5.7, but--

- It broke 5%, yeah, exactly.

- It's actually less than 1%, it's 46 packages.

And yeah, you're right, I actually looked at

what they were failing with because I couldn't think of anything that would cause a failure.

And a common thing that jumped out was "ambiguous use of split"; split(separator:maxSplits:omittingEmptySubsequences:)

is the actual call.

This is a function on sequences.

And I guess there is an overload or something that packages had defined themselves.

I noticed that a lot of the packages were server-side and Vapor packages that perhaps had this defined

themselves, or were using a package that had this defined, maybe pulled it forward,

and now it's conflicting or something.
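For anyone curious what that looks like, here's the standard library call in question and why a project's own helper can suddenly clash with it. This is a sketch; the exact overload a given package collides with will vary:

```swift
let line = "a,b,,c"

// The standard library method the error message refers to:
let parts = line.split(separator: ",", maxSplits: 1, omittingEmptySubsequences: false)
print(parts) // ["a", "b,,c"]

// Swift 5.7's string processing additions introduced new split overloads that accept
// Collection and Regex separators, so a package that had declared a similar-looking
// split extension of its own can now hit "ambiguous use of
// split(separator:maxSplits:omittingEmptySubsequences:)" at the call site.
```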

I mean, it doesn't look like it's a huge thing to fix, but it did actually flag a real problem.

And that made me think maybe it might be interesting to look at these whenever there's a transition

to see if these are happening, if there's a common theme and maybe make a transition

guide for people to immediately sort of know, all right, this is what I need to do.

I haven't given this any thought.

This is the first time I've even had this idea.

But I wonder if there's any kind of way to turn that complex SQL that you wrote for

this query into a little interactive web page where you could pick a source version and

a destination version and maybe get some kind of list of compatibility information.

Well, we definitely need to tweak it because that ran for 10 seconds. We can't run that live.

Oh, it was a 10 second query?

Yeah, yeah.

Wow, okay. Yeah, that's terrible.

It was horrible. I just threw it together real quick and did it in horrible ways, and it

isn't shippable like that.

And I'm not even sure what that page would actually show

and whether it would be something that would be useful enough to make it a page on the site.

But it's maybe worth spending five minutes thinking about it.

Yeah, I mean, in the end, it's going to be fairly static, right?

That only changes with builds, so it's probably something that could be queried

every once in a while and then just kept in a materialized view or something.

Right, yeah.

Something persistent, so you don't re-query every time.

Yeah, so there you go.

- Is it finally time for the package recommendations?

(laughing)

- I'm gonna let you go first now,

so you don't need to wait any longer.

What have you got?

- I'm so desperate.

So my first recommendation for this week

is a package called Foil by Jesse Squires.

It's been around for a couple of years now,

but just at the end of last year, 23rd of December in fact,

they released a 4.0 milestone.

And so that popped it into my feed.

And it was an interesting package to read about.

So basically it's a property wrapper,

and it's for storing settings in user defaults.

So you can take a class

and you can add its WrappedDefault property wrapper

and give it a key to store in user defaults.
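Something in this spirit, going from memory rather than from the README, so treat the wrapper names and parameter labels as approximate:

```swift
import Foil

final class AppSettings {
    static let shared = AppSettings()

    // Each property is read from and written to UserDefaults under the given key.
    // Wrapper and parameter spelling from memory; check Foil's README for the current API.
    @WrappedDefault(key: "flagEnabled")
    var flagEnabled = true

    @WrappedDefaultOptional(key: "username")
    var username: String?
}

// Reads and writes go straight through to UserDefaults.
AppSettings.shared.flagEnabled = false
```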

And the bit which was a very bold claim,

but I have no reason to doubt it,

is that it said you can wrap any type with this.

So that's, let me just double check that claim actually.

I saw it earlier, but I can't actually see it.

But I'm pretty sure it said you can wrap any type,

which is a bold claim.

- Also, Codable is required.

- Oh, I'm guessing that's how it tackles the "any type" claim.

- Right, interesting.

- So there's a package called Defaults

from Sindre Sorhus that I've used in applications.

In fact, I used that in the iOS DevJobs application recently

and that is also a great package for solving this problem.

But it's always nice to see a good use

of property wrappers in the wild.

- Yeah, that looks really interesting

because that's often a fiddly thing to deal with,

especially at a stage where, you know,

you just want to save something

and you don't have to deal with the details necessarily.

- One thing that

makes me a little nervous about any of these packages

is the easier you make it

to just store random things in user defaults,

the more it will be tempting

to store random things in user defaults.

(laughs)

Whereas when you had to write every line of code,

you know, if you remember, at one point

you had to write to user defaults yourself

and then you had to call synchronize straight away for it to actually go and store.

And so that's, you know, in some ways there's an argument to say that if you make it too

easy to do this, you might end up with too much stuff that you don't expect in user defaults.

And I would still recommend that you have, you know, one location for all of your settings

that then go into user defaults rather than scattering them on every class and type that

you've got around your application.

And in fact, the example that's in the readme file here is for an app settings class that

then has three properties that are all wrapped, and you don't have to write the code to actually

store them in user defaults.

So I think that's a good use of it.

There are also potentially ways that you could use this that might come back to bite you

later.

Nice.

Right.

So my package is actually a pair of packages

and it's, I'm sorry, it's frequent guests Point-Free.

- I did notice that they had released a new package.

- Yeah, it's a new package that is catnip for me.

Also comes with a blog post which we'll link to.

It's called Dependencies and it is a take,

it is their take on dependency injection.

And the reason I really like it is I'm actually using it already.

And it's a very lightweight way of setting up dependency injection.

I'm aware of a number of packages that deal with this and they often come with a bit of

setup ceremony in the sense that you need to instrument your stuff.

You typically need to define a protocol that extracts out the functionality you want to

inject, and then you need to, you know, provide a default

or production implementation you want to use, and then there's a mechanism to override that

in your tests. Point-Free's take is a bit different. It's sort of

modeled after the SwiftUI environment, where you

define the @Environment

annotation on a variable.

And you don't need to feed this into the type.

It's just, it sits there as a property.

And that alone,

the fact that it exists in that way

gives you an override point

in your tests.

So it's very simple to set up.

There's no weaving of anything through any,

you know, initializers or anything of the sort.

You just use it as a property,

and the package comes with a number of built-in

dependency wrappers already set up for common scenarios like date,

UUID, clocks and the like.

For instance, if you have this UUID thing set up,

then rather than call the UUID initializer directly,

wherever you need a UUID you call the property you set up

with @Dependency, and then when you want to override it you can just inject

certain kinds of UUIDs you want to produce, either, you know, hard-coded fixed ones or

incremental ones that auto-increment, stuff like that, and the same for date and clock and the like.
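A small sketch of what that looks like, assuming the current swift-dependencies API; the ReportGenerator type here is just a made-up example, not anything from the library:

```swift
import Dependencies
import Foundation

struct ReportGenerator {
    // Reads from the ambient dependency context, much like SwiftUI's @Environment.
    @Dependency(\.uuid) var uuid
    @Dependency(\.date) var date

    func makeReportID() -> String {
        "\(uuid())-\(Int(date().timeIntervalSince1970))"
    }
}

// In a test, override the dependencies for just this scope:
let id = withDependencies {
    $0.uuid = .incrementing            // 00000000-0000-0000-0000-000000000000, then ...0001, ...
    $0.date = .constant(Date(timeIntervalSince1970: 0))
} operation: {
    ReportGenerator().makeReportID()
}
// id == "00000000-0000-0000-0000-000000000000-0"
```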

And the companion package that I wanted to mention is called Swift Dependencies

Additions by Thomas Grapperon. And I think he's also a previous guest in our package section;

he has a number of packages out working with the Composable Architecture.

And what he's doing is he's adding yet more customization points for other cases,

like, for instance, the application, bundle info, encoders, loggers, process info.

So what that means is, instead of using the shared application static directly,

you can use this one and then have an easy override point so that, for instance,

if you want to test setting your app icon or stuff like that,

you know, stuff that lives on your application,

you can then mock and test it

and even have it mocked in your previews.

So it's a very powerful system,

very flexible and quite lightweight.

So yeah.

- And you said that you're using it already.

Are you using it in the package index or another project?

- No, this is in our internal tooling.

So we have a little internal tool

that we use to view our build pipelines.

- Oh, it's pipelines, okay. - Yeah.

So they shipped this initially as a module

inside of the composable architecture.

And one of their recent updates transitioned

quite a few things inside the composable architecture

to a newer system

and also to their new take on dependencies.

But they've since pulled this out into a separate package

because it's not actually composable architecture specific.

It's not even SwiftUI or iOS and Mac app specific.

This is even more general than that.

We could use it in our server side app.

Of course, yeah.

It would be actually a nice thing to do.

I mean, we have no real reason to do it other than

it's a bit nicer than our current setup,

which is actually quite similar as well

because it's actually based on an older idea

by the Point-Free co-hosts.

But this is a very, very generic way

of setting up dependencies and it's actually quite nice.

- Fantastic.

My next package is a package called JXKit

from, I presume this is a company,

it's Objective, the company,

and I wonder if that's a play on Objective-C.

(laughing)

I wonder if they've been around

since the Objective-C days,

but it's a package called JXKit,

And it's a module for interfacing with JavaScriptCore.

So as I understand it, it works on macOS and iOS, and it should work on Linux as well,

although it is getting an X on the compatibility matrix for that.

It even says it has experimental support for Windows and Android in the readme,

which, given that this is JavaScriptCore, surprised me;

I didn't expect to see Windows and Android support there.

But effectively what you can do is you can open up a JXContext, and you

can pass it some JavaScript and get it to evaluate that JavaScript.
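If you've not used JavaScriptCore before, the basic idea is a context you evaluate source strings against. With Apple's framework directly it looks roughly like this, and JXKit's JXContext plays an equivalent role, as I understand it, on platforms where Apple's framework isn't available:

```swift
import JavaScriptCore

// Create an isolated JavaScript context and evaluate a snippet of script in it.
let context = JSContext()!
let value = context.evaluateScript("[1, 2, 3].map(x => x * 2).join('-')")
print(value?.toString() ?? "no result") // "2-4-6"
```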

And so obviously there are lots of potential uses for this, but I

really liked that this is quite an isolated use of JavaScriptCore.

So obviously you could spin up a web browser, you know, a web view or something

like that and have that execute JavaScript.

And of course, behind the scenes, that's using JavaScriptCore too, but this just takes JavaScriptCore

as a single thing and executes JavaScript for you.

Yeah, I had a quick look at the build failure.

I believe this is a matter of it requiring OS-level libraries on Linux.

I see WebKit core, WebKitGTK, something like that being mentioned,

probably a dev package that needs to be available on Linux

to compile it.

Yeah.

Actually, it's probably worth just taking a very brief detour around that issue because

I think we've actually changed the way we think about how we deal with that kind of

dependency on Linux.

So originally when we started getting requests from people saying that, I mean, the common

one was OpenSSL, you know, a package needs access to OpenSSL.

It was there on the Mac builder machines.

It wasn't there on the Linux builder machines.

And we even implemented a way to bless different Docker images that we would then use for the

builds, if I remember correctly.

Recently we've kind of maybe switched a little bit and have started to add more

of those dependencies into our default Linux builder image and that I think is

working well so far from what I've seen. Yeah, definitely. I mean initially

we were just using a plain Swift image with no additional dependencies

installed but there were a couple of frequent requests like you mentioned

OpenSSL and stuff. And there's no real, I mean, the only problem you could have

shipping dependencies built into the images is if there are conflicting requests, right? I mean,

no one is harmed by OpenSSL being pre-installed on that Linux builder or Linux image that we're

using to build, unless you want a different version or some sort of package as a dependency

that conflicts with that. But even that we can deal with, by either allowing that user to

define their own image, which we would bless, or by us preparing a variant of that image

with a different set of dependencies. So we've actually now built in a bit more

flexibility in our setup there, where we actually have a repository where we manage the additional

dependencies and people can just raise issues there to request additional dependencies to be

installed in our base images. So this is potentially something where we could either

work with the authors of the package, or, if it's obvious what it needs, we could potentially

just add it ourselves, couldn't we?

Yeah, definitely.

It would actually be interesting to see if this is easily fixable, because I think

it's an interesting package to see and run.

I'm thinking a bit further than just JavaScriptCore, because if this is WebKitGTK,

I wonder if you could do snapshot testing on Linux with this.

You know, remember early on we had to

always disable our snapshot tests because our tests run on Linux.

We've since worked around this in other ways by using a third party service to do the snapshot tests.

But this might actually allow, not this package itself,

but a package based on this might allow snapshot testing on Linux.

if it's possible to build something that's similar to a WebKit view.

Uh-huh.

Interesting stuff.

So, do you want to kick off with your second one?

Yes, let's.

So my second package is probably fairly quick to talk about.

It's called Swift NumberKit.

It's by Matthias Zenger.

And it's a really nice package.

As its name implies, it's about numbers.

And in particular, it's about BigInt, like arbitrary-precision integers,

and rational numbers and complex numbers.

So if you have needs around these in your package,

this is what you can use to support them.

You know, you might have a need for that sort of thing,

something that is beyond Int64.
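As a tiny illustration of the BigInt side, assuming NumberKit's BigInt supports the usual integer literals and arithmetic operators, which is its whole point:

```swift
import NumberKit

// 70! has far too many digits for Int64, but BigInt just keeps growing.
var factorial: BigInt = 1
for i in 1...70 {
    factorial *= BigInt(i)
}
print(factorial) // a 101-digit number, far beyond Int64.max
```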

And obviously rational and complex

don't have any representation that I know of

currently in the standard library.

So it's really nice to have that.

And yeah, it claims to have an extensive test suite

and it looks like an all-round nice package, running on almost all platforms.

watchOS is failing, but that might just be something, you know, that's fixable.

But yeah.

Interesting.

NumberKit by Matthias Zenger.

So my final package for this week is by Brad Howes, and it's called Knob.

And it does exactly what you think it will do.

So it generates a kind of rotary knob control,

using Core Animation to draw the kind of circle around it.

And it's not a full circle, at least in the demonstration

that's on the ReadMe file, it's not a full circle.

But it's a kind of 95% of a circle going from low to high,

like you would see on, for example, like a mixing desk

or a radio or something like that.

It's a common control for audio.

And in fact, the package has been extracted from his app,

which I believe is called Sound Fonts,

and the knobs in that app

control various audio effects settings.

And so he's extracted that into a package

and placed it on the package index.

This is not the kind of package

that every application should try and find a use for.

But I do find that these controls

can be a really effective way of doing certain settings.

Like it's effectively a slider wrapped around a curve.

- Yeah, yeah.

- So there are some places where sliders work really well

and there are some places where a knob would work better.

And so one of the reasons that this stood out to me

is that, a very long time ago now,

probably in the iOS 4 or 5 days,

I had to build one of these controls

and it wasn't a huge task to build it,

but definitely it was something that took

a couple of days of work to get it right.

I got the basics up and running really quickly,

but then edge case after edge case after edge case

and you end up spending days on it.

And so I would have been very thankful

for a control like this.

Nice. Yeah, I can see that getting tricky.

Nice. Looks really nice.

All right. My third and final package is called

Swift Screenshot Scribbler by Christoph Göldner.

We actually have got a bit of a Swiss connection going on,

because Matthias Zenger is Swiss and so is Christoph Göldner.

And this is an interesting package.

It's a command line tool really, it's not a dependency you would use in your code directly,

but it's a command line tool which allows you to annotate screenshots. So effectively you

point it at a screenshot or an image and then it generates a frame around it

and you can add text. I believe fastlane has something similar, but this is

a standalone tool that allows you to do that.

You can define, obviously, the string you want to attach,

the caption, the background color,

you can give it gradients and stuff.

There's positioning you can do.

Really nice.

I could see this being useful

if you have a process that produces screenshots,

perhaps automatically for your App Store assets,

and then you can run it through this tool

and generate all the captions and stuff

and do this in an automated way

that doesn't have you click around

and prepare these every time you make a release.

So yeah, Swift Screenshot Scribbler by Christoph Göldner.

- That's fantastic.

Six really interesting packages there.

So should we wrap it up for this week?

- Let's do it, yeah.

Thanks everyone.

- Thanks for listening, yeah.

- And we will see you in two weeks.

Absolutely, back on the bi-weekly schedule as of now.

So, I'll speak to you then.

Speak to you then, bye bye.
