46: A concept born and explored in recent decades

Are you following the Euro, by the way, the football?

I'm not a big football fan. I did watch the last half and the extra time and the penalties of the

Switzerland game, though, which was very exciting. And I'm not a football fan, but I believe we don't

have a great track record at penalties. I think we did exceptionally well in that Switzerland game.

I was going to say England winning on penalties is quite the occasion.

It's unheard of, right?

Yeah, practically. And we're set up for my adopted country, France, possibly going up against

England. Who knows? Although going past Spain...

Oh, really? Is that what's next?

Well, it's not next.

I have no idea what the schedule is.

France have to win against Spain, which is a tough one. And England will have to win against,

and I'm blanking who their next opponent is. So yeah, there's still steps to go.

But it might be a nice, interesting final.

No, I've been watching more Wimbledon than I have football. But I think all the British names have been knocked out of Wimbledon at this stage. So that's a shame.

Yeah, the French as well, I believe. Yeah, just the last one dropped out yesterday. And

I've been seeing a lot of rain in SW19.

Yes, yeah, there has been a lot. In fact, I think I was reading the other day that they were going to

consider reducing the length of some of the matches to try and fit it all in this week. So

I don't know whether that's gone ahead or not, but that would be, I think,

unprecedented if they do that.

But how would they do that? Like up to four games only or super tie breaks?

What's the solution there?

Yeah, I think tie breaks after two matches were on the table.

Oh, wow. Okay.

So I don't know what the, sorry, not matches, sorry, two sets.

Interesting. Well, let's see how it goes.

But I'm not sure what the conclusion of that was. But you didn't tune into this podcast to learn

about football or tennis, I hope, because you're in the wrong place if you came for football and tennis talk. What we can talk about this week is two blog posts, actually. So we published,

first of all, a blog post on Swift.org about the Ready for Swift 6 project that we've been

working on this summer. And I won't go into huge detail about that because we've talked about that

at length on this podcast. But there is a blog post there that gives a bit of an overview about

what we're doing and how we're representing that data on the package index package pages and things

like that. And so that's up on the Swift.org blog, if you haven't yet checked out that page or

are interested in some of the work that went in behind the scenes on that.

And then the second blog post is one that we published just yesterday, which is about a new

expansion to our build infrastructure that we use to do all of our compatibility build testing

and documentation building and all of that. So we have, since the beginning,

used a mixture of machines because we have Mac machines, which you need for performing

the Apple platform builds, macOS, iOS, and friends. And we run some Linux builds using

Docker on Microsoft Azure, which is kind of separate, but all part of the same build system.

And up until quite recently, we've had several Mac mini machines, just kind of bare metal,

manually configured Mac mini machines that we've been running on. And they have been,

I mean, they've been amazing. We started off with two Mac minis at the beginning, and

they ran the initial set of builds, plus hosted the entire website at that point, didn't they?

Yeah. And they were Intel Mac minis, right? Which were actually quite a bit slower compared to the first M1 Mac mini we got once those came out.

Yeah, we had the most overloaded web server ever when we first launched that build system,

because it was trying to serve the web page and constantly process builds in the background.

And over time, we have separated that out. The web server is no longer on

that infrastructure anymore. That's running in Azure too. And the Mac mini machines became

dedicated build machines that do nothing but build packages. And that was great. That was a

great step forward. But of course, there's a couple of restrictions with that. First of all,

it's all manually managed. So maintenance each week was making sure that the machines were still

stable, because running hundreds and thousands and tens of thousands of builds per week

on the same machines can be a source of instability. And making sure they're up to

date with updates and all that kind of maintenance stuff that you have to do.

But also, there were some limitations in that system. Because we were running each of those

compatibility builds on the same machine as other compatibility builds, there was no kind of

isolation between those environments. Now, we never hit insurmountable problems with this.

But it was always a worry that someone would find some way to break out of the Swift compiler sandbox and be able to do something with the Swift compiler that would disrupt or damage

the machine. And thankfully, we never had that situation either. But again, that was always a

worry. But secondly, just build environment isolation is a nice thing to have. To know that

your build is not going to be impacted by something that some other build did at some point.

So we had a really great system. And don't get me wrong, the system worked incredibly well

for years. But it was never ideal. And in a conversation where we were talking to MacStadium about capacity, we ended up talking about their Orka system. So their Orka system is a Kubernetes system for virtualizing macOS. So I think it's Orchestration...

What's the... Do you remember the abbreviation, the acronym?

>> For Kubernetes? >> No, for Orka.

>> Oh, the acronym. Oh, I'm sorry. No, I don't know. I don't know what Orka...

Right, that isn't actually just the whale, is it?

>> No, no. Well, I mean, it is a whale. It's Orchestration with Kubernetes on Apple.

So they have this orchestration system for automating the creation and deletion of

virtual machines on native Mac hardware in a kind of automated way. And so we ended up talking about

that system with them and whether we could switch to an ephemeral build system, which is

that every single build that we attempt gets a completely fresh VM, starts the VM up,

executes the build, and shuts the VM down at the end of it. And that gives us total

predictability because you're starting from the same exact place on every image every single time

because the changes are not persisted back to the image. And it also gives us complete isolation,

both from the impacts of other builds doing things to affect this build, but also of malicious builds.

They would have to not only break out of the Swift compiler, but they would have to break

out of a virtual machine as well, which I think is almost impossible. Never say anything security-wise is impossible, right? But I think it's as good as. Yeah, absolutely. And I think one huge

advantage, and we didn't have to do it often, but sometimes we had to run builds in the actual environment. And that always meant SSHing into one of the builders, picking the right one, checking out stuff there. And, you know, you always sort of worried, well, is this going to be different because I'm now doing it in a different directory than normal? Whereas here, we can just spin up the exact same copy of a machine,

connect to it and run the build just as it normally would. And you can have really high

confidence that you're actually looking at the same thing and debugging the same thing.

And that's just a big help. But if you ask me what the biggest advantage is, beyond, you know, the maintenance that you already talked about and this debugging, it's actually our capacity management, because we now have the situation where every machine in the cluster

can run any build. And previously, that was not the case. Every physical Mac that we had set up

had a specific macOS version, and then one or two Xcode versions. I think that's sort of the maximum

we ran with. But we sort of had to configure them in certain ways. And not every Xcode version can

run on every macOS version. So we had to be, we had to choose what we deploy. And we never had a

great mix where every Swift version was equally available across all machines. There was always

one machine that was handling, like two Swift versions, where the other two were only handling

one version. And you can imagine if lots of builds are coming in, that one machine will be struggling

while the other two are underutilized. And that's great right now, because as soon as something

shows up that needs building, any machine can participate. And not only that, it also means that

in case we need more capacity, it's really easy to just add it in general, and it'll benefit all

Swift versions. So we never need to worry about specifically setting up certain Swift versions

and their availability anymore. And that's just a huge relief. And you see the benefit as we

right now do our Swift 6 reprocessing. So over the summer, we're continuously hitting the system

with lots and lots of builds of a very specific Swift version. We can just run full throttle,

and we don't need to tweak anything about our setup. Like previously, we put up a couple of

extra Swift 6 builders to help us deal with that immense backlog as soon as the new Swift 6 version

came out. Now we don't have to, we just have everything available and we can just run with it.

And that's just a huge relief. And that situation was quite extreme at times. Sometimes we needed

three different host operating systems to run four different versions of Xcode. So at one point,

we were running Monterey, Ventura, and Sonoma, with Monterey running one Xcode version, Sonoma running one Xcode version, and Ventura running two. I think that was when we were running Swift 5.6 and above. So yeah, it was never an even load. And now as part of this Orka system, we get to,

on a job by job basis, say, let's spawn a Swift 5.7 build system, or let's spawn a Swift 5.10

build system. And it comes up in a remarkably short amount of time. I did a little testing

for the blog post as I was writing it. And the average time to spawn one of these VMs is three

seconds, or in fact, under three seconds, it was like 2.8, 2.9 seconds, which I mean, I know it's

not actually booting the machine, it's all saved state and it's just restoring that state into

memory. But still, that is an impressive amount of time to have a fully usable, not container,

but virtual machine running macOS. Yeah, I mean, I think they're sort of very similar because,

you know, on macOS, if you run Docker, you're actually running a virtual machine.

On Linux, it's a bit different, but just in terms of how you use it, if you're not familiar with

Kubernetes (and Orka is sort of a flavor of Kubernetes; I've not actually used pure Kubernetes, if you will), you can think of it just as if you were spinning up a Docker container here. That's effectively what we're doing, except it's Mac machines that are coming up, and all the rest really behaves pretty much like that. And my understanding is also that the Orka commands you run and the output you get are effectively like Kubernetes. So if you're familiar with Kubernetes, Orka will be really familiar to you, I believe. It's a really nice

system. And we should talk a little bit about the hardware that's running as well, because the

Mac minis that we had previously have had a little upgrade. And we're now running on...

They've bucked up, yeah.

Yeah, they certainly have in style too. We're now running on Mac Studio machines. So we're

actually running M1 Mac Studio machines, but with M1 Ultra chips in them and 128 gig of RAM each.

So what that actually means: there is a limitation on all virtualization on the Mac, which is that Apple allow no more than two virtual instances of macOS running on a machine simultaneously. So what we're doing is we're splitting those Mac Studio

machines straight down the middle with 10 cores per VM and 64 gig of memory per VM, which is still

an enormous amount for compiling a Swift package. Even the largest packages are not going to run

out of memory on our new VM based system. So yeah, and we have eight of those Mac Studio machines

for a total of a terabyte of memory, which is kind of mind-blowing. And I think 160 cores

of CPU processing, which gives us a total concurrency of 16 builds running at the same time.
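Those figures line up as a quick back-of-the-envelope check; this sketch assumes 20 CPU cores per M1 Ultra Mac Studio, which is where the 10-cores-per-VM split comes from:

```swift
// Back-of-the-envelope check of the cluster figures mentioned above.
// Assumes 20 CPU cores per M1 Ultra Mac Studio.
let machines = 8
let ramPerMachineGB = 128
let vmsPerMachine = 2   // Apple's limit: two macOS VMs per host
let coresPerVM = 10

let ramPerVMGB = ramPerMachineGB / vmsPerMachine        // 64 GB per VM
let totalRAMGB = machines * ramPerMachineGB             // 1024 GB, i.e. a terabyte
let totalCores = machines * vmsPerMachine * coresPerVM  // 160 cores
let concurrentBuilds = machines * vmsPerMachine         // 16 builds at once
```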

Yeah. And it's a real shame, as I said, that we can't boost that up a little higher.

Yes. If anyone from Apple is listening, please drop us a kind word with whoever is responsible

for that saying that we would really appreciate it because it's actually, it's worth talking

about that for a second. We are able to, especially when we're doing these Swift 6

processing runs where we fill up the backlog with a hundred thousand jobs ready to process,

we are able to saturate our cluster with work to do for more than seven days of processing time.

You know, we queue up a hundred thousand jobs and it takes about seven days to clear that backlog.

And we had a look at the underlying CPU usage during that seven day period. And the underlying

CPU usage was averaging out between 30 and 40% across the entire Mac Studio cluster.

If we were able to spawn more VMs within our cluster, like more VMs per host,

then we could even that out and drastically make our build environment more efficient.

Because the build process has several stages: a Git checkout, then a Swift compile, then a documentation build if the package author opts in, and then uploading that documentation to S3 and reporting back results before kind of finishing the build.
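Those stages could be sketched like this; the names are illustrative, not the actual builder implementation:

```swift
// The stages of one compatibility build, as described above.
// Names are illustrative, not the Swift Package Index's actual builder code.
enum BuildStage: String, CaseIterable {
    case gitCheckout        // network-bound, CPU mostly idle
    case swiftCompile       // CPU-heavy
    case documentationBuild // CPU-heavy, only if the package opts in
    case uploadDocsToS3     // network-bound, CPU mostly idle
    case reportResults      // a quick API call back to the index
}
```

The network-bound stages at either end are exactly where the host CPU sits idle, which is why packing more VMs per host would even out utilization.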

And there are periods in that build process where the CPU is not very busy. So during the clone and

during the documentation upload at the end, the CPU is effectively sitting idle. If we could have

more virtual machines per node, we could even that CPU out significantly on the hosts and get more

work done with the same amount of hardware, which would be something that we'd love to do. So I'm

fairly confident the person responsible for that decision is not listening, but just in case they

are. - Send us the secret user defaults command to disable that limit. - Or send us the secret user

defaults command, yeah. I'm sure if you disable System Integrity Protection or whatever, you could probably

find a way to make it work, but that is not what we're doing. We are faithfully following the

license agreement that we agreed to. So the good news is that we've been doing

quite a bit of testing with this system before we put it live. And we ran a lot of the early Ready for Swift 6 build queues with this cluster before we switched over our production workload

onto that as well. And it has just been an absolute dream to work with. There was a little

bit of software development that we needed to do to get it all working. We needed to write a fairly small piece of software that orchestrates the cluster, basically. So it reads

incoming jobs that it needs to process, decides which VM type to spawn, spawns the VM,

SSHs into it, executes the build commands, and then at the end shuts that machine down

and reallocates the resources back to the cluster. But apart from that, it's been fairly

easy. I don't think we needed in the end any changes to our builder tool. Is that right,

Sven? No, in fact, what made it a bit more complicated, I think, was that for a time, we actually ran both, because we always had builds coming in. It wasn't really practical to say, well, we're not going to process builds for a few days while we sort this out. So we set this up so that we actually ran both systems at the same time. And you can imagine

that's a bit more complicated because it's a very different way of bringing up jobs,

because one machine has a GitLab runner permanently running that picks up the jobs,

and the runner knows exactly what machine it's on. In the other case, the machine is ephemeral,

so there's nothing running to pick up jobs. So something else needs to do that. And we sort of

had to move the logic that sits in the runner into this orchestrator and driver. And at the

same time, not pull out the old one. So we needed to do both things at the same time while we were

transitioning over. And that was the fiddly bit that also worked out quite well. So after we

figured out how to do that, that really allowed us to take our time in transitioning over and making

sure it works as we expected. Because in a system like that, you don't really want to go back and

forth, right? You don't want to transition too fast and then realize, "Whoa, whoa, whoa, hold on,

this isn't actually working." And there were a couple of cases where it did not quite work out

all right in our first attempt. And we needed to sort of stop processing for a bit and redo some

things. This was mainly on our side, where we didn't report the results back properly, or the queues didn't see a build failure and thought we were actually okay. Little fiddly bits that go wrong when you... I mean, everyone who's ever set up a CI system knows

exactly what I'm talking about, right? You think you have it running, then you send the job to CI,

everything's green. And then you look at the logs and you see, "Oh, it actually failed." And you

didn't catch the last build result and that sort of stuff. And when you run lots of builds,

you kind of don't want to get that wrong. And that was the fiddly part.
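The orchestrator loop described earlier can be sketched roughly like this; every name here is invented for illustration, none of it is the real orchestrator's API:

```swift
// Hypothetical sketch of the ephemeral-build loop: spawn a fresh VM for the
// job's Swift version, run the build over SSH, then tear the VM down again.
struct Job { let package: String; let swiftVersion: String }
struct VM { let id: String }

func spawnVM(image: String) async throws -> VM { VM(id: image) } // stub for an "orka vm deploy"-style call
func deleteVM(_ vm: VM) async throws {}                          // stub for the matching delete
func ssh(_ vm: VM, command: String) async throws {}              // stub for running the build over SSH

func process(_ job: Job) async throws {
    // One saved VM image per Swift/Xcode combination, e.g. "build-swift-5.10".
    let vm = try await spawnVM(image: "build-swift-\(job.swiftVersion)")
    do {
        try await ssh(vm, command: "builder build \(job.package)")
    } catch {
        try await deleteVM(vm)   // always hand resources back to the cluster
        throw error
    }
    try await deleteVM(vm)
}
```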

- Not content with setting up our own CI for our tests, we decided to set up CI for the entire

package ecosystem. - Yeah, well, actually, we're not; we're just running builds. I mean, we're never going to run tests, I'm going to say that here and now, but just the builds are fun enough sometimes. - Exactly. So I think it's

a great upgrade of both our capability and also the kind of stability and isolation of our build

environment. I sleep completely at ease these days, knowing that there's zero risk of

waking up to a build machine which has been compromised.

- Yeah, that's actually a good final point maybe to raise because one of the issues we did

occasionally see that was a problem with the old system was when one of our builders fell over and

it was a specific Swift version that was only served by one builder. It could actually happen

that our queue filled up with those builds and then we wouldn't actually continue processing.

And that's actually something that can't happen anymore. And that's, I think, the key thing that

we're actually gaining operationally apart from all the maintenance stuff and debugging

that really, you know, a couple of times caused trouble with the old system.

- So we'd like to say a huge thank you to MacStadium. They have been supporting this project since the beginning. And they have a very generous open source program, actually, which is available to any open source project: if you require a Mac mini or something like that for your project, if GitHub Actions are no longer cutting it for what you need to do in terms of CI, they offer an open source program to provide that kind of resource for free. And we'll put a link in the show notes to that. But also, obviously,

it's a little bit of a different situation going from a few Mac minis to eight Mac Studios,

but they continue to support us by making this possible. So, you know, not to go into the full

details of it, but this would not be possible without their support. So thank you so much for

everything you've done for us. And as important as the supply of the resources is, every single conversation we've had with them has been met with, well, let's figure out a way to do this. So when we started talking about ephemeral builds, it wasn't a case of, well, are we actually going to be able to keep this within a sensible budget, but just: how are we going to do it? What's going to be the best solution? And we've kind of worked from there. So I can't say enough good

stuff about both MacStadium and Azure; they are both exceptionally accommodating with

every request that we've made to them. Yeah, absolutely. Huge, huge. Thank you.

All right.

Right. There's another piece of news while we were talking about the build system. Maybe that's

something worth highlighting right now. Because this week we crossed a nice threshold and that's

10% of all packages are now opting into hosted documentation on the Swift Package Index. And I was sort of tracking this number, so I saw it coming. There's been a really interesting increase recently in the number of packages opting in; for new packages, the ratio is quite high. Well, higher than 10%. I mean, that's obviously how we're catching up, starting from zero. But I

sort of was thinking back when we rolled out documentation hosting, I thought, well, you know,

getting a couple of percent would be nice, you know, like maybe a couple of hundred packages.

And now we've crossed the 750 mark and the 10% mark. It's actually quite extraordinary how many authors are actively opting in, because this is not something we are doing, right? Every package author needs to add this .spi.yml file to their repository. I mean, it's not a huge change, but it is something you actually need to be doing and be looking at.
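For reference, the opt-in is a small manifest at the root of the repository. It looks roughly like this; check the Swift Package Index documentation for the exact format, and note the target name below is a placeholder:

```yaml
# .spi.yml — opts a package into documentation hosting
version: 1
builder:
  configs:
    - documentation_targets: [MyLibrary]  # replace with your library target's name
```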

It takes a git commit.

Yeah. That's the thing. Yeah.

Anything that requires any kind of git commit is asking the package author to do something

that they probably weren't expecting to do. So no, it is not an onerous procedure by any means,

but it is something that takes action rather than a passive process.

Yeah. Yeah, absolutely.

One thing that I was curious about when you told me this stat during the week was

whether it has changed recently, because obviously when we first launched the Package Index, we didn't have documentation hosting. And so I have a little quiz question for you, based on the fact that we added to our add-a-package process a little

call to action at the end, which is to let package authors know that they can host their documentation

with us and link to the documentation that we have on how to do that. And so what I did is I

did a little filtered query to see what the percentage of packages that have adopted our documentation system in the last year was. So, given it's 10% overall, for packages added in the last year, what do you think we're at?

Well, I'm slightly cheating because I do track the number of new packages, you know, that number. But from memory, I'd say maybe 25%.

No, it's not quite that. It's just over 16%. There are 306 packages in the last year that have added documentation. Although that number is slightly skewed,

I think skewed lower because of the way that our rename process works. So we track when people

rename their packages within our organization. And we also track moving packages from one

organization to another on GitHub. And every time we detect a rename, that actually shows up in our

kind of stats as a removal and an addition of a package. And so there are potentially some

double counted packages there because of that process. But I don't think that would,

I don't think it's enough to be statistically significant.

Yeah, I'm certainly probably biased because in just the recent weeks, it was 30%. It's been really high, the ratio of adoption. I'm not sure why that is. Maybe it was a WWDC documentation thing or something, I don't know, but it's been noticeable, the uptick in the ratio.

I love the fact that we're so high on hosted documentation. I think it's great to see. And it's great to see so many package authors putting the time and effort into

creating documented packages.

Yeah, absolutely. Yeah.

Let's do some package recommendations, shall we?

Let's go.

Do you want to start us off?

All right, I shall do so. My first package is a package I absolutely love. It's called

Swift Testing Revolutionary by Kohki Miki. And I just love this. When I first saw Swift Testing, I was really excited about it. And we've now seen its introduction at WWDC. I just love testing. I mean, I spend a lot of time in tests, so anything in that area

really gets me excited. But I also knew looking at our test suite, just in the main project,

we have 766 tests. And obviously, that's just tests, not assertions; those are way more. I thought, well, it's going to be a while until we actually use that extensively, right? Because you're not going to dive in and adopt it across the board and make all those

changes. But then I saw this amazing tool last week. It was announced on the Swift forums.

And it's a tool, like a package plugin that you can add to your package, the new plugin mechanism that you can trigger in Xcode or via the Swift Package Manager. Or you can use it as a command line utility. And what it does is rewrite your tests to the new Swift Testing syntax.

And what that means is it adds the @Suite annotation, converts your test class to a struct,

or optionally leaves it as a class. You can configure that. It adds the @Test annotations

to each of your test functions. And it converts your XCTest assert calls to the new #expect

expression. And that is an amazing time saver. And I actually tried this out in our repository

with the 766 tests. So this is like, it's so fast. At first I thought, well, it probably didn't do

anything because it just finished in like less than half a second. And it rewrote all the tests.

Now in our test suite, it isn't working perfectly. And I'll explain why in a second. But in

a plain case where we were using XCTestCase as the base class, it converted the tests perfectly.

And they ran and they passed. And it was excellent. It was just no notes. It just worked.
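For a sense of what that rewrite looks like, here is a hand-written before-and-after; this is an illustration of the shape of the change, not the tool's actual output, and `parse` is a made-up function under test:

```swift
import Testing

func parse(_ s: String) -> Int? { Int(s) }  // toy function under test

// Before, in XCTest, this would have been:
//
//     final class ParserTests: XCTestCase {
//         func testParsesInteger() {
//             XCTAssertEqual(parse("42"), 42)
//         }
//     }

// After the rewrite, in Swift Testing syntax:
@Suite struct ParserTests {
    @Test func testParsesInteger() {
        #expect(parse("42") == 42)
    }
}
```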

The problem it has is with cases where we use a different base class. And that's where we have

snapshot tests or database tests where we inherit from our own class and then use

objects of that class in the tests. And because it converts that, and I didn't fiddle with any settings, I just used the default, it converted it to a struct. And then obviously that lost all the inheritance, and the tests didn't compile because they didn't know about the DB or the app properties. They just didn't exist on the struct. But that's

really simple to fix. Perhaps even just by using that other mechanism and making it a class with a @Suite annotation, or just fixing it up. I mean, that's trivial. The main thing is that it rewrites all the tests, adds the @Test annotation, and converts all your asserts.

And that's the big thing because that's hundreds and hundreds of lines of changes

that we just save. It's amazing. Yeah. This is not a tool that you're going to need to run as part of a build process, or anything where the output is going to need to be perfect. Fixing up a few edge cases after the tool has run is a perfectly acceptable way of using this tool. And because once you've run it, you've run it and that's it,

it's converted, it's done. So yeah, what a great tool. I hadn't seen this. So that was fascinating

hearing about that. Great. Yeah. It's really nice. And obviously you can do it file by file. So it's

like, I'm really looking forward to this because I really want to use some of the new stuff.

And obviously that's all predicated on actually moving over. And if this allows us to move over fast, that's super exciting because it'll open up Swift Testing much, much sooner than I

would have thought it would. I must admit, I had wondered whether we would ever actually migrate

to it because of the fact that we have so many tests already. I thought maybe we would approach

it in the case of, well, if you're writing new tests, then you write them in Swift testing.

But I'm very happy to hear that that won't be necessary. Yeah. Well, we'll see what happens, but just this short test showed me it's probably quite viable. I'm just really hopeful. Did you check that the result of the conversion process was not just XCTAssertTrue, to make sure all the tests pass? No, it's actually quite remarkable, because,

yeah, I know you're joking, but, for instance, it rewrites an XCTAssertEqual where you have (a, b) into #expect(a == b). So it obviously uses SwiftSyntax to actually understand your test code and intelligently rewrite it. So you get a lot of mileage out of this. Even if all it did was rewrite the

assert macros, the asserts, you know, well, they're actually macros in the original Objective-C; I think in Swift they're functions, but in my head, I still think of them as macros. If it just rewrote those, that would already

be a huge time-saver because that's obviously the most lines in that diff is the asserts and then

tests, you know, the individual tests. And then I think we have maybe 15 test classes. So even if it didn't convert those, that would be trivial, right? Fifteen things you can manually convert; it's the others that are important, and those seem to be coming across really well. Right. So yeah, big thumbs up. So from a brand new package like that one

to my first recommendation this week, which is for a package that has been around for seven years. And it's actually a package we use in the Swift Package Index. It's far from a new package, but it is one that received a major version upgrade this week. And it is Soto by, well, the Soto project is the official organization name, but it's primarily written by Adam Fowler.

And I was looking through packages and there were actually a huge number of new packages this week.

But this one, the major version struck me because it is a full conversion away from event loops to Swift concurrency. So that's a great step forward for the package. But I should also,

I should also explain what the package does for those people not familiar with it.

It is a Swift language SDK for all of the Amazon Web Services APIs. So Amazon are constantly adding

and changing their APIs, I'm sure. And this effectively is guaranteed to always be up to

date with those because it is kind of generated from their API definitions. So this is very

unlikely to be used from an iPhone app or anything like that, or a watchOS app. I don't start up EC2 instances from my watch very often. But certainly, if you're doing

any kind of server side Swift, then this will likely be already in your project. And so it's

worth a note when it does something significant, like upgrade itself completely from event loops to

Swift concurrency. And as I looked through, I was actually kind of surprised that in the 40-something episodes of this podcast so far, we hadn't yet talked about Soto.

And I thought it was time. Oh, we haven't.

That's, that's interesting. I checked the show notes and it's not there yet. Have you

also checked its old name? Because there was a rename; it was called something AWS kit or something before. Oh, I don't know whether I did. I don't quite recall its original name,

but obviously it was called something AWS and that that needed to be changed when AWS introduced

their own. We have nothing in our, in our list of packages that we've talked about on the podcast

that mentions either AWS or Soto. So remarkably I don't know how we haven't talked about it because

it is, it is one of those packages that I thought would definitely be on this list, but it's, but

it's not anyway, worth a mention Soto by Adam Fowler for all of your AWS needs.

So my second pick is called Swift Sessions by Alessio Rubicini, and that's also a package I saw on the Swift forums. Alessio also points to an introductory blog post, which explains the package in a little more detail. And there was a really nice line in that blog post. The package is about so-called session types, which is apparently a computer science concept, and he writes: if you've never heard about session types, that's normal; this is a concept born and explored in recent decades. This is where I expected, like, weeks or months, but he says decades. So I certainly missed it in recent decades. I'd never heard about session types.

It's new to me too.
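For readers who, like us, are new to the idea, here is a toy sketch of what a session type can look like, using nothing but plain Swift phantom types. To be clear, this is not SwiftSessions' actual API; every name in it is made up for illustration. It encodes the exchange discussed below: one side sends an Int, receives a Bool, and ends, while the other side does the mirror image.

```swift
// Toy sketch of session types via phantom types, NOT the SwiftSessions API.
// Each endpoint's protocol is spelled out as a type, so a mismatch between
// the two sides becomes a compile error rather than a runtime surprise.

enum End {}                      // the conversation is over
struct Send<Message, Next> {}    // send a Message, then continue as Next
struct Recv<Message, Next> {}    // receive a Message, then continue as Next

// Side A sends an Int, expects a Bool back, then ends.
typealias SideA = Send<Int, Recv<Bool, End>>
// Side B is the mirror image: receives an Int, sends a Bool, then ends.
typealias SideB = Recv<Int, Send<Bool, End>>

// Compute the "dual" of a session type, i.e. what the other side must do.
protocol HasDual { associatedtype Dual }
extension End: HasDual { typealias Dual = End }
extension Send: HasDual where Next: HasDual {
    typealias Dual = Recv<Message, Next.Dual>
}
extension Recv: HasDual where Next: HasDual {
    typealias Dual = Send<Message, Next.Dual>
}

// This call only type-checks because SideB really is the dual of SideA;
// swap the Bool for a String on either side and it no longer compiles.
func assertDual<S: HasDual>(_: S.Type, _: S.Dual.Type) {}
assertDual(SideA.self, SideB.self)
print("the two sides line up")
```

The real package builds channels and communication primitives on top of this kind of type-level bookkeeping; the sketch only shows why the compiler can reject a mismatched exchange.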

Well, there you go. We kind of missed it in recent decades. The blog post does a really nice job explaining what it is. Nonetheless, it seems like a complicated topic, but in a nutshell, it uses type checking to assert, or implement, a message exchange and make sure that the parts line up. So imagine you have a side A, and this is sort of taken from the blog post. Side A sends an integer across the wire, then expects to get a bool back, and then the communication ends. So that's side A: sending int, getting bool, and then end. And side B has the mirror image: receiving an int, sending a bool, and then it ends. And you can see how these line up, right? Sort of like teeth in gear cogs: you have the exact right alignment of sending and receiving an int, sending and receiving a bool, and then both expect to end. And this is something that this package allows you to encode in the type system. So at compile time, it's ensured that your messages line up, and not just the types: the sequence order of your message exchange is also asserted at compile time. And that seems like a really nice guarantee to have, and this package makes that possible. One note: if you use this, you probably want to make heavy use of type aliases and type inference, because

otherwise you will get hurt by all the pointy angle brackets in the type signatures. It has lots of those going on, as you might imagine. But it's one of these cases where the type system really opens up possibilities to use the compiler to check things for you that weren't possible before. And I find it really interesting to see these kinds of new packages pop up that make use of these things. That's great. So there you go. That's

Swift Sessions by Alessio Rubicini. My final package for this time is a package called Tabula by António Pedro Marques. Let me start with what the package does, and then I'll tell you why I picked it. The package is a spreadsheet

reader and writer. You can combine it with another package called CoreXLSX by the CoreOffice organization, which, from the author information, looks like it was written by Max Desiatov, who is now at Apple but before that had his fingers in all sorts of open source packages. And the combination of them allows you to read and write data from spreadsheets.

So that in itself is, you know, a fairly niche thing to want to do. Not every application is going to want to read and write spreadsheet files or Excel files. But it reminded me of a project that is probably both the very worst code I ever wrote as a developer and also, in its own way, the best code I ever wrote

as a developer. So this is way back in the 16-bit era when I was working for a software company that

wrote HR software, so job evaluation and kind of pay modeling software and that kind of thing.

And we had this job evaluation system where you would put definitions of roles into the system. And then a team of management consultants that we had in the company would work out, within that business, what the important aspects of a role were that made it more responsible, producing effectively a ranking of jobs from low to high across an organization. And it was used in lots of equal pay disputes and things like that, where you could prove that the job required something rather than the person doing it requiring something. So it was a great piece of software. But the management consultants, who came up with the rules on a company-by-company basis and determined what was important in that company, loved using Excel spreadsheets to basically make these rules. And what we would then have to do was take those Excel spreadsheets and convert them into code. And that was okay. I mean, it was never terrible work, but it was fairly tedious work.

And it was also prone to mistakes. So when you're reading from an Excel spreadsheet with an

extremely complicated formula behind it, the potential for errors there is very high.

So we did lots of testing to make sure we got it right and that kind of stuff. And

when we were moving this to work on the web, because originally this was just desktop software,

we came up with this idea that actually maybe we should just open the Excel spreadsheets,

poke a value in, let Excel do the calculation and read the value out of the other side of it

every time we needed to do these calculations. So one of the first pieces of web software that

I ever wrote was a 16-bit Visual Basic COM component, COM being the Component Object Model, I think, which allowed Visual Basic to interact with ASP, not ASP.NET, just classic ASP, the original and very primitive Microsoft web scripting language. But it could call out to a COM component that then executed a command-line 16-bit Delphi application that could read and write the Excel spreadsheet. So it would poke a number into the Excel spreadsheet, read the number back out of the spreadsheet, and leave a file on disk that the COM component then picked up and stored in the database. So you too can build your own Excel Rube Goldberg machine with Tabula.

- And that's how it was done back in those days.

- I mean, that was it. And what was incredible was you'd look at the processes running on this web server and see, you know, 14 different copies of Excel all in memory doing calculations as the web server fielded requests. And the reason I say it was both the worst and the best: I think it's quite obvious why I'm saying it's the worst bit of software I ever wrote. But the reason it was the best bit of software I ever wrote was because it worked flawlessly for years and there was never a problem with it. It never got it wrong; it just worked perfectly.

- Exactly.

- And we can't ask for much more than that, right?

- It is sometimes extraordinary what lengths organizations go to, especially banks. I worked at a few banks. They'll get a system running once and then they'll go to extraordinary lengths to keep that system running, no matter the cost. They'll build anything around it just to keep that one system going, the one that managed to get the thing working in the first place, and keep chugging away with it. It's sometimes quite remarkable.

- Exactly. But yeah, I mean, it was great. And who knows, maybe it's still running today. I don't think it is, but who knows? So I think we should probably wrap it up there.

And we will be back in three weeks with another episode with some more packages for you to have

a look at and news from the Swift Package Index. But until then, I will say see you in a few weeks.

See you next time. Bye bye.

Creators and Guests

Dave Verwer
Host
Independent iOS developer, technical writer, author of @iOSDevWeekly, and creator of @SwiftPackages. He/him.

Sven A. Schmidt
Host
Physicist & techie. CERN alumnus. Co-creator @SwiftPackages. Hummingbird app: https://t.co/2S9Y4ln53I