58: People used to hand code assembly

I think the first thing I want to mention in this episode is I booked my ticket to the

Server-Side Swift conference. Did you book a ticket?

I did in fact book it. I booked the ticket, booked the hotel. The only thing I haven't

booked yet is the flight. So looks like we'll finally meet. Fingers crossed.

You're well ahead of me then because I haven't yet booked the hotel, but I do have a ticket.

And yes, it will be nice to both be in the same place at the same time because

you went last year and unfortunately I was not so well last year when the conference was on,

and so I couldn't attend. And then the year before I was there, but you couldn't attend

the year before. So we've nearly met at the conference twice now, but this year, this is the

one. This is the one, exactly.

Only took us five years.

It only took us five years, exactly. But just for anyone who is interested, it's worth mentioning

the details of the conference. Early bird tickets are still available. And if you have any interest

in Server-Side Swift, which if you're listening to this podcast, you probably do, then the

Server-Side Swift conference is 2nd to the 3rd of October, 2025 in London. And like I say, early

bird tickets are still available. There's no speakers or schedule up yet. In fact, I think

there's still an active CFP for the conference. So maybe you could even speak at the conference

if you're listening to this. Yes, the CFP still appears to be open until June the 30th. There we

go. So you may, if you have something interesting to talk about at the conference, submit yourself

a CFP, but if not, then consider a ticket if you have an interest in Server-Side Swift.

Yeah. And if you're, as Dave said, if you're interested in Server-Side Swift development at

all, just get a ticket. You don't need to wait for the speaker list because the speakers will

be great. It was a great conference last year and the talks are, you know, they're certainly

very important, but also everything around the conference is really great. There's a lot of

time to catch up, socialize, meet people. I found it really great. Lots of people I'd only

interacted with online that I had a chance to meet last year. And the few that I didn't

manage to bump into, I hope to run into this year. And yeah, same for everyone else. I think it's a

great venue too. We should mention this is happening at the Science Museum in London.

So even if you don't want to talk to anyone at all, you can just roam the halls and look at the

Lunar Lander and stuff like that. Look at old airplanes and stuff, but you should talk to people.

Yes, you should. It's definitely, that's the best bit of any conference. The year before last,

I actually did a small presentation at that conference and the venue is fantastic. Of course,

the Science Museum is wonderful, but at the time, two years ago,

it was in the IMAX theater there. And so my slides were projected onto an IMAX screen,

which is the first time I've ever had that happen. So that was fantastic.

Yeah, it's a great venue.

You don't really know what big fonts are until you've had them projected on an IMAX screen.

So that's that. I think there's also a couple of little bits of news from

the project over the last couple of weeks. First of all, we shipped two new compatibility platforms

for our compatibility matrix on every package page. So there was a blog post and we'll link

the blog post in the show notes, but we have added both Wasm and Android to all of our

compatibility checks. And it's been kind of interesting to see the results of those platforms.

They are significantly more compatible already than you might expect. Now, obviously,

whenever you have a package that relies on any of the Apple-specific UI frameworks, UIKit,

SwiftUI, AppKit, any of those kinds of things, you're automatically going to fail compatibility

with Linux, Wasm, and Android. But actually, I was pleasantly surprised to see that

almost 19% of packages are already compatible with Wasm and almost 28% are compatible with

Android already, which is kind of remarkable.

Yeah, it's great to have those platforms in. It was really surprisingly easy to actually add them

thanks to the Swift cross-compilation efforts. I think Swift SDK support was introduced with Swift

6.0, and with Swift 6.1, Android came on board as well. And it's really

like you install these SDKs alongside your toolchain, and that allows you to compile for the target

platform. So I think what effectively it does is sort of the SDK is a bundle of artifacts you need

to cross-compile like headers and additional libraries and stuff. And then you can run on

your host platform and compile to the target platform. And because we've already been running

Linux builds, it really meant just creating almost like a variant of the Linux build in both cases,

because the builders are Linux-based in those cases, and you install the additional SDKs and

you just compile to Wasm and Android. And yeah, it took some testing and stuff, but it was easy to

add. And like every year, I just hope that at some point we'll even get to something similarly easy

with the Apple platforms, because we still have to wrestle with whole Mac virtual machines for all of

those platforms. If those SDKs also had something like that, that would be amazing and make it much

easier to run these compatibility builds across the board. The big one, of course, that's still

missing is Windows. And that is something that we would both love to have on that compatibility

matrix, which would take our number of platforms from eight to nine. But we do still have a couple

of technical issues. The software that we've built to run the builds that we use for compatibility

testing has a couple of dependencies that are not yet compatible with Windows. And so that would be

quite a lot of work to work around those dependencies. But also, we would need to flesh

out an entire Windows build infrastructure, which is something we don't have. And unfortunately,

we're on the verge of losing our Microsoft Azure sponsorship. So it's also not something that we're

able to kind of put a whole load more resources behind in terms of that build infrastructure

right now. So it's definitely on our minds. We'd love to do it, but it's not in our short-term

future, I don't think. Yeah. Yeah. Unfortunately not. It's interesting how often it actually comes

up. Windows support is being requested quite frequently. The main dependency we're lacking

is SwiftNIO. And I do know that there's interest in actually porting that. I know the Vapor folks

are sort of interested in that as well. They have lots of requests, surprisingly,

for Vapor on Windows. And that's obviously a dependency for them as well. So I'm confident

that'll see changes at some point. It can't be too far off. There's a lot of interest in Windows,

and I think we'll get it sooner rather than later. Fingers crossed.
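For anyone who wants to try the cross-compilation setup described a moment ago, it boils down to a couple of commands. The SDK bundle location and identifier below are placeholders, so check swift.org and the Swift SDK documentation for the current ones:

```shell
# Install a Swift SDK bundle alongside your toolchain (location is a placeholder).
swift sdk install <url-or-path-to-swift-sdk-bundle>

# List installed SDKs and their identifiers.
swift sdk list

# Cross-compile your package for the target, e.g. a WebAssembly SDK identifier.
swift build --swift-sdk wasm32-unknown-wasi
```

Because the build runs on your host platform and only targets Wasm or Android, a Linux build machine can produce both, which is what makes adding these platforms to the compatibility matrix comparatively cheap.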

Yeah, absolutely. And then, so we've actually published two blog posts in the last two weeks,

which the second one was a little celebration of our five-year anniversary of the launch of

the Swift Package Index. So we launched just before WWDC 2020, which if you remember,

was actually a little delayed in 2020 because I think there was some global event happening

or something like that, but I forget now. So WWDC was a little delayed, but we launched just before

the conference in 2020. And we published a little blog post with some statistics and some

kind quotes that people gave us about the Package Index with a couple of graphs showing how package

growth has happened over the last five years and how much documentation we host and how many builds

we do and all sorts of stuff. So again, I'll include a link in the show notes to that blog post.

Yeah. Yeah. And on that occasion, thank you to everyone who's supporting and using the project.

It's just amazing to see how much it's grown. We started with, was it 2,500 packages? It's 9,000

now. I mean, of course, it's not a lot if you compare it to other ecosystems, but it's a lot

of growth in those five years, and it has a lot of really interesting and fun packages. I use the

site itself a lot to find stuff, and not just because of working on the Swift Package Index

and other aspects of the project, but also when I'm trying things out,

it's a site I use often. And interestingly for me, my use has shifted to looking at the documentation

pages much more. I think if I had to say what I use the site for has dramatically shifted towards

documentation. And I think that that's really a testament to the efforts by the community to

actually add documentation to their packages. I'm constantly amazed how high the ratio is of

new packages that are being added with documentation opted in, and the quality of

the documentation in particular. There's lots of packages that have good documentation and some

have amazing documentation with tutorials and migration guides. It's fantastic work being done

there. So yeah, thank you, everyone. Yeah, absolutely. Thank you so much for

not only using it, but the people who also sponsor us through GitHub. That's incredibly

important to this project being as successful and available as it is. So it's been a fantastic

five years of running it. So thank you very much. Here's to the next five. And then some.

Exactly. Yeah, exactly. Yep. All right. I think there's another big thing happening this week.

You just mentioned it like five years ago. Yeah, five years ago, we launched just before

WWDC. The actual five-year anniversary is probably exactly on the day this podcast episode will come

out, which is in the middle of WWDC 2025. And yeah, we've had an interesting keynote yesterday

and State of the Union, I think. Did you sacrifice any of your devices yet on the beta altar?

Or did you? No, not yet. So we should mention that we're recording this on the Tuesday. So that's,

yeah, as of today, we've only seen the keynote and the State of the Union. Although I did notice

that it appears that every session video has been released yesterday. So I have a feeling.

Yeah, I have a feeling that unlike previous years, where they release a few every day throughout the week,

I think they've just dropped the entire set this year, which is great. But the only ones I've

watched so far are the keynote and the State of the Union. And I haven't yet sacrificed any of

my devices. The only thing I've done so far is install Xcode 26. Yeah, which is, well, actually,

before we move on to what we thought of the thing, I read a blog post today by Daniel Seidey,

who made the case as to why the unified OS versioning is a huge advantage to us as developers.

And it's basically around all the availability checks for things like SwiftUI, where you might

say before you declare a function or before you use a function, you might say, when available on

iOS 18, macOS 15, tvOS 18, watchOS 11 and visionOS 2, and now you'll presumably just be

able to say, please check for OS 26. And that covers everything now.

I think that's one practical advantage for developers of the naming system changing.
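To make that concrete, here's a rough sketch in Swift. The version numbers are illustrative, and note that under the unified scheme you still list each platform, just with the same number everywhere:

```swift
// Before: each platform pinned to its own version number.
func canUseNewAPIsOld() -> Bool {
    if #available(iOS 18, macOS 15, tvOS 18, watchOS 11, visionOS 2, *) {
        return true
    }
    return false
}

// After the unified OS versioning: one year number across the board.
func canUseNewAPIsNew() -> Bool {
    if #available(iOS 26, macOS 26, tvOS 26, watchOS 26, visionOS 26, *) {
        return true
    }
    return false
}
```

The win is not fewer clauses but that you no longer have to remember which version of each OS shipped in which year.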

It will take us a little time to get used to that naming system change. But I think

it's a good thing because generally, you do want to, when you're doing those availability checks,

you do want to set everything with every platform at a certain year. So I quite liked that. And I'll

include a link to that blog post in the show notes again. Yeah, no, I think that makes perfect sense,

the change to years. Though 26 is a bit odd, since it's not 2026 yet. I can see why, because when it

hits the mainstream, it's going to be 2026. But then I'd argue you should probably have

also called it WWDC 2026, right? No, I think this makes sense to me.

WWDC is very firmly in 2025. But the operating systems 26 don't actually release until September

and October this year, at which point we are much closer to 2026 than we are to 2025. And so I think

it does make sense that the majority of the time that iOS 26 will be available, not in beta, will

be in the year 2026. Yeah, I mean, this is a lot like, you know, $9.99 pricing. It's like, it's

targeting psychology more than anything else. I would have made the same decision.

If it were a June release date, I would probably go with 2025. But given that we get access

to it way in advance of the people who are actually going to eventually use it, I think it's

the right decision, given it's September and October when they normally ship. All right.

All right. I thought the conference was good. Yeah. The State of the Union was interesting.

I actually didn't quite make it to the State of the Union last night, but I watched it today.

Interesting changes to lots of the frameworks. We've got the new changes to Xcode, which are

good. One that I didn't expect was the Swift Assist feature to be in this year's version. I

thought that might be a little further delayed. And also for it not to rely only on the Apple

model for prediction. And actually it's more than prediction. It is.

It's not quite an agent, but it is a chat that you can have with an LLM about your code that

will also write code for you. But you can go backwards and forwards in that chat. It's

definitely more than code completion. Yeah. And I didn't actually see that called out. Is that

the Swift Assist equivalent? Because I don't think they mentioned Swift Assist.

No, I don't think so. I think the words Swift Assist were very deliberately not mentioned

yesterday. At least I didn't hear those two words said at all. But from what was previewed as Swift

Assist the year before, this feels like an evolution of what that was.

Yeah. Yeah, definitely. It's great to see that they have the on-device models and stuff and

that you can very easily interface with them. I played around with it a bit. Foundation Models,

it's super easy to use. I'd be interested to see the quality of it and what it can actually do.

Again, I just did a trivial thing and it's hard to judge them from just a couple of things. You

really need to gain experience with those to be able to judge them. But you can see how

in the future with more advancements, this is a really interesting and compelling

thing to have that you don't need to reach out. You don't need to fiddle with API keys,

that it runs on device without a network connection, without shipping your data off.

That's just massive benefits. And if it has quality that's in the ballpark, I think

that gives it a lot of weight. Even if it's not maybe the most cutting edge model, I think there's

a case to be made that it's a trade-off you might want to make in those cases. Plus,

you can still do the other thing if you really feel like you need the extra oomph, if it exists.
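As a rough sketch of what using the on-device model looks like — this is from memory of the WWDC material, so treat the exact type and method names as approximate and check Apple's FoundationModels documentation before relying on it:

```swift
import FoundationModels

// Requires the OS 26 betas; runs entirely on device:
// no API keys, no network connection, no data leaving the machine.
let session = LanguageModelSession()
let response = try await session.respond(to: "Summarise server-side Swift in one sentence.")
print(response.content)
```

That near-zero setup cost is the trade-off being described: even if the model isn't cutting edge, the privacy and convenience may be worth it.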

I'm much more excited about the coding support than the foundation models, just simply because

that's something I've been interested in for a little while in Swift coding and in non-Swift

coding, actually. I did a little experiment at the beginning of... Oh, no, it was the end of last

week, I think, that I did this. Are you familiar with Claude Code?

- Yes.

- Okay. So, Claude Code, if you're not familiar with it, is what they call an agent for writing

code with one of these LLMs, in this case, Claude. And I was curious to try something with it.

So, I actually got it to write me a Swift package this week, and that Swift package, I think,

well, it is in the Swift Package Index as we speak. And what I did was I spent about an hour

writing and refining with the help of some LLMs to help me do that, writing and refining a

quite detailed specification for the package. And then using a technique that I learned from

Peter Steinberger, who has been doing a lot of work with these agents at the moment,

I found it very interesting to basically pass this specification between two LLMs. So, I took it from

the one LLM and read it and made sure that it was correct and corrected a few things or added a few

things that I wanted this package to do. And then I passed it to another LLM and said, "Please find

me the weaknesses in this specification and ask questions about those weaknesses." And I took the

response of that and also then added some answers to those questions and then passed it back to the

first LLM and said, "Please integrate these changes into the spec." And ended up, after about an hour's

worth of work with what was quite a detailed specification for a Swift package to parse

a markup language called CriticMarkup, which is an extension to Markdown where you can

mark additions and deletions and substitutions in written text. So, you can do like a little

brace and then plus plus and then a word and then plus plus and then the closing brace. And it knows

that you want to insert that word into the Markdown document. Think of it a little bit like

Track Changes in Pages, but in a Markdown format. And so, I had it write this spec and then I gave

that spec to Claude Code. I just said, "Please implement this, including a full test suite,

including DocC documentation, including Markdown files that include examples of DocC documentation,

as well as just the documentation for the structures and classes themselves." And it went

away and thought about it for probably 45 minutes, something like that, and spat out a package.
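To make the CriticMarkup syntax concrete, here's a tiny hand-written Swift sketch — to be clear, this is not code from the generated package — that accepts insertions and deletions:

```swift
import Foundation

// Resolve CriticMarkup insertions {++text++} and deletions {--text--}
// into the "accepted" output. Substitutions, comments, and nested
// markup are deliberately ignored in this sketch.
func acceptCriticMarkup(_ input: String) -> String {
    var result = input
    // Accept insertions: keep the inserted text, drop the markers.
    while let open = result.range(of: "{++"),
          let close = result.range(of: "++}", range: open.upperBound..<result.endIndex) {
        let inserted = String(result[open.upperBound..<close.lowerBound])
        result.replaceSubrange(open.lowerBound..<close.upperBound, with: inserted)
    }
    // Accept deletions: drop the markers and the deleted text.
    while let open = result.range(of: "{--"),
          let close = result.range(of: "--}", range: open.upperBound..<result.endIndex) {
        result.replaceSubrange(open.lowerBound..<close.upperBound, with: "")
    }
    return result
}

print(acceptCriticMarkup("Hello {++brave ++}world{-- again--}!"))
// → "Hello brave world!"
```

A real parser has far more to deal with — substitutions, comments, highlights, and the nesting question discussed later — which is exactly why the specification phase mattered.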

And if you look at... So, I was actually watching it do its work for most of the time.

And you could see that it was not only building and running the test suite, it was then kind of

refactoring the code it had written for better testability, and then writing a test to test

it and running the tests to make sure they pass, writing documentation. And after 45 minutes,

I had a package that was working, which is just kind of remarkable. It needed literally no intervention

from me at all. Now, I would want to test that package further. In fact, I hesitated before

adding it to the index because I haven't used that package in anger, but I have used it in

a little demo application. And of course, the tests are there and the tests do what they say

they will do because there's tests. So, it was very interesting to do that. And so, the reason

I mentioned that is that what has shipped in Xcode 26 is a step towards that, but already,

and this stuff is moving so fast at the moment, that already what was released yesterday is a

little bit behind the state of the art, because that kind of stuff came out a few months ago. And we are

talking like literally just a few months ago, but it makes a real difference for the LLM to be able

to interact with your tools in terms of, I think it's time to run the test now. So, I'm going to

run the tests and I'm going to look at the output from those tests and see if they pass. And then

if they don't pass, I'm going to look at the error message that it gave me and then rewrite the code.

And so, it's a real huge step along that path, which, yeah, I thought it was a very... I wanted

to experiment with it and I felt that was a decent experiment that I came up with.

Yeah. No, it's remarkable the capabilities they've gained. I mean, we've touched on this a

couple of times. We have slightly different points of view on the whole AI stuff, although I think a

lot of it is also aligned. I think it's... Initially, my main complaint was that it doesn't

actually work or didn't seem to work, but I had to revise that a bit. I tried independently. I

tried actually something just before Peter Steinberger came out with his blog posts and

the video about trying this. I had just tried Zed; the Zed editor has integrated...

Yes.

...Claude thing and I used it to write a script. When we ran the Android and Wasm compatibility,

we did a comparison against the Skip tools page where they list... They had run a compatibility

test themselves as well. And I wrote a little script to compare their list of results versus

our list of results. And you can imagine writing a script like that is not a big task. And it took me

a little less than 20 minutes to do it and compare the results and verify that it is okay.

And even before I started, I thought maybe that's something I should try, compare how I do versus

how Zed with Claude does. And obviously, if I was to do that, I had to do it myself first because

what's the point of watching the thing, and then it's harder to do it yourself when you have a

finished thing. But interestingly, once I'd finished writing the script on the same day,

I pulled up Z and then I was looking at the prospect of describing to the thing what I wanted

to do after I'd just done it. And I couldn't even find the energy to start writing a prompt

to tell it what I wanted to do because I had it, I had just done it. So I said, "No, I'm not going

to do this. This is silly." But then the next day I was sort of fresh and I thought, "All right,

let's, you know, fresh new day, let's go and do it again." And honestly, I'll admit,

I didn't think this would actually work at all because from everything I'd seen, these sort of

wander off and do stuff. And like a Swift script, this wasn't even a package. This was a Swift

script. I think I might've turned it into a package. I don't remember. Anyway, it wasn't

a trivial thing. I mean, it wasn't a super complex thing, not anything like what you've done or what

Peter has done. But still, I thought this would be too complicated for this to understand what I want

because there were a couple of things that were kind of tricky. Like the two input formats were

JSON, but they were different. They weren't structurally the same. Not even, you know, one

was an array, the other was an object with leaf items and stuff like that. So it was more than an

absolutely trivial task. And I didn't think it would manage. And it did manage. And it managed

in a form that was acceptable. I had no complaints. I ran a couple of additional

iterations, told it, "Look, we should refactor this a bit and this a bit."

I was done after 10 minutes. So I was faster with the thing. I think the only thing

where I benefited from having done it before myself was that I caught a couple of edge cases

while I was doing it. And that's the problem I have with these tools. I wonder how much of an

edge you lose thinking about your problem deeply when you're not actually doing it. I noticed when

I ran the results after I manually compared them that the numbers seemed low. And I realized that

the URLs that were compared were actually, they had different casing. And that's a rather trivial

thing to realize and know, but also because we've had this situation in the past. So all of our

comparisons with URLs are always lowercase and then compared for that reason. And so I spotted

that. But I also, I think I spotted that faster because I had actually just written the map and

the comparison. And you are in the vicinity of the issue when you're in that mindset.
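The casing issue described here is easy to picture with a sketch like this — the URLs are made-up examples, not real results from either list:

```swift
// Two lists of package URLs from different sources that should match,
// but the first entry differs only in casing.
let ourResults = ["https://github.com/vapor/vapor",
                  "https://github.com/apple/swift-nio"]
let theirResults = ["https://github.com/Vapor/Vapor",
                    "https://github.com/pointfreeco/swift-parsing"]

// Normalise to lowercase before comparing, as the Package Index does
// for all URL comparisons.
let ours = Set(ourResults.map { $0.lowercased() })
let theirs = Set(theirResults.map { $0.lowercased() })

print(ours.intersection(theirs))  // packages both lists agree on
print(ours.subtracting(theirs))   // only found on our side
```

Without the lowercasing step, the vapor entry would be reported as missing from one side, which is exactly the kind of silently-low number that's easier to spot when you've just written the comparison yourself.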

I think if I just sit down and, "Oh, I want this compared." And then I look, "Oh, right."

You're more tempted to accept what you get. And I had the same effect with, it's sort of like

the consent dialogs, and you start clicking on accept. Like also when it wants to confirm,

"Can I run this tool?" Initially I had it set to always confirm. And then at some point you give in

because you have this, your resistance gets worn down. "Why do I always need to accept?" "Yes,

you're just going to run Swift Build again." "Okay." So I click accept. And then at some

point I said, "Well, just run it. Just God damn it. Just leave me alone and run it."

And that's the thing that I fear most with the actual use of the tool is that my vigilance and my

awareness of what I'm actually doing is going to fade. It's like taking over from a self-driving car.

You're never going to be able to take over when actually it gets critical, right? Because you're

not in the system. You have no state of the operation. Like what state is the system in?

Then you're supposed to jump in and take over. And that's what I find so hard. And I feel like

I really fear losing capabilities and losing ownership and control over the system as a whole.

And I do realize that we've had this before. Every tool sort of removes you a bit from control.

People used to hand roll assembly, right? I have no idea how that actually works.

That's why I was interested in actually doing it. I wanted to do that experiment because I felt like

I can't just sit there and complain that these things don't work if I've actually never tried

it in earnest. So I wanted to understand because I don't think people like Peter

are making this up, right? And he's not the only one. There's others who've been using this. And I

think there's a scale. There's people who just blindly quote unquote vibe code and just have no

sense of what's going on. And there's people like Peter who are, he's an expert, right? And he's,

if you give me a jackhammer, all I'm going to do is ruin whatever I'm doing, right? It's not the

right tool for me, but there's people like Peter, you can hand a tool like that and he'll use it

responsibly and get amazing results out of it. So I think there's this aspect as well.

- It's definitely, there's definitely a skill to using these tools. That's a hundred percent

correct. And I've been using it for the kind of script that you were writing to compare those

results that you talked about. I've been using it for that kind of task for many months now. And I

was a hundred percent convinced that it was extremely good at writing that kind of code.

This is the first time that I've ever told it to write something from start to finish. And I think

your point about the edge cases and the things that you discovered while you were

writing your own version before you asked the AI to write it. I definitely found that with this

project through the specification phase, like the very first version of the specification,

when I read through it, it didn't make sense because I hadn't thought it through. And I'll be specific,

otherwise it's going to be very difficult to talk about. CriticMarkup has the ability to nest

CriticMarkup inside CriticMarkup. And actually when I told it to write the spec the first time,

I didn't realize that was even a thing, but it figured out that that was a thing in CriticMarkup.

And then it just started making up how that should work, right? Because I hadn't

told it otherwise. Well, I didn't know that it was able to be nested. And so I

hadn't thought about that at all. And so in the revision to the specification, I made some rules

about what should happen with the nested markup. And actually what I said is that it shouldn't

support nested markup and it should just, anything inside a markup piece should just be treated as

the output. So that if there was nested markup, then it would just verbatim echo that as the

original text. But that thinking through of that specification process was really, really important.

And I think that's why it took me longer to write the specification than to write the code.

Yeah. Yeah. I mean, it's certainly a shift in skillset. I'm not sure

I'd be happier writing specs than writing code. That's the other thing. I mean, that probably

really depends on what parts of the system you like. Like I love writing

tests for instance, and that's, I'm probably in the minority. So we all have different aspects

of a system that we like, right? And that's why I'm a bit worried about offloading

these parts to the LLM is I often write a test and then I think of adjacent tests as I'm writing

that test. And I feel like I fear that if I don't write tests, I lose that branching out and

discovering edge cases that way. It might still happen in other ways. I'm just worried about that.

So one of my biggest complaints about this whole AI, and I still resist calling it that,

the LLMs, is that I didn't trust them to work well enough. I think there are still areas

where they're too blindly deployed and taken as truth, especially when the

results aren't verified and the operator isn't actually able to verify them,

and still they accept the results. I think that's a huge problem. And then the other big

problem is the provenance. Where is all this knowledge coming from? And we've discussed this

in the past. And we actually see the effects of this with the Package Index. We've been getting hammered by

AI crawlers since the end of last year, and they come in waves and they're just slurping up all the

data they can find. And it's not theirs. It's not their data. I talked to my partner over the weekend

and I came up with this description. It's like people are using steroids and everyone else is

getting acne, right? Because they are using all the data that's out there and boosting their own systems.

But everyone else is paying the price. If I actually insisted never to use an AI system,

LLM system, to help my work, I'd still be affected. And not just by the down-the-line effects, because

people use LLMs to create fake news and whatnot. We see all these effects already like society-wide,

but actually tangible effects where our system is getting hammered by AI crawlers and our systems

are running hotter. We'll probably have to pay more for traffic-related plans at some point,

if not already, because of that. So we are bearing a cost and it's not the companies that are

actually benefiting from the sales of these AI subscriptions eventually that are actually paying

for it. And that's not even talking about the intellectual property that's getting copied,

like the copyright violations that are happening. And I'm just wondering how that's going to play

out. I think the IP issue is much more of a problem outside of code actually, because

for example, in art and writing and all of that stuff, because the majority, if not all of the

code that's being slurped up is open source. So there is very little IP. Although I had a big

issue with this when they first came out, which was they were clearly trained on all open source

code regardless of open source license. And so that includes a whole load of GPL code. And

at what point does that GPL code not get covered by the GPL anymore?

Yeah.

It doesn't. They are way past the point of verbatim spitting out bits of code like they

did at the very beginning, which is how we know it was trained on GPL code, because it would spit

out GPL licenses occasionally. But we've moved way further down the road and they're no longer

working in that way at all. But there's definitely a huge IP issue with books and with art and with

all sorts of other data that is harder to talk about with code because so much of it is open

source. Whether that difference in the licenses is important or not, which I still believe it is,

but I think it's harder to talk about that with code than it is with the other ones.

Yeah. I mean, I get the point that legally it might be okay. I mean, but we've already had,

we talked about this in the past quite a bit. Open source has been a pillar of software

development and it's not really compensated for the value it generates. And there's yet another

facet to it where it is propping up yet another pillar of the industry and is not getting

compensated. Instead of actually turning back and helping out with one aspect of it, there's yet

another front being opened where the value is siphoned off and is powering big tech companies.

It's just not all right. It's really not okay. And the problem is all the mechanisms to control

it are much, much slower than the development. There's no way any legal system or any regulatory

body is going to be able to act fast enough before everything has happened already. The

development is so fast. The EU won't even have had its first meeting about legislation by the time this is irrelevant. Like you say, it's irrelevant now because they've probably moved on to code that isn't GPL covered. And even if they haven't, who cares? No one cares at this point. I think that code is still in there; it just no longer gets spat back verbatim.

Yeah. They've just suppressed the signal or the means of discovering it.

I'm not sure it's even suppressing it. Anyway, we should move on, because we're already 40 minutes into recording the podcast and we haven't recommended a single package yet. So I think we should put this discussion on hold, because it's a long one that we will never find the end of, especially as we have disagreements about some aspects of it. We're aligned on some of it, but we do also disagree, so I'm not sure we'll ever get to the end of this conversation. Let's do a reduced, brief package recommendation section so that the podcast isn't an hour long. Let's just do one package each this week.

That sounds good. Let's do that.

I will kick us off. Yeah. My only package this week is called package-swift-lsp, by Vasiliy Kattouf. It is a Language Server Protocol implementation for the Swift Package Manager's Package.swift manifest files. If you're not familiar with LSPs, I'm not familiar with the internals either, but I know how to use them. They are a way for code editors and other tools to interface with a language server to find out what text is valid in the syntax of whatever language it is, in this case the language of a package manifest, and to suggest what may come next. It's used in VS Code, and in the Zed

editor that you mentioned earlier. And I believe it's also used in Xcode for code completion as

well. package-swift-lsp has now been integrated into both Zed and VS Code, but unfortunately not Xcode yet, because I don't think there's a way to integrate third-party LSPs into Xcode. It gives you code completion on the syntax of a package manifest, including package completion. So you can add a dependency, start typing the URL, and it will suggest a valid package based on the characters you've typed. And that data is actually coming from the Swift Package Index's list of packages file, so they give us a little thank you in the acknowledgements of the README. They acknowledge both Matt Massicotte's LanguageServerProtocol library, which was the foundation for their project, and the Swift Package Index for providing great package data for URL completion. So if you are writing a package manifest with Zed or VS Code and have the extension they provide, you get fantastic completion within your Package.swift file, better than Xcode.
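For anyone who hasn't written one recently, this is the kind of file we're talking about. The package name and dependency below are purely illustrative, but the URL string inside `dependencies` is exactly the spot where an LSP like this can offer completions:

```swift
// swift-tools-version:5.9
// A typical Package.swift manifest, the file package-swift-lsp provides
// completion for. The package name and dependency here are only examples.
import PackageDescription

let package = Package(
    name: "MyServerApp",
    dependencies: [
        // While typing this URL, the language server can suggest real
        // packages, backed by the Swift Package Index's package list.
        .package(url: "https://github.com/vapor/vapor.git", from: "4.0.0"),
    ],
    targets: [
        .executableTarget(
            name: "MyServerApp",
            dependencies: [.product(name: "Vapor", package: "vapor")]
        ),
    ]
)
```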

>> I think that's really remarkable, that these editors in a couple of cases are eclipsing Xcode

because they're so extensible, right? That's really fantastic.

>> And no AI at all involved in that one. >> It was actually my pick too, first on my list
of picks. So there we go. >> Hard luck to you.

>> Yes, yeah. And it was actually via this package, while trying it out, that I ended up using Zed again. I had looked at it in the past, and that's also how I ended up in that whole LLM experiment, writing that script. A funny little sequence of events.

>> I've been using Zed recently for the HTML and CSS work on the Swift Package Index, actually, because you definitely don't want to do HTML and CSS work in Xcode. It's not what it's designed for. I used to use, well, I still do use, Visual Studio Code for that, but I've been giving Zed a try. It's an editor written in Rust, and it's extremely good.

So what package are you going to pick? >> For my pick, I'm not going to talk about LLMs. My second pick is a package called Probing by Kamil Strzelecki. And this is a really interesting

package. It essentially gives you debugger-like execution control via checkpoints in your code. What you can do is write probe points into your code that you can advance to. Imagine setting a breakpoint, except you spell it out in code: stop here. So you have named scopes, and then in a test you can advance to that point, as if you'd hit continue and then hit the breakpoint, and then you can run asserts on the state. And I found it really interesting that that's possible, and the mechanics of it.

I haven't dug super deep into it. I've tried the example that it ships with. So there's an example

that fully demonstrates the capabilities. It's a little test package with an application that

has probes put into it, and then tests that exploit those probes to advance to this position,

and then run asserts on the state. There's one area where I can immediately see this being super useful. We've had this a couple of times, where something failed, and it only failed in CI. And especially at the time, our CI ran really slowly, so a run took like 15 minutes, and you really wished you were able to attach a debugger to a test and see what's going on. You might know a test is failing, and that test calls a method, and that method is quite large. It might call other methods, and you can't pinpoint where the failure is. If you can't reproduce it locally, your only chance would be to refactor the big method into smaller ones that you then test individually, and you can imagine how much work that would be. But with this, you don't need to refactor; you just put probes in. Then in your test, you can run to the probes and assert on the internal state at that point, and pinpoint the problem much more easily. Actually, more than making it easier, it would give you a chance to do it at all without a local reproducer or that big refactoring. So I think it's a super interesting package for that sort of thing.

So if you have that situation, or are generally interested in that,

give that a look. It looks really, really interesting.
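To make the checkpoint idea concrete, here is a minimal hand-rolled sketch of the concept. This is not Probing's actual API; all the names here are invented for illustration, and the real package is far more capable. It just shows the mechanic: code under test signals a named probe point, and the test waits for it before asserting on internal state.

```swift
import Foundation
import Dispatch

// A minimal, hand-rolled sketch of the "probe" idea: a named checkpoint
// that production code signals and a test can wait for, then inspect state.
// These names are invented for illustration; Probing's real API differs.
final class Probe {
    private let semaphore = DispatchSemaphore(value: 0)
    let name: String
    init(name: String) { self.name = name }

    // Called from the code under test at the point of interest.
    func reached() { semaphore.signal() }

    // Called from the test: blocks until the probe point is hit.
    func waitUntilReached() { semaphore.wait() }
}

final class Importer {
    var processedCount = 0
    let afterParsing = Probe(name: "afterParsing")

    func run(on queue: DispatchQueue) {
        queue.async {
            self.processedCount = 3   // imagine a big, hard-to-split method
            self.afterParsing.reached()
            // ... lots more work would follow here ...
        }
    }
}

// In a test, you "run to the probe" and assert on internal state
// at exactly that point, without refactoring the big method.
let importer = Importer()
importer.run(on: DispatchQueue.global())
importer.afterParsing.waitUntilReached()
assert(importer.processedCount == 3)
print("state at probe \(importer.afterParsing.name): \(importer.processedCount)")
```

The semaphore gives the ordering guarantee: the test only reads `processedCount` after the background work has signalled the probe, which is the same "run to the checkpoint, then assert" flow described above.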

Well, that was a slightly shorter than normal package section, and we should wrap it up there, because we have now been going for 48 minutes. I think we'll have more to say on

WWDC and the changes coming in Swift 6.2 and any other stuff that gets announced this week that we

haven't yet watched the videos for in our next episode. But we should leave it here for today,

I think. Yeah, let's stop it there. Thanks, everyone. And see you next week. Next time,

rather. Next time, yes. In three weeks, something like that. All right. Take care.

Bye. Bye-bye.

Bye-bye.

Creators and Guests

Dave Verwer (Host)
Independent iOS developer, technical writer, author of @iOSDevWeekly, and creator of @SwiftPackages. He/him.

Sven A. Schmidt (Host)
Physicist & techie. CERN alumnus. Co-creator @SwiftPackages. Hummingbird app: https://t.co/2S9Y4ln53I