25: ChatGPT API exploration, SPI Playgrounds, Manifest validation, and package recommendations

One of the things that I've been using more and more in day-to-day life is a whole load of AI tools. You know this, Sven, of course, but the people who are listening do not know this. The previous episode of our podcast had some AI generated speech in it, which is something we haven't done before.

It was everything that I said, wasn't it?

- Yes, Sven, well, you can say whatever I want you to say now, Sven.

So it probably wasn't the bits that you thought were AI generated.

We had a difficult edit with the previous podcast that we put out, and there was lots

of stuttering and chopping and changing between bits of audio that you may have heard.

But what you probably didn't hear is the couple of bits of my audio that I replaced with AI

generated audio.

And obviously, everything in the news is about AI these days.

And what I've been doing is playing with various different bits of it, both ChatGPT and the large language model text generation tools, and some of the image generation tools. I've been playing with those a little bit over the last few months.

And what I've been doing is I've been keeping a list of everything practical that I've done

with one of these tools.

And that list is growing quite steadily now.

Obviously when you first get hold of these things, you ask them questions and you start

playing with them and you start trying to break them.

Certainly I tried to break it many times.

But what I've actually been doing is finding the things that it's really good at and that

are actually saving me time.

And before we get into talking about it in the context of the Swift Package Index, there's one thing which I have found it really good for, which is critiquing writing.

I don't think ChatGPT is very good at actually creating writing.

It's certainly better than any other mechanized or computer generated text that we've seen,

but actually, compared to human written text, I don't think it's good enough yet. Certainly as somebody who cares about the words, I'm not happy to use any of the writing that it's produced for me yet. But I have found something that it's really good at.

So for the last three or four weeks, I've put my comment at the beginning of my

iOS Dev Weekly newsletter into ChatGPT and asked it to critique my writing.

And it comes up with genuinely quite interesting and useful suggestions.

And I don't take them all, but I also wouldn't take them all if I asked a human to critique my writing.

But every week it's come up with three or four that I've ended up taking and rewording in my comment,

and it's making, I think, those things better.

So how does that work, actually? Do you paste in like a paragraph and it gives you a variant of that paragraph?

Or when you say "critiquing", how does it actually do the critiquing?

No, it's much better than that.

The prompt I've been using is very simple.

The prompt is "Critique this writing." That's it, three words.

And then you enter that and it says, "Please enter the writing you'd like me to critique." And then you paste in whatever the text is.
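As an aside for anyone who wants to script that same flow against the API rather than the web UI, here's a minimal sketch in Swift. The endpoint and payload shape are OpenAI's standard chat completions API, but the model name is just an example and error handling is omitted, so treat it as a starting point rather than production code:

```swift
import Foundation

// A minimal sketch of sending a "Critique this writing." prompt to
// OpenAI's chat completions API. The model name is an example; error
// handling and retries are omitted for brevity.
struct ChatMessage: Codable {
    let role: String
    let content: String
}

struct ChatRequest: Codable {
    let model: String
    let messages: [ChatMessage]
}

struct ChatResponse: Codable {
    struct Choice: Codable { let message: ChatMessage }
    let choices: [Choice]
}

func critique(_ writing: String, apiKey: String) async throws -> String {
    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(ChatRequest(
        model: "gpt-3.5-turbo",
        messages: [
            ChatMessage(role: "user", content: "Critique this writing."),
            ChatMessage(role: "user", content: writing),
        ]
    ))
    let (data, _) = try await URLSession.shared.data(for: request)
    // The reply is nested under choices[0].message.content.
    return try JSONDecoder().decode(ChatResponse.self, from: data).choices[0].message.content
}
```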

And I can give you some examples.

So last week in iOS Dev Weekly, I talked about the AR, VR, MR headset, and how I feel about those kinds of rumors, and whether it makes sense for Apple, and that kind of thing.

- Yeah. - And let me give you

a couple of things that it suggested.

So it said, for example, in the second paragraph,

you might want to restructure the first sentence

to avoid repetition, and then gave me the bit

of repetition that it was talking about.

- And that was correct.

There was actual repetition in the sentence.

- There was repetition, yeah.

And I fixed it, yeah.

And again, I know you have fears about its accuracy, and I do as well. And what do they call it? They call it hallucination; I would call it making things up.

And with this kind of use, that just isn't a problem.

Because if the repetition wasn't there,

then I just wouldn't correct that point.

There's nothing that can really go wrong.

- Yeah, I have a much better feeling

about all these AI tools when they operate on your input.

Because it's immediately obvious.

Because you're familiar with it, right? You can immediately cross-check.

It still helps you, you know, but you don't have the burden of asking about some obscure thing you don't know about, and then getting an answer you don't know what to do with, because you'd need to do the same amount of work double-checking what you just got.

Whereas when you input something that you have created and you have it augmented, it's much easier to accept the result and immediately verify it, I find.

- Yeah, absolutely.

Another piece of criticism it gave me was,

it said in the fifth paragraph,

you might want to provide a quick reminder

of what, in quotes, this announcement in 2017

is referring to.

And actually, it made a great point,

but what it didn't know is that those words

were a hyperlink.

So, obviously, that provides the context.

But the fact that it figured that out,

and then you'll like this one,

the final point was, lastly,

I love how you ended the piece with humor.

It gives the reader a reason to smile and keeps the tone lighthearted.

So have you given it a name yet, now that you have such a nice relationship with your editor?

It is basically doing the job of an editor.

Yeah.

I could not justify an editor for iOS Dev Weekly.

So in that context, this is a great second place.

And I've been really getting value out of using this tool.

That's really nice.

You might ask, why are we talking about iOS Dev Weekly and ChatGPT on the Swift

Package Index podcast, but there is actually a reason.

I just posted a couple of threads in our discussion forum that write up some of

the stuff that we've been talking about.

In fact, I think we teased it a little bit a couple of podcasts ago,

saying that we'd been thinking of a couple of potential uses of this kind

of technology in the package index.

And I spent a little bit of time today

writing up what we've been thinking so far,

and actually something that we haven't even yet discussed,

but is something that's been on my mind.

So the first one is summarization.

These large language models are pretty good

at taking a piece of text and distilling the information

into something usable.

And one of the problems we have with the package index

is that we have various bits of metadata that describe what the package does, but they're

all either too short or too long.

So we have obviously package name, which is really useful, but generally one or maybe

two words.

We have an optional description that comes from GitHub's repository description, which some people use and some people don't. And it tends to only be maybe five or six words, something like that, to describe the repository.

And then the big one is the readme.

And the readme really should contain a description of what the package does.

But the readme also contains a whole load of other things.

Like, for example, people generally put installation instructions in there. They put maybe some code samples and examples in there. Maybe they put a license agreement in there. Lots of stuff that we don't necessarily need to determine the purpose of the package.

And so what I did was have a little play with the ChatGPT API and try to write a prompt to generate a summary that specifically describes the purpose of the package.

And I gave it a whole load of README files, unedited. So I didn't exclude any text from the README files. The only thing I did do, and I should mention this in the thread actually, is that some README files were too long for GPT. It has a limit on the number of tokens it can accept, and a token is about three quarters of a word on average. So in those cases, I just clipped it to the maximum number of tokens and threw away the rest.

I didn't do any kind of human editing on it.

Yeah, just hoping that the information is all front-loaded in the README.

Which it should be, because the purpose should be the first thing.

If it isn't, you've got a problem.

And there are definitely lots of problems in READMEs on the package index.

So, I clipped anything that was too long,

and I passed everything through the same prompt

and just let it spit out what it thought.

And through tweaking that prompt,

I managed to get it into a fairly good state where it was generating decent summaries.
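As a rough sketch of what that pass looks like: the token budget below is approximated with the three-quarters-of-a-word rule from above, the limit itself is an assumed value, and the prompt is a paraphrase rather than the exact one from the discussion thread:

```swift
// A sketch of clipping a README to an approximate token budget before
// summarization. A token is roughly three quarters of a word, so we clip
// by word count; the 3,000-token budget is an assumed value, and joining
// on spaces flattens the README's line structure, which is fine for a prompt.
let tokenBudget = 3_000
let wordBudget = Int(Double(tokenBudget) * 0.75)

func summarizationPrompt(readme: String) -> String {
    let words = readme.split(whereSeparator: \.isWhitespace)
    let clipped = words.prefix(wordBudget).joined(separator: " ")
    // Paraphrased prompt; the real one went through several rounds of tweaks.
    return """
    Determine the purpose of the Swift package described by this README \
    and summarize it in a short paragraph.

    \(clipped)
    """
}
```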

And if you look at the discussion that's on our SPI Discussions Board, which is on our

main repository, you can see I've included some of the generated summaries.

Yeah, I looked at those and they look really great.

The thing I haven't actually done is double check against the packages what I think the description should be, but they look so reasonable. And there's a couple of packages that I actually know, and it's spot on for those. Like SemanticVersion: actually, I think that pretty much lifts the one sentence we have in the description. ArgumentParser is very clearly spot on. Alamofire is the other example.

I thought these were really good, and they are short: three lines, a paragraph at most.

I think the results are really promising from those examples.

I wonder if there are edge cases that sort of make it break and we just haven't found them yet.

But these examples, like the six or seven of them, they look really good.

Yeah, I'm certain there are.

And I'm certain that also, because GPT is not very repeatable,

like you can ask it the same question lots of times,

and it will give you lots of different results.

And I tried to engineer the prompt a little bit to remove some of the things that it was doing. For example, as soon as I told it to determine the purpose, it started every paragraph with "The purpose of blah, blah, blah", which of course you would expect it to do, because this is exactly what you've asked it to do. But I had to put in the prompt, "Do not begin the paragraph with 'The purpose of'."

[LAUGHTER]

And then what it started to do, not always, but sometimes, was put "Summary:" and then the summary. So I added that to the prompt as well.

And I'm sure there are more and more of those things that we'd find over time that we would need to address. I don't think that prompt is complete.

But those summaries that are generated are completely unedited.

I haven't touched them.

They look really good.

Do you know what happens when there is no README and all it has to go by is very minimal package name information? Does it actually tell you "I can't do anything", or does it then conjure up something out of thin air?

- That's a great question.

If we were live recording a podcast,

I would test that right now,

but that's a great question, I don't know.

But actually, this does bring me on to another point,

which is really serious.

We didn't write these packages,

and for us to ask an AI to generate a summary

of somebody else's package,

that is a little bit of a stretch.

I don't think it's a terrible stretch, and I don't think anybody will get terribly upset, but it is a little bit of a stretch in that we're representing somebody else's package in a way that they didn't necessarily approve.

And so what we should do, and I've written this up in the thread as well, is put a key in the SPI YAML file, where if you put your own description in there that is approximately the same length as what we're trying to generate here, we will use that over anything AI generated. So for any package author that just wants to take complete control of that, this will absolutely be part of the feature if we implement it.
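To be clear, that key doesn't exist yet. As a sketch of what it might look like in a package's .spi.yml file, with the key name being entirely hypothetical:

```yaml
version: 1
# Hypothetical key, not yet implemented: an author-provided summary that
# would take precedence over any AI-generated one.
description: >-
  A package for parsing, comparing, and manipulating semantic version
  numbers according to the Semantic Versioning specification.
```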

Should we consider making it opt-in in general? So you opt in either to having it AI generated, or, you know, opt in by providing your own paragraph. That might also take care of costs; I've seen your estimate of how much it would cost to run it once across all packages. It would also make for an easier ramp-up of the whole feature if it was initially just opt-in based, because the number of packages actually affected would be much smaller, and it would take away that problem of people potentially not being happy about it.

So there's an argument for that, for sure, and I think the answer to that depends on how we're going to use it.

I can see two main uses for this kind of data.

I can see, first of all, we could use this data in the UI.

We could make much nicer and much more usable

search result pages and category lists

and author lists and that kind of stuff

if we could actually have a couple of sentences

of description about each package.

Because at the moment, what we're relying on

is the description if there is one

and the package name, and that's it.

So I think if we use it in the UI,

it would be,

- If it's all of them. - It would really

only be worth doing that if we have them for everything.

Yeah, exactly.

There is another use though,

which is not potentially in the UI,

and we could do both, or we could do neither,

and we could do one or the other.

We could use AI-generated summaries

as an additional indexable search field

without actually displaying it in the UI at all.

So I've just done a little bit of testing today with this,

and this might be the way to solve our word stemming problem.

So "SemanticVersion", all one word, camel cased: in every case, it split that up into "semantic version", with a space. And that could help solve some of our search issues.
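To illustrate why that matters for stemming: the summaries effectively hand us, for free, the kind of camel case splitting we would otherwise have to implement ourselves, something like this hypothetical normalizer:

```swift
// A hypothetical sketch of the camel case splitting that the generated
// summaries give us for free: "SemanticVersion" becomes "semantic version",
// which a full-text index can then match on either word.
func splitCamelCase(_ identifier: String) -> String {
    var words: [String] = []
    var current = ""
    for character in identifier {
        if character.isUppercase && !current.isEmpty {
            words.append(current)
            current = ""
        }
        current.append(character)
    }
    if !current.isEmpty { words.append(current) }
    return words.map { $0.lowercased() }.joined(separator: " ")
}

// splitCamelCase("SemanticVersion") == "semantic version"
```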

- Yeah, interesting, yeah.

Honestly, I think we should do it for everything. Unless, as we get further down this road, the mistakes that it makes are egregious, I think we should just do it for everybody.

And then if somebody doesn't like it,

we've got an immediate way to say to them,

look, this is how you opt out.

- Yeah, I wonder what edge cases there are that would be harmful. Is there even a scenario we can think of where it could dream up something really weird that isn't easily verifiable? The question is, what's the effect of that? Would you make a bad package decision based on that? I mean, you could argue that you shouldn't pick a package based off of one paragraph anyway.

- There's also the issue of disclosure. If we do this, should we disclose that these bits of text are AI generated?

Yeah, I feel like we should.

The question is how. Would it be a label or something? But I think it's really important.

Yeah, I just have this sense that I would want to know: did someone actually look at this, or was this machine generated?

It has a different feel to it, I think.

And I think I'd appreciate knowing where this comes from.

- But the question of whether to put that on the page

every time it's used or whether it's just enough

to have that in documentation somewhere.

That is a question that I don't think we can answer

in two minutes here on the podcast.

I think we actually need to look at some real, actual data.

- Yeah, and people should let us know.

I mean, you can stop us right here if you come screaming.

- Yeah, this is not even at an issue level yet. You know how you create GitHub issues when you want to do something? We have not yet created a GitHub issue for this.

This is purely a discussion at the moment and nothing is set in stone.

We may not do any of this.

Then, I have been thinking about another potential use of GPT for the package index. Recently, OpenAI announced that they were running a beta for ChatGPT plugins, which are, as I understand it, a way that you can provide information to the GPT language model in a structured way, which it will then combine with its existing language model. You can effectively expand its knowledge with your data. Quite famously, it's not connected to the live internet, and it has a knowledge cutoff date of September 2021. But this is a way that you can say, here are very specific bits of data that are more recent than that, that you should know about.

You can feed it structured data.

Yeah.
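For background, the plugin beta works by pointing ChatGPT at a small manifest plus an OpenAPI description of your API. As a sketch of what a Swift Package Index manifest might look like, with field names taken from the beta documentation as we understand it, and the URL and descriptions entirely made up:

```json
{
  "schema_version": "v1",
  "name_for_human": "Swift Package Index",
  "name_for_model": "swift_package_index",
  "description_for_human": "Search for Swift packages.",
  "description_for_model": "Search Swift packages by name, purpose, category, licence, and platform or Swift version compatibility.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://swiftpackageindex.com/openapi.yaml"
  }
}
```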

So I think this is interesting for the package index for, again, a couple of different reasons. If we started feeding it package information, including, for example, README files and some of the metadata that we produce. Yes, absolutely. We could even include things like compatibility information. Yeah. Categories, that kind of thing. There's a lot of stuff that we can, in a very structured way, tell GPT about. And I think that would unlock two potential uses. The first is that people who are just using GPT would then be able to talk about Swift packages with it, and they'd be able to discuss what kind of package they were looking for, without the package index even getting involved.

Oh, so this would enter the general domain? This isn't just seeding your own domain that you then query; this is available to anyone?

As I understand it, but I'm not sure about that. I'm not a hundred percent sure about that.

Okay.

If that is the case, that's not necessarily game over for it, but I would like it to be able to give links back to the package index. Because it doesn't really give you links at the moment; it's very reluctant to actually link to anything. So again, this is even further away from being a real idea. This is really just, let's have a quick look at this and see if it's even worth investigating.

But then the second use of this technology

would be to potentially have GPT-powered search

in the package index.

So instead of just running through Postgres' full text

search, you're actually asking GPT questions

and using the package information

that we've submitted to GPT through the plugin.

Yeah, I can totally see how that changes a lot, because right now we have the problem that we both need to implement any search extensions ourselves, you know, like "platform:linux" to filter on packages that are compatible with Linux, and then teach people the search language. Whereas if you pipe it in and this works, which might be a big if, but if that works, you cut that out on both ends, right?

People can just use natural language

in asking the questions as they would think they should.

And the machine sorts out the mapping to the results without us having to go in and extend, say, product types with macros when they get shipped, because it'll just appear automatically.

- And does that then lead into some kind of refinement? Because, again, another really powerful thing about these tools is that they have context. So you can refine search results, or refine what you're talking about with them.

I think it's interesting enough that I would like to have a quick look. Well, I've applied for us to be in the beta, first of all, which is the first step. And then, depending on how onerous it is to get the data in there, it might be worth us spending some time having a little experiment with this.

I think the other really big problem with using it for site search on the package

index is potentially the cost because you'd be paying for API calls per search, which

would be expensive.

Yeah, there's actually a use case that I just went through the other day where I wonder

if that would yield better results. And I was looking for a package that would allow

you to create images on Linux. Think of SwiftUI rendering out into a PNG, which you can do.

On Linux, obviously you don't have SwiftUI, you don't have Core Graphics, that sort of stuff. And it was really hard to come up with a search that would even give me any results.

You either get no results because you've been too specific or you get like dozens of results

that are really hard to whittle down.

And I could imagine GPT's deep inspection finding all these bits of information that we can't surface right now in our search. Because if you look for "image", you get flooded. If you look for "JPEG" or "PNG", there's just too much. The platform filter takes out a lot of that, but it's really hard to distill what I wanted into a search query as it stands right now, where I would hope that would actually be possible with a GPT interface.

And I wonder if that's actually correct. Maybe that's a good use case to try it out with once we

embark on it and explore this further.

I've now had hundreds of conversations with GPT about all sorts of stuff, and I can certainly tell you that kind of job of refining what you're talking about, it is extremely good at. Really interesting. I'm not going to say this is revolutionary, but it is a really significant development in computing.

Yeah, I mean, that is certainly clear. Exciting times.

I feel like I did a lot of talking there, I'm sorry. Let me maybe take over for a bit and talk about some other news that we have since the last time we recorded.

Which has actually been a while, hasn't it? We had a bit of a longer gap.

And there are two things worth mentioning. The first is that we have a new version of SPI Playgrounds, release 1.1.0, which came out, I think, a week or two ago now.

This version is macOS 13 only, so Ventura or higher, for uninteresting reasons: there were some API changes, and we couldn't figure out how to fix that any other way.

The old version is still available for download.

So if you are on an older macOS version,

you can still use that.

The only thing you don't get then is the couple of small fixes that the new version brings, and that is support for Mac Catalyst as a platform. And do note, when you want to make use of that, you need to also choose iOS as the build platform in the inspector on the side.

And thanks to GitHub user Ting for the tip on how to get that to work.

Initially, Mac Catalyst actually made the whole playground creation fail, because we didn't recognize that platform type. But then it also became possible to actually run playgrounds with it when you change that setting. And the default setting is macOS; that's why you actually need to go in and change it.

The other issue this fixes is around packages with plugin definitions; we also failed to parse these. And finally, there was a deprecation warning in the generated playgrounds, which was really just a little nuisance: you got the alert sign when you were working with a newly generated playground. Other than that, it had no real effect.

A bit of technical background: the app is also now based on the latest version of The Composable Architecture by Point-Free. And I really wanted to give Stephen and Brandon yet another shout out, not just for this library, but for the host of really great packages that they maintain. Because they somehow manage to not just maintain them and have really advanced features in them; they do it in a way that is backwards compatible in so many cases. And when it isn't, they have very detailed release notes that tell you exactly how to address any warnings and build errors you might be getting when you move to newer versions.

So I hadn't touched Playgrounds in quite a while, and The Composable Architecture had moved on to much newer versions, so there were big changes under the hood. We got loads of warnings making that first change, and it was really easy to sort out, despite these warnings looking quite scary.

And they're just doing a fantastic job with all the libraries,

moving them through the versions.

And bear in mind, these are 0.x library versions. TCA, The Composable Architecture, is just on the verge of being 1.0, and they already have this fantastic backwards compatibility in place.

Big shout out to them.

The second piece of news is that we have Swift Package Index manifest validation available on the website now. You are probably familiar with how CI services often have YAML files to control their operations, and they also often offer a web form where you can paste in your version and have it validated. We have that available now on the website. I don't actually have the URL to hand right now; we'll have it in the show notes. You can go to that URL, paste in your .spi.yml file, and it'll be validated. It'll tell you whether it's correct, and if it isn't, it'll give you a hopefully useful printout.

We're actually using the exact same Codable struct in that tool that we use on the back end to read the files. So what you get out when it fails is effectively a Codable decoding error, which sometimes perhaps isn't the most useful, but in the cases that I've tried, it was actually quite helpful in pointing out potential errors.
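A minimal sketch of that approach, assuming the Yams library for YAML decoding and a heavily simplified manifest struct; the real struct on the server has many more fields:

```swift
import Foundation
import Yams

// A simplified sketch of validating a .spi.yml manifest by decoding it
// with a Codable struct, the same approach the server uses. The fields
// here are a small subset; property names mirror the YAML keys directly.
struct SPIManifest: Codable {
    var version: Int
    var builder: Builder?

    struct Builder: Codable {
        var configs: [Config]

        struct Config: Codable {
            var platform: String?
            var documentation_targets: [String]?
        }
    }
}

func validate(manifest yaml: String) -> String {
    do {
        _ = try YAMLDecoder().decode(SPIManifest.self, from: yaml)
        return "The manifest is valid."
    } catch let error as DecodingError {
        // This is the Codable error mentioned above: not always the
        // friendliest message, but it points at the failing key path.
        return "Validation failed: \(error)"
    } catch {
        return "Validation failed: \(error)"
    }
}
```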

I also forget where the exact URL for this is, but I can tell people how they can find it. If you are the maintainer of a package on the package index and you go to your own package page, then at the bottom of the sidebar on the right hand side, there's a little section that says, "Do you maintain this package?" And if you click "Learn more" there, we have a whole load of information with different bits of customization and little badges and things that you can add to your README file. And at the very bottom of that page is now an online manifest validation link.

Yeah, that maintainer page has lots of

hopefully helpful information for maintainers to spice up their package, badges, documentation

links, all that stuff. Do you want to kick us off with package recommendations this week?

All right. I have a package that perhaps you've also picked, and that is swift-foundation, the Foundation preview package by Apple, and swift-foundation-icu, which comes in combination with it. This is the preview package of the Foundation library that was announced, I believe, at the Server-Side Swift conference in December last year.

I was in the audience for it.

You were indeed, yes.

So this is the open-source re-implementation of Foundation. It's starting out with just a number of types for now. There's an announcement blog post, which we will link to.

This includes all-new Swift implementations of, for example, JSONEncoder and JSONDecoder, Calendar, TimeZone, and Locale.

There are also other types that are newly available as open source,

which probably means that they aren't all Swift re-implementations.

I suppose in some cases it's just very intricate and battle-tested code that is not being re-implemented,

but just moved into the open source realm.

There's also FoundationICU, I guess that's Foundation Intensive Care Unit, which is dealing with internationalization on non-Darwin platforms. So that's obviously Linux, Windows, and so on. This is based on Apple's upstream ICU implementation, so I think this is Unicode and all that sort of stuff. The whole package is Swift 5.9 for now.

Interestingly, the tools version in the package is only 5.8, which actually allowed us to add it to the index. It's failing all the builds, because it has some checks to make sure it only compiles on 5.9. I'm curious why that isn't telegraphed via the tools version, but, you know, it at least allowed us to add it to the package index.

There's a promise of performance increases. Not just a promise; there have been some results posted in the blog post as well. I guess this is mainly from removing Objective-C bridging in the case of the Swift re-implementations. So, for instance, in the case of JSON decoding and encoding, there's a promise of 200 to 500% improved performance.

I believe those were the numbers.

I do believe on Linux we already profit from this right now, because on server-side Swift I think the JSON implementation was already not based on bridging, but I might be wrong. It certainly will help iOS and Mac apps.

Finally, it's worth mentioning that along with this comes a new Foundation workgroup to guide its development. And the goals for the first half of 2023 are quality and performance.

And a secondary goal is a request for community proposals to add new APIs, which is really interesting, because there have been a couple of areas in the past where people were hoping to get stuff into Foundation, but it was always difficult because it's just not a part of the project that was managed in the same way that Swift was. So there you go. I think the other thing that's really important here is

that a lot of the open-source packages that Apple have been producing so far

have been from scratch and a lot of them, not all of them, but a lot of them are

around server-side Swift and topic areas that are quite different from Apple

platform development, iOS app development, Mac app development, that kind of thing.

And Foundation is not. Foundation obviously is applicable to all of that

server-side Swift stuff because it is the Foundation but it's also the

Foundation underneath all of the Apple platform development and I'm not sure

the implications of that have been really emphasized enough by people that

I've seen talking about this. This is much more than just another open source

package from Apple. This is at the core of both Apple platform development and

in fact all Swift development. And I have no idea whatsoever how the discussions around this went; maybe it was very easy to do, but I would imagine it was not.

Certainly great to have it, and that extends Apple's rich catalog of packages on the index.

The last thing I want to say on Foundation is that this highlights a feature of the package index which I was very keen on as we were building it, way back almost three years ago. I don't know whether you remember the conversations we had around this, Sven, but we did have a back and forth around what we should put at the top of every package page. And

I felt quite strongly that it should be not the repository name, but the package name

that's from the manifest itself. And this is a great example of why I think that was

the right decision. Because on the package index page, we display this package as FoundationPreview, because it is a preview, but if you look at the GitHub repository, you don't see that FoundationPreview name anywhere, because it's hidden within the package manifest.
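That difference is declared right at the top of the manifest. Here's a trimmed-down sketch of the situation, not the actual manifest, with the product and target names being purely illustrative:

```swift
// swift-tools-version: 5.8
import PackageDescription

// The repository is named "swift-foundation", but the package name below,
// which the Swift Package Index shows at the top of the package page, is
// what the manifest declares. Products and targets here are illustrative.
let package = Package(
    name: "FoundationPreview",
    products: [
        .library(name: "FoundationEssentials", targets: ["FoundationEssentials"]),
    ],
    targets: [
        .target(name: "FoundationEssentials"),
    ]
)
```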

And so I think looking at it in the index, to me, it's an example of why it's clearly a better experience to look at a package through the index than it is to look at the GitHub page for a package, with things like us looking deep inside the metadata to pull out the package name specifically, and of course all the other stuff that we do as well.

One thing we might want to consider at some point, and I know we talked about this as well in the context of "Use this Package", is surfacing more of the package manifest structure. What modules are there that you can actually import?

Right, yeah.

Xcode has some affordances for this: when you type something that it knows lives in a certain module, it offers up importing the module, or even adding a package when it knows about the details. I would love it if we also had this on our "Use this Package" hover, where we could offer more than we do just now. Because all we do right now is offer up the clause that you use for the dependency, that little snippet with the URL and the branch or whatever it is you want. But that is missing the part where you add the product that you import, and obviously there might be more than one that you could choose from in the package that you're using.
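Concretely, what the hover offers today is the .package line; the missing pieces are the .product entry on your target and the import itself. A sketch using swift-argument-parser purely as an example:

```swift
// swift-tools-version: 5.7
import PackageDescription

// What "Use this Package" gives you today is just the .package line below.
// The parts it could additionally offer are the .product entry on your
// target, and the import for your source files; swift-argument-parser is
// used purely as an example here.
let package = Package(
    name: "MyTool",
    dependencies: [
        // Step 1: the dependency clause the hover already offers.
        .package(url: "https://github.com/apple/swift-argument-parser.git", from: "1.2.0"),
    ],
    targets: [
        .executableTarget(
            name: "MyTool",
            dependencies: [
                // Step 2: the product to depend on; a package might offer
                // several products to choose from.
                .product(name: "ArgumentParser", package: "swift-argument-parser"),
            ]
        ),
    ]
)

// Step 3, in a source file: import ArgumentParser
```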

- All right, so you're correct.

I did have that package in my list,

but I also have some other packages too.

So the first one that I wanna talk about today

is DSWaveformImage by Dennis Schmidt.

He's no relation, is he, Sven?

That particular Schmidt, no, no.

So DSWaveformImage is both an image generator and also a view, a SwiftUI control, that will take an audio waveform and represent it as an image.

So for example, if you were building some kind of audio recording functionality into

your app, then this would be a perfect example of the kind of thing that would be a lot of

effort to implement for yourself, but gives you that extra little bit of polish of not

just saying there is audio, there is not audio.

Actually, look, here's your audio.

If you look at the README file for this package, you can see a great example of what it looks like in an application.

Yeah, that looks really nice.

Yeah.
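From memory, the SwiftUI side of the package looks something like the sketch below; treat the module, type, and initializer names as assumptions and check the README for the current API:

```swift
import SwiftUI
import DSWaveformImageViews // SwiftUI views from the DSWaveformImage package

// A sketch from memory of rendering a waveform for a recorded audio file.
// The WaveformView type and its parameters are assumptions based on the
// package's README; verify against the current release before using.
struct RecordingRow: View {
    let audioFileURL: URL

    var body: some View {
        WaveformView(audioURL: audioFileURL)
            .frame(height: 44)
    }
}
```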

That said, I think there's an argument against using this control in certain instances. I'll give an example of where you might and where you might not want to use a control like this: Duolingo.

I've been using Duolingo for a couple of years now, and it has a couple of bits of UI where

it draws an audio waveform of the text you're about to hear.

And then you tap on the waveform and it plays the text back.

Now, if you're just using the application,

you will never notice this,

but the waveforms in Duolingo bear no resemblance

to the actual audio that it outputs.

Now, if they were just using the same waveform

over and over again, you'd notice that

within a couple of days, I would imagine,

you'd figure out that it's not a waveform view.

But the illusion of it being a waveform of the audio

is easy enough to convince someone of,

with just a repetition of three or four

predefined audio waveforms,

that they just cycle around basically.

And it doesn't, because it doesn't matter.

It doesn't matter, you're gonna tap it.

The only thing it needs to do is say,

if you tap this, you'll hear some audio.

And this is not a criticism of Duolingo.

I think their solution is actually perfect

for this use case.

The only thing I'm saying here is, this control is going to be great when you genuinely need a representation of an audio waveform. When you're just trying to indicate there is audio here, you might be better served with a simpler approach of just a few predefined images. And I think a good example of where you might want to use this is, let's say, if you were allowing people to record their voice. Because the feedback you'd be giving there is: we captured the audio, and you were not just silent. This is actually your audio, and it's feedback that your audio was successfully captured.

And so that would be a good use for this kind of control,

but it's always worth looking at the other side of it.

Presumably when you actually run it during capture, you'd see the levels as well, and that would then translate well into the actual waveform. So you have this connection between what you're doing on one side and the persisted representation on the other.

Yeah, but that's not to take anything away from the actual package itself. So that's DSWaveformImage by Dennis Schmidt.

All right, my next package is Swift AST Explorer by Kishikawa Katsumi.

And this is an interesting package. Well, it is a package, but it's not really a package that you would use in your own application. It's a web app, actually: a Vapor app, a web service that is deployed as a website you can visit, and we have the link in the show notes. What it does is you paste in Swift source code, and it shows you the AST, the abstract syntax tree, of the source code. And it's using SwiftSyntax by Apple under the hood to actually do the parsing of the Swift source code into that syntax tree.

AST, that's actually funny, because in German "Ast" means "branch", as in a tree branch, so there's a nice loop there. I didn't know that, yeah. I'm not using Duolingo to learn German, unfortunately.

Did I resolve the acronym? Abstract syntax tree, that's what AST stands for.

This is interesting in itself, but it's perhaps a bit in the weeds.

Why would you actually need that or find that interesting?

And there's an upcoming really good reason for that.

And that is the macro feature that is being discussed right now in Swift Evolution. I'm not sure if it's been approved yet, but I think it's going to be very soon if it isn't already. The feature, as it is proposed and implemented, will allow you to take syntax nodes and expand them by writing a little program that runs on that syntax node, and that is using SwiftSyntax under the hood. So you can transform a syntax node that you've been given and then do whatever macros do: expand it, transform it, all sorts of things.

And there are lots of examples out there already showing what you can actually do,

what you will be able to do with macros.

What this web app does is give you a view of the abstract syntax tree, the syntax nodes as SwiftSyntax sees them, very easily. Because that view can be quite overwhelming when you're trying to work with SwiftSyntax; the output is really verbose. Even a tiny snippet of Swift will generate a lot of bits and pieces, because there's just a lot to the syntax once it's parsed into all its constituent elements.
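You can see that verbosity for yourself in a few lines, assuming a recent swift-syntax release with the SwiftParser module:

```swift
import SwiftParser // from the swift-syntax package
import SwiftSyntax

// Even a one-line declaration expands into a surprisingly deep tree:
// a source file containing a code block item list, a variable declaration,
// binding patterns, a type annotation, plus every token and its trivia.
let tree = Parser.parse(source: "let greeting: String = \"Hello\"")
dump(tree) // prints the full syntax node hierarchy
```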

And what this will allow you to do is see the tree,

what it looks like, and then make it easier to transform it.

Think of it a bit like Safari DevTools

has that element selection where you get this cursor

to hover over the page and you can select,

you know, little elements on the webpage,

divs and all sorts of things,

and it highlights them and you can see them highlighted

in the source code.

That's exactly the same thing you have on that web app page, where on the right hand side you have the parse result, and you can hover over it and it'll highlight the corresponding Swift code on the left hand side.

So it's a really great tool to explore all of this.

And that's Swift AST Explorer by Kishikawa Katsumi.

- That's fantastic.

Talking of ASTs, I remember having a conversation with Graham Lee at an NSConference event, probably 12 or 13 years ago.

And we were chatting about the AST generated from Objective-C code, because of course, at that time there was no Swift. The conversation we had was around whether the AST should be the format that we actually commit to source control, with the editor providing a view on your source code, including all the things like code formatting, tabs or spaces, number and position of brackets, all that stuff that developers love to argue about. If you commit the AST instead of the text source code, everybody can view the code in different ways.

(laughs)

But I remember that conversation quite vividly.

Graham, if you're listening,

I wonder if you remember that conversation too.

Drop me a message.

- Imagine the conversations you could short circuit

by doing that, you know, all the--

- Exactly.

(laughs)

- But also imagine what merge diffs would look like.

So my last package is EmojiText by David Walter.

And the best way for me to describe this package is

it allows you to put in custom emoji

into a SwiftUI view.

So what I thought this was, first of all, was that a lot of tools will take ":clown:" and give you the clown emoji. It's not that.

What it is, is if you want to display text along with custom emoji that you have designed, you can do that with this package. What it does is allow you to provide an image that the renderer should use for your emoji if it detects your emoji in the text.

So you can use this colon colon syntax,

define some custom emojis in there,

have some images in your application,

and not only have the standard system emoji,

but also, like for example, tools like Discord

allow you to add your own emojis

that you can then use inside your application.

It would allow that kind of functionality inside your app.

And that's just something I've not seen

anybody tackle before, so I thought it was worth a mention.
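Usage looks to be along these lines; the initializer and emoji type names here are from memory, so treat them as assumptions and check the package's README:

```swift
import SwiftUI
import EmojiText

// A sketch from memory of EmojiText usage: custom emoji are declared with
// a shortcode plus an image source, and the view substitutes an image
// wherever :shortcode: appears in the text. Verify the exact type and
// initializer names against the package's README.
struct PostView: View {
    let emojis = [
        RemoteEmoji(shortcode: "spi",
                    url: URL(string: "https://example.com/spi-logo.png")!)
    ]

    var body: some View {
        EmojiText(verbatim: "Hello from the :spi: podcast!", emojis: emojis)
    }
}
```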

- That is interesting.

How does it deal with sizing and, you know,

like aspect ratios and that sort of stuff?

Do you know?

- That's a great question.

We'll tune in next week.

The README is pretty good, but it doesn't cover that.

Well, it does look really interesting nonetheless.

So here we are at the end of another episode.

Thank you all for listening.

And we will be back in a couple of weeks.

Yeah.

See you in two weeks.

Bye bye.

All right.

Bye bye.

Creators and Guests

Dave Verwer (Host)
Independent iOS developer, technical writer, author of @iOSDevWeekly, and creator of @SwiftPackages. He/him.

Sven A. Schmidt (Host)
Physicist & techie. CERN alumnus. Co-creator @SwiftPackages. Hummingbird app: https://t.co/2S9Y4ln53I