What's New in Rust 1.54 and 1.55
Jon Gjengset: Hello Ben, welcome back.
Ben Striegel: It’s good to be back, Jon. How are you doing?
Jon: I’m pretty excited, you know. We get to do two versions of Rust. It’s not every day.
Ben: Not every day, only once every three months, so four times a year, four days a year.
Jon: Definitely not a common occurrence. So I’m in a festive mood, I’m ready to celebrate 1.54 and 1.55.
Ben: Yeah. One of these releases is pretty short, so I think this first one here, 1.54, won’t take us long to get through. Do you wanna kick us off there, Jon?
Jon: I feel like we say this, and then we always end up spending, like, a ton of time on some really weird detail.
Ben: No, I promise. This time it is a pretty small release.
Jon: All right, let’s try. So, 1.54 starts out with “attributes can invoke function-like macros.” And this one is, on the surface, pretty simple. If you have an attribute macro, so this is something that starts with a square bracket (editor’s note: starts with a #) and then an optional exclamation mark and then square brackets. If you have one of those attributes, previously all you could do is sort of put fields in there. Like you could write stuff that the macro then gets to parse, but now you can put macro invocations inside of those attributes.
So the example that they give in the release notes is, you can use the include_str! macro, which lets you give a file name, and the contents of that file get inlined into that place in the file at compile time.
Ben: We could mention here too, that the attribute used in the example is the doc attribute. You might be surprised to learn that there is an attribute called the doc attribute. Any time you’ve ever written a doc-comment, that is actually desugaring to the doc attribute. So the example here is literally about letting you define doc-comments in a separate file, and then include them into the appropriate place for rustdoc to generate all your nice API docs.
Jon: Yeah. And I think this is worth digging into slightly more, which is, if you write ///, so a regular doc-comment, what you’re really writing is #[doc="..."] with the text that follows the three slashes. Similarly, if you write //! for sort of a top-level doc-comment, it gets turned into #![doc="..."] with that same text. And you can mix and match these, so you can have some lines that are /// and some that are #[doc]. And that’s where this feature gets really cool, where you could have something like— imagine you want to have an example file, that you also want to use as an example in documentation. Well, you could stick it in its own file and then you could also include_str! it directly into your documentation example in the right spot. And so it’s really nice that these two can now be used together, because it enables those kinds of neat uses.
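The desugaring Jon describes can be seen side by side; a minimal sketch (the include_str! path in the comment is hypothetical):

```rust
// Doc-comments desugar to the `doc` attribute; both functions below carry
// identical documentation. As of 1.54 the attribute value can itself be a
// macro invocation, e.g. #[doc = include_str!("../README.md")]
// (path hypothetical, assuming a README next to Cargo.toml).

/// Adds one to its argument.
pub fn with_comment(x: i32) -> i32 {
    x + 1
}

#[doc = "Adds one to its argument."]
pub fn with_attribute(x: i32) -> i32 {
    x + 1
}

fn main() {
    assert_eq!(with_comment(1), with_attribute(1));
}
```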
Ben: Yeah, it really helps enable the whole like, kind of mdBook pattern where you have all your markdown files in a nice directory with, kind of a top level table of contents, but also if you want to have those, you know, be included in all your API docs. Plus, it also helps if you’re reading code, having a lot of very long doc-comments can kind of get in the way sometimes of understanding what the code— like, just I feel like I know what the code does. I need to actually just read the code now. So, if your editor doesn’t have any like, nice collapsing mechanism for doc-comments, which they often don’t because they’re kind of not really, usually defined line by line. So you would need a pretty smart editor to actually make that happen. So it’s a nice little change.
Jon: Well I use ed, is that considered a smart editor?
Ben: That— well, it is the standard.
Jon: Yeah, exactly.
Ben: That’s all that matters.
Jon: It’s great.
So the next thing that’s stabilized is wasm32 intrinsics, but what are intrinsics, Ben?
Ben: Yeah. So when we say intrinsics, what we’re kind of— there might be a few different ways of interpreting it, but in this case an intrinsic is sort of a thing provided by the platform. In this case the platform is the CPU. So in particular the intrinsics that got stabilized were for SIMD support for wasm, and maybe you’re like, wow, wasm has SIMD support, whoa! And that— it does, and it’s pretty wild, and I would love to, at some point, get a rundown of, like, how much crazy stuff wasm has, that we are just now learning about. But in this case, this kind of gives you a nice little API for invoking the wasm SIMD intrinsics. And so they are stabilized. They’re here.
And a question that you might have is, kind of like, well, if it’s just running a thing on the CPU, kind of a shortcut to a direct CPU command, isn’t that just what assembly language is? And you’re right, it is kind of just a shortcut for a single line of assembly. Although in some cases intrinsics can be a little better in that, in this case, some of them are actually safe, whereas assembly is always going to be just unsafe— it’s a block of totally unknown, unverified assembly. In this case a single intrinsic can sometimes be safe. And actually, I’m not sure, honestly, if any other ones that are currently stable are safe, but for wasm32 many of these, actually, are safe to call. So that’s actually an improvement.
Jon: Yeah, and it’s also, with wasm, I don’t think you can write, like— I don’t think you have the ability to write literal wasm, like WebAssembly. And so there you can’t even fall back to using the assembly, so you do need these intrinsic functions to provide access to, like, the low-level primitives of the underlying platform.
Ben: It would be cool if you could write, like, wasm. I’m not sure like, if that’s like an inherent—
Jon: I feel like this is something we might get at some point, but I don’t know how, I wonder if there’s an effort— there’s probably an effort there.
Ben: This also brings up a slightly tangentially related point, which is, wasm32, as of 1.54, is also now a target family. So if you haven’t heard of target families before: if you’ve ever written something like a conditional compilation thing, like #[cfg(...)] and then like unix or windows, you’ve used target families. So target families are collections of target operating systems. So for example, the unix target family, which unix is a shorthand for, includes Linux, but it also includes Android, for example. Similarly, windows includes all the different versions of Windows. And now wasm includes all the different types of wasm that there are; they’re all grouped under the target family of wasm. It’s sort of a collector for targets.
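A sketch of what target families look like in practice; the family helper is just for illustration, and this only compiles on targets belonging to one of the three named families:

```rust
// Each cfg names a target *family*, not a single OS: `unix` matches Linux,
// Android, macOS, and friends; `wasm` (new in 1.54) matches every wasm target.
#[cfg(target_family = "unix")]
fn family() -> &'static str {
    "unix"
}

#[cfg(target_family = "windows")]
fn family() -> &'static str {
    "windows"
}

#[cfg(target_family = "wasm")]
fn family() -> &'static str {
    "wasm"
}

fn main() {
    println!("compiled for the {} family", family());
}
```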
Jon: We got another thing in 1.54, which is, we finally got incremental compilation back. So you’ll remember that in the previous episode, we talked about how incremental compilation was disabled, because there were some corner cases that weren’t handled correctly by the compiler. And those used to just be silently ignored. And when they enabled a way to actually detect them, they made them fatal errors, sort of by accident— and then they decided they actually should be. Now all of the relevant compiler bugs that have been detected by that incremental compilation sanity checking have been fixed, and so they’ve deemed that incremental compilation can now be re-enabled by default. There are still a couple of, I think, minor ICEs that are very rare that are still being tracked, but they sort of made the decision that at this point it’s correct to turn it back on, and basically no one will run into the corner cases. I’m happy to get this back.
Ben: Because this release came out about six or so weeks ago— longer, by the time you listen to this. We did go through the issue and— to see if anyone has registered any kind of new complaints, to see if maybe, like, you know, there was, it was premature, but it seems like there was pretty much nobody who has been noticing. At least not reporting any new problems with this, turning this back on. So hopefully, it all should be pretty good. And I think, I believe I saw a comment from one of the developers working on this, which is that even all the minor bugs should be resolved as of a recent nightly. So…
Jon: It’s exciting. It’s almost like things are getting better.
Ben: Well, a great way to make things better is to make them worse and then they get better.
Jon: We’re actually almost at the end of this release, so I guess you were right that it’s fairly short. Although we do have some stabilized APIs and some more detailed changelog items to go through. For stabilized APIs we already talked about the wasm32 intrinsics, which have gotten their own module under arch called wasm32. There’s some binary search things added to VecDeque— or vec-deque, I don’t know how it’s pronounced— that we’re not really going to talk about; they’re not super interesting.
But one thing that did catch my eye here is that BTreeMap and HashMap now have into_keys and into_values methods, which was interesting to me, because you could always do this with into_iter().map() to get out the keys and the values, but it is nice that now you can just get the keys and values directly. I think in practice, it’s really just sort of a terseness thing, right? Like you can use fewer methods to achieve the same thing. And this— when we talked about this before we started, you mentioned, Ben, that this sort of gets at the size of the Rust standard library. Do you want to talk a little bit about this?
Ben: Yeah. So, I mean in common parlance, I often see like, Rust’s standard library compared to, say, Python’s or Go’s, where people say, okay, it’s a small standard library. And I think that kind of misses one of the dimensions, which is that Rust, like— Jon and I both had this exact same conception, that Rust’s standard library is not extremely broad, but it is very deep, with the idea being that it does not have a lot of— extremely large amount of, like, different modules with different use cases for different kinds of libraries. But it does— what it does provide, if you do have a module there, it has tons of convenience functions. You can kind of just get lost, like strolling down you know, the iterator docs, of finding out, what’s all the weird kind of like, unknown methods, that I’m not actually using.
So if you ever wanted to increase your Rust intermediate knowledge, just go to the Rust docs and find a common API that you’ve used before, and look at all the fun little convenience methods on there.
Jon: Yeah, and all the trait implementations too.
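A quick sketch of the new methods next to the older spelling:

```rust
use std::collections::HashMap;

fn main() {
    let map = HashMap::from([("a", 1), ("b", 2)]);

    // Pre-1.54 spelling: consume the map and discard the values.
    let mut old_style: Vec<_> = map.clone().into_iter().map(|(k, _)| k).collect();
    old_style.sort();

    // 1.54: say it directly with into_keys.
    let mut keys: Vec<_> = map.clone().into_keys().collect();
    keys.sort();
    assert_eq!(keys, old_style);

    // And likewise for owned values.
    let mut values: Vec<_> = map.into_values().collect();
    values.sort();
    assert_eq!(values, [1, 2]);
}
```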
I think at this point we can start going through the actual changelog. One thing that jumped out at me was, cargo gained a new report sub-command, and it currently only supports one kind of report, which is future-incompatibilities. So this one is for— basically a type of lint that’s like, you may want to fix your code, because in the future this code may be an annoyance to you. Either because it might turn into a hard error, because it should have been a hard error all along— because it was really a bug, but we just haven’t made it one yet. Or at a future edition boundary, this behavior will change. So we’re giving you a heads-up now, in case you’re thinking of adopting the future edition.

And there are a lot of different things that are matched under this. So as an example, the array into_iter that we talked about, I think last episode, is a future-incompatibility lint: you’re relying here on this auto-ref behavior, and in the next edition that won’t be there anymore. Or there’s a lint for constants whose values are impossible. So for example, if you declare a constant whose value is, like, one divided by zero, then it used to be that you could write Rust code that declared such a constant, and it would only be an error when you use the constant, and not when you declare it. But in the future, it’s probably going to become a hard error to even declare one. And so this is an example of a future-incompatibility lint, where you can do this today, but it’s really a bug, and in the future, it’s going to become a hard error. And so this is, like, a way for you as a user to be like, I want to be aware of things that are coming down the pike. Please, Cargo, tell me about them.
Ben: Yeah, we should clarify too, that normally all of these incompatibilities are just warnings, and you should be seeing them by default in your own code. But what cargo report, I believe, is doing, is that it is showing warnings for your dependencies. Normally, Cargo does not display warnings that are in your dependencies. So for example, this could be a problem if there’s some kind of future incompatibility in code that you— a crate that you’re using, but it’s not printing the warning. So you have no idea if you’re supposed to be upgrading, or if it will break under you. Which actually— the fact that Cargo hasn’t been doing this is one of the reasons why a lot of these incompatibility lints haven’t actually progressed very far, and some of them have been open for a few years now, just because Cargo hasn’t really been up to the task of warning users that their code might break. And so hopefully, this represents a step towards making it so that we can actually move forward on these incompatibility lints that have been kind of languishing for years and years now.
Jon: Yeah, and part of the hope too, I assume, is that because you get to see it in your dependencies, you can then go and try to fix them. Like you can submit either an issue or a PR to your sort of upstream dependencies and go, hey, I was told about this lint from Cargo, here’s a thing that just fixes it so that your code won’t break in the future.
Ben: One more little thing that I noticed— not in the release notes, but secretly— is that 1.54 has turned on the mutable-noalias flag once again, by default, on certain LLVM versions. And so we’ve maybe spoken about this before in the past. Normally this would not really be big news; it’s kind of just one optimization flag among, like, you know, thousands that LLVM applies to various bits of Rust code. In this case, kind of famously, it’s kind of a comedy of errors, where LLVM hasn’t really exercised these code paths to the extent that Rust uses them. And so every time Rust turns this flag on, some more miscompilation or error arises, forcing Rust to turn it back off again. And in fact, someone tried to turn it on for 1.53, I believe it was, and then a new error arose, and actually, faster than ever, the problem was solved upstream, and incorporated back and tested, and it has been turned back on for 1.54. As usual— historically, we shouldn’t expect this to remain on for long.
Jon: And it’ll never be turned off again, now.
Ben: Historically that is not a good bet. At some point, you know— normally, it might take a few releases but, I mean, it’s— at some point you would hope that it’s— that there won’t be any more miscompilations detected. I mean, obviously if there were any known, it wouldn’t be turned on, right? And so it’s— we can be cautiously optimistic at this point that maybe they’ve all been found.
Jon: I think it’s a slam dunk now. Because it’s been turned on and off before, and once you turn something off and then on again, then you know that it works. That’s what I’ve been taught.
Ben: It seems like they’re getting easier and easier to fix each time. Like, turn-around time for fixing these bugs seems to be getting smaller and smaller, which seems to indicate that the fundamental problems that LLVM has, and like, you know, tracking this data have been getting better. You know, they’ve been getting better and better at making it— better tests, that kind of stuff. And it just— so I am optimistic at least that maybe this is the time. Maybe we’re finally done on this merry-go-round, and we can finally get off.
Jon: I think I have two small things at the tail end here. One is that there’s a new environment variable that’s available for integration tests and benchmarks called CARGO_TARGET_TMPDIR. The idea here is that if you have tests or benchmarks that need to generate temporary files— either for output, or generating some input and then running over it— rather than write them to, like, /tmp, you can write them into CARGO_TARGET_TMPDIR. And this is nice for a couple of reasons. One is that you respect the sort of user’s choice of where compiled artifacts should go; like if they’ve set CARGO_TARGET_DIR, for example, this will be under there. It also means you don’t have to hardcode anything like /tmp, or maybe use separate crates in order to manage this; you can just sort of dump the files in there and know that they’ll be private to your particular compilation, and they get cleaned up by things like cargo clean. So it’s a nice thing to be able to use.
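A sketch of what that looks like in practice; option_env! is used here only so the snippet also builds outside an integration-test context, since Cargo sets the variable only for integration tests and benches:

```rust
use std::fs;
use std::path::PathBuf;

// Prefer Cargo's per-target scratch directory when it's available,
// falling back to the system temp dir otherwise.
fn scratch_dir() -> PathBuf {
    option_env!("CARGO_TARGET_TMPDIR")
        .map(PathBuf::from)
        .unwrap_or_else(std::env::temp_dir)
}

fn main() {
    let path = scratch_dir().join("scratch.txt");
    fs::write(&path, b"hello").unwrap();
    assert_eq!(fs::read(&path).unwrap(), b"hello");
    fs::remove_file(&path).unwrap();
}
```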
The other is that Cargo, unbelievably, has now switched to version 1.0 of the semver crate.
Ben: The sheer irony.
Jon: Yeah, it’s great. So the semver crate is basically the crate that parses version specifiers, whether that is a crate version, or a dependency version specifier, and then matches the two up, right? So if you say, this is serde version 1.0.2, or if it’s saying, I depend on serde > 1.1.3— semver is the crate that parses both of those, and checks whether a given version of the dependency matches the dependency specifier in the consumer. And there’s a surprising number of weird corner cases, because there always is when you’re doing parsing and matching. And David Tolnay has taken on this, like, herculean effort of rewriting the semver crate, which was at, like, 0.10 for the longest time— basically rewriting it from scratch, and then walking through all of crates.io, like all versions of all crates, and seeing that the new version of semver matches the behavior of the old version of semver for everything that Cargo cares about.
And this is important, right? Because if Cargo upgraded to the new semver and suddenly the semantics of a bunch of version dependency specifiers changed, lots of things could just break in unintuitive ways. So it was a lot of effort to get to this seemingly small version change. But it makes me really happy that we got there. semver is such a fundamental crate that it’s good that we’ve gotten it to 1.0 now.
Ben: And is that all we have to talk about for 1.54?
Jon: I think so. I think we’re onto 1.55.
Ben: All right. First up in 1.55—
Jon: Greatest release so far.
Ben: Yeah, it’s certainly the biggest number. Cargo now de-duplicates compiler errors. So I think this is just a nice quality-of-life thing. So Rust is, you know, a multithreaded application, and so in certain contexts you might get duplicate errors, or even duplicate warnings, if you’re doing cargo check or cargo test, that sort of thing. And so now those should be gone. And I think this is a great change, because sometimes it’s kind of weird to see, you know— it printed one error at the end of your compilation, and you see the same error printed two or three times above it, and it’s kind of a little bit off-putting. So that’s a great little quality-of-life thing.
Jon: I don’t think this has ever affected me, actually, because my code never has warnings or errors. But it seems good for people who make mistakes.
Ben: Yeah, that’s definitely— I’m humble enough to admit that I do. It’s once in a while. I did make a mistake once, at least a month or two ago.
Jon: I see. You made one mistake, I see.
The next one is cool, and I’m glad it made it into the release notes. So this is, Rust now has faster, more correct float parsing. And I really recommend reading through the PR for this, or at least the summary of it. And there’s a good Reddit post as well, that we’ll link to in the show notes, of just what this change was. But essentially, it’s a move from a somewhat old and slow algorithm for parsing floating point numbers— like, from their textual representation into their encoded f64/f32 form— to a new, sort of state-of-the-art parsing mechanism, that’s like 10 times faster for certain data sets. And you just wouldn’t expect parsing floats to be as hard as it is, or for the performance improvements to be the kind that they are. But it’s just so intricate and so cool that it can be sped up this much.
Ben: Some of us do expect that floats are always harder than they seem.
Jon: That’s fair. That’s fair. But it— but I feel like I just never really thought much about this problem of turning a floating point string into a number, being like, an involved process. But of course it is. Like, once I think about it, of course it is. But it’s just not a thing that I thought about.
Ben: And these are not just performance improvements. Although those are nice. It also is a correctness improvement, because the old float parser could not parse various strangely formed strings. It would just give you an error, a parsing error, if you even tried. So it is both faster and more correct, and it’s great.
Jon: Yeah, I think I saw the PR closes like, was it like, eight different known issues with the old parser. It’s great.
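A small illustration of what correct rounding means here; the second input is exactly halfway between 1.0 and the next representable f64, which round-to-even resolves to 1.0:

```rust
fn main() {
    // 0.1 has no exact binary representation; parsing finds the nearest f64.
    let x: f64 = "0.1".parse().unwrap();
    assert_eq!(x, 0.1);

    // 1 + 2^-53: an exact halfway case, which rounds to even (i.e. to 1.0).
    let y: f64 = "1.00000000000000011102230246251565404236316680908203125"
        .parse()
        .unwrap();
    assert_eq!(y, 1.0);
}
```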
The next one is a somewhat different kind of change, which is that the ErrorKind type, or enum, under std::io has its variants updated. So ErrorKind is an enum that’s marked as non_exhaustive, which lets you classify an enum type as not being possible to exhaustively match on or construct. The idea here is that the standard library authors want the ability to add new variants over time. Right? So for example, ErrorKind used to not have an OutOfMemory variant, but there are, sort of, operating system errors that indicate “out of memory”, and you want the ability to propagate that accurately, as the right ErrorKind. But because it didn’t exist in the past and they want to add it, the only way to do that in a way where it’s not a backwards-incompatible change is to mark the enum as being non_exhaustive, so that no one can have, for example, a match on ErrorKind that tries to enumerate every single case and not have an underscore clause. Because if they could have such a match, then adding a variant would be a breaking change, because that match would no longer compile, complaining about the missing new variant. But in practice they haven’t really been adding new variants to ErrorKind. And I don’t really know why— maybe they just didn’t have a cause to— but that’s something that’s changing now, and it’s changing partially because there’s this one variant called Other, which is used in the standard library to express errors that couldn’t be classified yet. So this is stuff like “out of memory”, for example, in the past, where it didn’t really have an ErrorKind variant, and they didn’t add one; it just got grouped into this Other category.
The problem was that the Other variant is also accessible to users, and so a lot of, sort of, crates in the crates.io ecosystem and in the broader ecosystem were creating errors using this Other variant. And then consumers of those errors were, like, trying to match on the Other ErrorKind, and if they saw that, then they assumed that it was a particular kind of error coming from below. And this just started blurring the lines between what is really an error from the operating system, and what is, like, a user-generated error. So what they did in 1.55 was, they added a new variant called Uncategorized. And then they changed everywhere in the standard library that generated an Other ErrorKind previously to generate an Uncategorized ErrorKind instead.

And the crucial difference between Uncategorized and Other is that Uncategorized is a variant that cannot be named by user code. So you cannot, in user code, generate an error that has the Uncategorized ErrorKind. So now going forward, Other— the Other variant— will always be known to be user-generated errors, and Uncategorized will always be known to be sort of system-level errors. And I think part of the reason they did this was so that it’s not a breaking change— or it’s not as disruptive of a change, I should say— for them to change the ErrorKind of an existing error from Uncategorized to, for example, OutOfMemory. So in the past, right, someone might have relied on “out of memory” errors being indicated as Other. But if that’s now going to be under this non-nameable Uncategorized variant, no one can rely on OutOfMemory being categorized that way, because they can’t name it that way in the first place. They can only find it using, like, the underscore match pattern. So this opens the door for them re-categorizing many of these errors without being disruptive, or without having to rely on the fact that someone might expect it to have its current ErrorKind classification.
Ben: We should emphasize too, that this is only for the io::ErrorKind. This is not, like, a broader error type or trait; this is only for the io module.
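A sketch of what that split looks like to calling code: Other stays nameable for user errors, while Uncategorized can only ever land in a wildcard arm:

```rust
use std::io::{Error, ErrorKind};

fn classify(e: &Error) -> &'static str {
    // ErrorKind is non_exhaustive, so a wildcard arm is required anyway;
    // Uncategorized has no stable name and can only be caught there.
    match e.kind() {
        ErrorKind::NotFound => "not found",
        ErrorKind::OutOfMemory => "out of memory",
        ErrorKind::Other => "user-generated",
        _ => "something else, possibly an uncategorized OS error",
    }
}

fn main() {
    let user_err = Error::new(ErrorKind::Other, "my custom failure");
    assert_eq!(classify(&user_err), "user-generated");
}
```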
Jon: Yeah, and io::Error is a little bit special here, in that it’s used a lot in user code, where you just propagate I/O errors up through an entire stack. And so you sort of need this Other variant in order for code to be able to inject errors into that stack— errors that are io::Errors but just don’t directly map to any particular existing operating system ErrorKind. And so I do think it makes sense to have the Other variant here, but it’s also the case that you need this other variant that can’t be named, so that it’s not as disruptive to change the ErrorKind of an existing error.
And in fact this is documented on the Other ErrorKind. Like, if you look at it from, like, 1.52 for example, it says, for the Other ErrorKind variant: “errors that are Other may move to a different or new ErrorKind variant in the future. It is not recommended to match an error against Other and to expect any additional characteristics.” So this was already the case for Other; it’s just that now they’re formalizing it a bit more and sort of enforcing it a bit more.
Ben: And in fact several APIs have already moved from Other to Uncategorized, so, like, you know, they are no longer producing Other; they’re now producing Uncategorized.
Jon: Yeah. And in fact, I think they went through the standard library and changed everywhere that currently produced an Other to now produce Uncategorized instead.
This next one is sort of a small change, seemingly, right? So this is “open range patterns” in match statements. The idea is that if you match on, say, an integer, you can say, I want to match on 0 to 4, and then I want to match on 5 and above. Previously you could always match on ranges, but only if they were closed. But now you can say, like, 5.. or 1.., to say “this value and everything above,” which really is a thing that I’m guessing people just expected to work in the past, and it just wasn’t implemented. And now it is, and it’s just, like, a nice change to make things less unexpected, maybe?
Ben: Yeah, I don’t think it’s technically anything that you couldn’t have achieved previously with an underscore pattern there, but it is nice to have code be more self-documenting. Because it is kind of like, you know, if you’re ever writing a match statement, you do notice from time to time that there’s kind of an implicit semantics to the order in which you make your match statements— the arms of your match statement— where, like, different conditions might overlap. In this case, the example given is kind of like: if you have a 0 in this number, you print zero; if you have 1.., you print positive number. If you had instead written _ instead of that 1.., and then if you ordered that arm above, then it would have different semantics. But if you have 1.. there, it’s both self-documenting and it’s resistant to any kind of, like, ordering that you might impose. So it’s just a nice little quality-of-life thing.
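The example Ben describes, as code; with 1.. the arms stay exhaustive without a wildcard, so their meaning doesn’t depend on ordering:

```rust
fn describe(n: i32) -> &'static str {
    match n {
        i32::MIN..=-1 => "negative number",
        0 => "zero",
        1.. => "positive number", // half-open range pattern, new in 1.55
    }
}

fn main() {
    assert_eq!(describe(-3), "negative number");
    assert_eq!(describe(0), "zero");
    assert_eq!(describe(7), "positive number");
}
```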
Jon: Yeah. There were a bunch of new stabilized APIs in 1.55 too. I think the one I want to talk about first is, MaybeUninit gained a couple of new methods. So MaybeUninit is a— I think we’ve talked about it briefly in the past. It’s a type— so MaybeUninit is generic over a T. And the idea is that it holds a T that may not be valid yet. So for example, it might hold a pointer or a reference that’s currently all zeros, which is not a valid reference. Or it might hold a Box that doesn’t actually point to a heap allocation yet. And so the idea with MaybeUninit is that you create one, and then you sort of write into it— you write the appropriate bits into it to make it valid— and then you call assume_init, which is a method that already existed, in order to get the T now that it is valid. So it sort of lets you keep a value in this sort of undetermined or not-yet-valid state, which otherwise isn’t legal in Rust. You’re not allowed to have, say, a Box<T> that doesn’t actually point to a T.
And what was added in 1.55 were assume_init_mut and assume_init_ref, and these let you take a MaybeUninit<T> and give you a mutable reference or a shared reference to that T, assuming that it is now initialized. And this might seem a little odd— like, if it’s initialized, why shouldn’t I just take ownership of it, and then I can borrow it afterwards? And the idea here is that there are some types where you might want to construct a valid version of one, but it wouldn’t be valid for you to take ownership of it. An example of this might be an aliased Box. So one rule of Box is that you’re not allowed to have two owned Boxes that point to the same heap allocation. It’s assumed that for every Box that you own, you own the underlying heap allocation as well. But if you create a MaybeUninit<Box<T>>, then you can alias that, because it’s behind MaybeUninit, so you don’t need to follow the validity rules. But you couldn’t call assume_init on it, because if you called assume_init on both of them, you would now end up with aliased Boxes, which is not legal. But with assume_init_ref, you can call that and get a reference to the Box<T>, and that’s okay— you can call that on both of them, because the borrowed Box is not claiming that it has ownership, and therefore it’s valid to have multiple of them. That’s a long-winded way to say that this enables more use of MaybeUninit for types that are valid to have references to, but not valid to take ownership of.
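A minimal sketch of the lifecycle, including the new borrowing methods:

```rust
use std::mem::MaybeUninit;

fn main() {
    let mut slot = MaybeUninit::<[u8; 4]>::uninit();
    slot.write([1, 2, 3, 4]); // fully initializes the value

    // Safety: the write above initialized every byte.
    let r: &[u8; 4] = unsafe { slot.assume_init_ref() };
    assert_eq!(r, &[1, 2, 3, 4]);

    // Borrow mutably without giving up the MaybeUninit wrapper.
    let m: &mut [u8; 4] = unsafe { slot.assume_init_mut() };
    m[0] = 9;

    // Only at the end do we take ownership.
    let owned = unsafe { slot.assume_init() };
    assert_eq!(owned, [9, 2, 3, 4]);
}
```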
Ben: This next stabilized API is actually born out of a broader initiative. So you’ve probably heard of the question mark operator in Rust. And in Rust various operators can be overloaded, but the question mark operator is not currently available to be overloaded. That’s been something that’s been under debate and worked on for a long time now. And earlier this year, I believe it was scottmcm— I hope I have that right— went through and wrote a brand new RFC, a redesign for what it would look like to overload the question mark operator. And this is known as the Try trait. And it just so happens that one of the aspects of the Try trait is this new type— it’s also in the ops module, like Try will be— called ControlFlow. And Jon, do you want to talk about what’s cool about ControlFlow?
Jon: Yeah, so ControlFlow tries to sort of embody in the type system one particular aspect of Try, which is: do you want to break, or do you want to continue? So the idea here is that if you look at something like— well, if you look at the question mark operator, what the question mark operator is really saying is: if this is an Err, then break; otherwise, continue the control flow below the question mark operator with the sort of unwrapped value. And the same goes if you have a question mark on Option, right? If you have a None, then break, as in return with None; otherwise, continue with the T that was inside of the Some.

And ControlFlow is essentially a type for encoding that decision, in a way that’s not tied to the return keyword, right? You can imagine that if you use a question mark inside of a— well, if you use it inside of a function or an async block, then it does mean return, right? If you get the break case, you return. But you could imagine that there are other cases where that’s not really what you mean; like, you might want it to break a loop, for example, in certain contexts. And ControlFlow tries to sort of abstract away just that concept of a decision to continue or break.
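(editor’s note: a minimal sketch of that decision as a value. ControlFlow is a two-variant enum, Break(B) carrying the “stop here” value and Continue(C) carrying the “keep going” value; find_first_negative is a made-up helper for illustration, not anything from std.)

```rust
use std::ops::ControlFlow;

// Hypothetical helper: scan a slice, breaking out with the first negative number.
fn find_first_negative(xs: &[i32]) -> ControlFlow<i32> {
    for &x in xs {
        if x < 0 {
            return ControlFlow::Break(x);
        }
    }
    ControlFlow::Continue(())
}

fn main() {
    assert_eq!(find_first_negative(&[3, 1, -2, 7]), ControlFlow::Break(-2));
    assert_eq!(find_first_negative(&[3, 1, 7]), ControlFlow::Continue(()));
}
```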
Now, there’s a larger discussion in the RFC of how this ties into the Try trait, and we won’t go into that too much yet, because it’s not stabilized. I’m guessing it will be stabilized, probably fairly soon; the design is pretty neat. But it basically relies on having a type that you can turn into, and convert from, these control flow decisions. And you can imagine that it’s useful in other “try”-like contexts too. So for example, imagine that on Iterator you have a try_for_each method. Well, try_for_each could take a closure that returns a ControlFlow to dictate whether the for_each has completed, and if so, completed with what value; or whether it should continue iterating, and if so, what is the sort of continuation parameter— like, what should be passed into the next iteration of the closure. This is just a nice type for concisely expressing that notion of continue and break. And it’s a stepping stone to getting to the actual Try trait.
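(editor’s note: this pattern does compile on current stable Rust, since ControlFlow satisfies try_for_each’s return-type bound; the particular ranges and the multiple-of-7 condition here are just illustrative.)

```rust
use std::ops::ControlFlow;

fn main() {
    // Stop iterating as soon as we hit a multiple of 7,
    // carrying that value out in Break.
    let result = (1..=20).try_for_each(|n| {
        if n % 7 == 0 {
            ControlFlow::Break(n)
        } else {
            ControlFlow::Continue(())
        }
    });
    assert_eq!(result, ControlFlow::Break(7));

    // If nothing breaks, we get Continue(()) back.
    let done = (1..=5).try_for_each(|_| ControlFlow::<i32>::Continue(()));
    assert_eq!(done, ControlFlow::Continue(()));
}
```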
There was one other stabilized API that is not super interesting, but I found it a little curious, which is Drain::as_str. So you may realize that if you have a vector, you can call drain on it in order to remove all the elements indicated by the range that you passed to drain, but keep the vector otherwise intact. So for example, if you have a vector of length five, you could say, I want to drain elements two through four, and it gives you an iterator of the owned elements from two to four. And what it leaves behind in the vector is elements one and five; those are both left in the vector, and they’re sort of shifted around so that they remain together. Well, strings, as in String, also have a drain method that does sort of the same thing. You give it a range of characters inside the String, and you say, drain those characters out of the String, and leave the remaining characters intact. And when you call drain, you get back a new type called Drain, and that’s the thing that implements Iterator. And that Drain type has now gained a method called as_str. The idea here is that if you drain characters out of a string, the Drain can give you a str reference to the characters you have yet to drain. And that’s what was added.

And it sort of makes sense, right? Like, you’re basically removing a str from a String. And so if you’ve removed some of the characters, this lets you get at what is the remainder of things that I haven’t removed yet, as a str. It gives you an interesting insight into what Drain does, and sort of, the uses that you might have for it.
So I think that’s all we had for the stabilized APIs, so all that’s left is my usual deep dive into the changelog. And there’s not too much of interest there, I think, for 1.55. There is one change, which is that build scripts are now told about RUSTFLAGS and the rustc wrapper and so on. So this is: if your Cargo configuration includes sending additional flags to rustc, or has a wrapper around rustc, previously build scripts weren’t informed of that. And there are some crates that use build scripts to do things like determine whether they can use a nightly feature or not, and so they would break if you had Rust flags that they didn’t take into account. Now that’s passed in, which is kind of nice.
Another one is that cargo clippy --fix is now stabilized. So we’ve had cargo fix for a while, for things like edition changes— or even just, if the compiler can automatically fix a given warning or error, you can run cargo fix. And now we have the same for Clippy: if Clippy detects a lint that it thinks it has an automated fix for, you can just call cargo clippy --fix and it will fix those for you.

There’s also, speaking of Clippy— David Tolnay has made this huge pass over all of crates.io to find Clippy lints that people were ignoring; like, on purpose marking a given Clippy lint with #[allow()]. With the idea that maybe some of these lints just shouldn’t be on by default. Like, maybe the users are telling us something here. And I think this is a really cool effort, because it means that the Clippy lints are getting better over time at caring about the same things that Rust developers care about more generally. And really, all this is saying is: maybe consider removing some of your allow attributes for Clippy lints, because the lints are improving, and also the defaults for what is allowed are improving. And if you override them with allow, you may actually be missing out on improved lints that no longer have the false positives that were hitting you in the past.
I think the last one I had was that rustdoc has gained this neat new feature where, if you set #[doc(hidden)] on a trait implementation, that trait implementation will not be shown in the list of trait implementations for that type. It doesn’t mean that the implementation doesn’t exist; it just means that it doesn’t show up in the documentation and doesn’t clutter things there. I think the use for this is, like, if you have a type that implements lots of traits, and the fact that it implements those traits isn’t important— maybe because it’s an internal trait or something, or because you don’t really want users to rely too much on it implementing this trait. Now, in practice, just because you mark it as #[doc(hidden)] doesn’t really make it okay to remove that implementation and not have it be a breaking change. But it does mean that at least you can make your generated documentation match your documentation elsewhere, about what users can and cannot rely on, a little bit better.
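(editor’s note: a sketch of what that looks like; Widget is a made-up type. The impl still exists and is still callable from downstream code, rustdoc just omits it from the rendered list of trait implementations.)

```rust
pub struct Widget {
    pub id: u32,
}

// Hidden from the rendered docs, but still a real, usable impl.
#[doc(hidden)]
impl Clone for Widget {
    fn clone(&self) -> Self {
        Widget { id: self.id }
    }
}

fn main() {
    let w = Widget { id: 7 };
    let copy = w.clone(); // callable even though the impl is doc(hidden)
    assert_eq!(copy.id, 7);
}
```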
I think that’s all I had for 1.55. Did you have anything else, Ben?
Ben: No, that’s good for me too.
Jon: That’s amazing. We’ve made it to such high numbers, now. I think we have the high score.
Ben: Well, I mean, think about it. Like, at what point— Java, for example, I think it got to maybe like 10 or 11 or 12 before it just said, screw it, we’re no longer doing one-point-anything. We’re just Java 12 now. So should we just start calling it, like, Rust 55, Rust 56?
Jon: It is kind of tempting.
Ben: I mean, the idea that we don’t is kind of like, to emphasize: hey, we’re still committed to, you know, the no-breaking-changes promise from the original 1.0 release. So it could be a problem if in the future there ever was a Rust 2.0, because then you’d go from Rust 1.92 to Rust 2.
Jon: You heard it here first, folks, Ben is announcing Rust 2.0.
Ben: My prediction. I mean, take the over/under.
Jon: As we all know, Ben is the dictator for life of Rust.
Ben: The secret illuminati has appointed me, yes.
Jon: That’s right. And he has now declared that Rust 2.0 will happen. So be careful, everyone.
Ben: You’ve got a while, though. Rust 1.92 is a while away.
Jon: Is it? I mean, we’re like, over halfway. It’s scary stuff. Rust is getting old, man.
Ben: Rust is so old now. It’s been, like, six years since we released.
Jon: Yeah, that is crazy.
Ben: I say we, like I always do. I was there, though. I was there.
Jon: See I was expecting you to say “I,” but I’m glad that you’ve leaned into your shadow puppet master—
Ben: Being part of the Rust illuminati, it’s a group effort.
Jon: That’s right.
Ben: I couldn’t shadow-control the language all by myself. I gotta give props to all the other folks here, sitting in their chairs smoking their cigars, with their faces in shadow.
Jon: See this is you just throwing out the smokescreen. We all know it’s just you, Ben.
Ben: Mm hm. I am many. I am legion.
And with that note, let us end this podcast.
Jon: Sounds good. All right, I’ll see you for 1.56, Ben.
Ben: See you then. Oh, should we foreshadow what’s going to happen? Next time—
Jon: Oh yeah, let’s do it. Let’s do it.
So in 1.56, and this is super secret. You didn’t hear it from us. In 1.56 we’re going to have a new Rust edition. But don’t tell anyone.
Ben: Mm hm.
Jon: But it’s very exciting. I’m excited. Are you excited?
Ben: I’m pretty excited. I mean, we did tell you about this last time, a little bit. So I think we’re going to not do it three times in a row. We’re going to only do it, like you know, give you a brief reprieve. But next time expect plenty of edition-related goodies and tangents.
Jon: Are we going to try to do a variant of Nico’s edition song?
Ben: Oh, we could. Maybe we should— we should definitely rehearse. Not right now, but next time the outro can just be us singing the edition song.
Jon: That is pretty tempting.
All right. See you for the 2021 edition in 1.56, in— 12 weeks from now.
Ben: Farewell, folks. Stay safe out there.
Jon: Bye!
Ben: Bye.