I’ve been watching a few videos recently about Apple’s move to its own ARM-based silicon and the impact that change will have on the rest of the industry. Gary Explains goes into great detail regarding the transition from a ‘top level’ perspective:
Just as Apple set trends in the past, such as removing the floppy disk and turning USB into ‘the one connector to rule them all’ with the launch of the first iMacs, the move to ARM will force others within the industry to treat ARM as a serious contender to x86. The one thing to keep in mind is that the various ARM vendors don’t necessarily have to meet or beat Apple’s own silicon – as long as they can produce an SoC that can scale up and beat Intel when it comes to power/performance (achieve the same level of performance as Intel but consume less power), that will be enough. Fujitsu, for example, has its A64FX as a replacement for its SPARC64 line, and it wouldn’t surprise me if, in the long run, we start seeing Oracle reconsider whether it makes sense to stay with SPARC or whether it is better off moving to ARM and working with Fujitsu.
The one thing I am reminded of is what the former CEO of Sun Microsystems said when Solaris came back to the x86 platform: the world runs on GET (Good Enough Technology) – “it ain’t pretty but it gets the job done”. This is the reason why, as long as the ARM alternatives are good enough performance-wise and beat Intel when it comes to power consumption, they’ll be good enough for the vast majority of end users.
Regarding software, if the rumours about Windows 10X turn out to be correct then there is a good chance that we’ll see ARM-based ‘ChromeOS’-like devices being made available – win32 support appears to have been added (link), although I could imagine the various frameworks that make up ‘Project Reunion’ (link) forming the modern foundation for Windows going forward. Microsoft (in the linked article) talks about Windows 10X sitting side by side with Windows 10 ‘Classic’, but I think what we’re going to see long term is win32 delivered either in the form of a cloud-based virtual desktop or possibly in the form of a virtual machine. Whatever the case may be, they appear to want to unburden themselves of having to support legacy code, given that long-term revenue and profits will be lower and thus the cost of maintaining the code base will need to be reduced. As for whether ARM-based Windows Server will be something available to the market or whether it’ll be a cloud-only facility – I guess only time will tell, but given the direction of cloud computing, and changes in laws in many countries which now allow cloud to be used by industries that once considered it ‘off limits’, it’ll be interesting to see whether ‘on premises’ servers have a future.
As for what might be in the future:
ARM-based games consoles, desktops, workstations etc. might be on the cards – maybe even Intel re-entering the ARM market, because it is clear they have the engineering talent, but the ISA itself is increasingly becoming an impediment to improved efficiency (variable instruction length being one factor that makes optimisation a lot more difficult). This video goes into far more detail than I could and does the topic justice:
The other part of the equation is Clang/LLVM (or LLVM combined with the various other frontends, depending on the language one wishes to use), which appears to be gaining greater traction when compared to the GNU toolchain, which was pretty much the industry-standard open source toolchain up until this point. What I think will be interesting is the work that Google is doing to create a ground-up replacement for libc, where the goal is to write it in pure C rather than falling back to assembly for the sake of performance. That will hopefully mean a greater focus on improving code optimisation in the compiler, so that not only will this new libc perform better but all C-based code will perform better thanks to the investment made. With that being said, I think the interesting part will be what Apple does with the kernel given its removal of third-party KEXTs – I wouldn’t be surprised if we ended up seeing Apple move as much as it can out of kernel space, given that the cost of context switching (in terms of performance hit) is a lot lower on ARM than on x86-64.
Christmas appears to have come early with Ubiquiti releasing the 1.8.4 firmware update for the UniFi Dream Machine (both the base and Pro models), which bumps up the version numbers of the various components and includes bug fixes. I am pretty happy with how my UDM is behaving now that they’ve stabilised the ‘base’ build which the UDM standard uses – in the new year I’ll look at moving from Skinny to Spark, given that there is next to no difference in price and it enables me to have everything on the same bill. The other thing I am looking forward to is Ubiquiti releasing a Wi-Fi 6 access point (the UniFi 6 Long-Range access point looks like a good replacement) to replace my existing one, as I look long term to replace my MacBook Pro and iMac along with upgrading my Apple TV to the much-rumoured replacement, which will also have Wi-Fi 6 support.
Rumours are circulating about Apple working on its own modem, but at this point I think that is more or less pointing out the blatantly obvious. It wouldn’t be surprising if, when Apple bought the Intel modem division, Intel was already working on 5G support and later flagged it because it didn’t believe the amount required to develop it and the subsequent sales would marry up. With all that being said, the one thing I do wonder is whether there is long-term viability in supporting 5G on frequencies above 6GHz, given that all the international deployments of 5G have been on the sub-6GHz bands, with the instances of mmWave (higher than 6GHz) being used for point-to-point fixed wireless internet, which Qualcomm recently tested in Australia (link). Don’t get me wrong, I think that mmWave for point-to-point fixed wireless internet could be a great alternative for areas that want ‘fibre-like speeds’ where it is not economically viable to run fibre to the premises, but I simply don’t see mmWave being a useful technology on mobile phones, other than as a tick box and something for the non-technologically inclined to boast about when comparing their phone to their workmates’ phones around the water cooler.
Over the last few days I’ve been going for rides around where I live and I am amazed at the number of new houses that are being built. For example, heading towards Taita, on the land where an old church used to be, there is a new housing development, and chatting with my sister’s partner, he was talking about how they’ve got more jobs than people to finish them. Then there is the recent announcement of the government looking at reorganising local government responsibility for water (link), along with the fact that decades of underinvestment have left an infrastructure deficit that is only going to get worse as urbanisation increases and existing sections are subdivided – infrastructure designed for a set number of houses now has to support more houses on the same plot of land than it was originally designed for. It’ll be interesting to see whether that consolidation will translate into greater investment now that the government can leverage the balance sheet of the central bank to upgrade the infrastructure. Another thing I hope they do is buy the council housing off councils and put it under the stewardship of ‘Kāinga Ora – Homes and Communities’ (formerly known as Housing New Zealand) – councils would be able to reduce their debt burden by taking a maintenance item off their budget while still ensuring that those who need social housing are able to access it.
With Apple starting to ship its own SoCs based on the ARM ISA, there are growing rumours of Microsoft developing its own ARM SoCs, and rumours that other big players such as AMD and Intel may be looking at developing their own ranges of ARM-based SoCs which would licence the ARM ISA but use their own custom microarchitecture (much like what Apple does). The rumour regarding Microsoft, at least from my point of view, is probably off base in that I don’t see Microsoft creating its own custom ARM-based chip as the following article suggests (link), but I could imagine Microsoft teaming up with Qualcomm in much the same way that it teamed up with AMD to create the custom chipset found in the Xbox.
There is a lot of potential for Microsoft, but I think the ‘great undoing’ will be Microsoft’s refusal to deal with the decades of legacy and use ARM as an opportunity to jettison old technology and create a modern platform (see my post regarding the EdgeOS concept) that is optimised for ARM, then gradually expand it beyond ChromeOS-competing devices into a future ARM-based Xbox, ARM-based servers etc. One understands the need to maintain backwards compatibility in the short term to allow a smooth transition, but the damage to a platform is a lot greater when there is no coordinated process to move it forward – a methodical process where old technologies are deprecated, then support is removed from the SDK (but existing binaries can still run), and then eventually the technology is removed from the platform. If there is a methodical process then companies can plan for the future – they know what is going to be removed, and the replacement is either announced or Microsoft says that it isn’t going to provide such a technology (it might decide that it is outside the scope of an operating system vendor because it is very much a niche, so the cost vs. benefit doesn’t balance up).
It’ll be interesting to see what Oracle ends up doing given that Fujitsu has put an end to its own SPARC range of CPUs in favour of the new ARM-based CPU that powers its latest supercomputer (link) – or whether Oracle decides to go ‘all in’ with x86-64. When it comes to Apple’s own servers – will we see Apple make use of its own SoCs there, perhaps taking the macOS core and scaling it up around an ARM-based SoC, then matching that with a powerful neural processor to help with machine-learning modelling, particularly as Apple invests in new cloud applications?
As the year draws to a close it is interesting how quiet the ‘Plan B’ and ‘we should be more like Sweden’ crowds have been while we have been enjoying the hard-won freedom of having eliminated community transmission of COVID-19, which enables us to enjoy the holidays. I always find it funny how the ‘our plan could have done it better’ crowd never eats that much-needed humble pie, and I don’t expect them to any time soon because it appears that those who are given a platform are quick to make claims but very slow (if at all) to show up and admit they got it wrong. I look over at what is unfolding in Europe – Sweden, which has a population twice the size of New Zealand’s, has had 7993 total deaths, which translates to 3997 adjusted for population, compared to New Zealand’s actual total of 22. For those who say, “but your population is 5 million spread out” – incorrect: 60% of New Zealanders reside in 4 cities and 90% reside in 8 cities. In other words, we’re hardly ‘spread out’ and are instead very much an urban population.
The ‘Plan B’ types also said it would destroy our economy – well, we had two quarters of contraction (Q1 -1.6%, Q2 -12.2%) followed by a massive rebound (+14.0%), so I’m sitting here wondering once again whether we’re going to hear the doomsayers and ‘Plan B’ types come out and eat a good helping of humble pie, especially when one considers what is happening in the UK with the new strain, a second wave going across Europe, the outbreak in NSW (Australia) and the continuing disaster that is occurring in the United States. Disaster unfolds and the very people who have been spreading false information disappear the moment they’re meant to face the music. I hope that this serves as a warning to the public in future to remember those who reported news (RNZ) and those who were more interested in pushing a narrative (NZME, Newshub etc.).
I know it all sounds rather negative, but it is frustrating when those who are given platforms – and, as a result of those platforms, the ability to sway public opinion on a given matter – spread false information. I think it is important that they are held accountable, even if it is purely in the ‘court of public opinion’.
I watched a video about Chrome once again causing pain to end users, but this time it is in the form of the updater that resides in the background resulting in high CPU utilisation by WindowServer. Rather than this being an example of a bug, I feel it is an example of a much larger issue at play, one which goes right back to when Google decided to take WebKit in a different direction by forking the code, creating the ‘Blink’ browser engine which sits at the heart of Chrome.
It comes down to two areas where WebKit and Blink diverge. The first is that although Chrome was the first to deliver a multiprocess model, it was delivered via ‘brute force’, whereas WebKit2 brought the multiprocess model into the WebKit platform itself (link). The net result of the design differences is a browser that is lightweight and whose memory usage doesn’t go through the roof the more tabs that are open.
The second area is the difference in how they develop their code bases. From the outside it appears that Google has an ‘add features now, sort out the lack of optimisation later’ approach, whereas with Apple there are constant benchmarks after each nightly build to check not only that there have been no compatibility regressions but also that there have been no performance regressions. Although Apple hasn’t added as many HTML5 features as Google, the upside is that when they are added they don’t bloat Safari, meaning that features are added with no (or no noticeable) performance hit.
Back to how things are going in my life – I’m 90% of the way to getting holiday presents sorted for my nieces, with the only thing left being to wait for the dessert to be delivered on Tuesday next week; once that arrives I’ll buy some alcohol for the BBQ and get-together with family. The eldest one is easy to buy for, the youngest is also easy to buy for, but the middle one – there is a reason why I prefer being the ‘cool single uncle’; once the middle one gets old enough I’ll start giving vouchers.
One thing that I wasn’t expecting this morning was the release of updates from Apple – iOS 14.3 (which updated the modem firmware from 1.14.06 to 1.31.03-05, with the Spark carrier settings updated from 44.0 to 45.0). So far things have been very reliable. When it came to macOS and tvOS (along with iOS) I did a clean install and everything is very robust.
Before heading into the weekend Apple released the release candidates (of what will eventually be released to the public, assuming there are no show-stopper bugs) of iOS 14.3 and macOS 11.1 (along with all the other Apple platforms). If they are going to release them next week then it’ll most likely be on Wednesday New Zealand time (Tuesday California time). So far things have been pretty stable apart from a really weird quirk on macOS 11 where there are playback hiccups when playing YouTube videos, but it doesn’t occur on any other service. Oh, and another interesting thing: on tvOS, VP9 video is paired with the Opus audio codec, but on macOS 11 VP9 video is paired with AAC audio.
It is interesting to see how things are working out post-M1 release, with Qualcomm talking about the release of the M1 being the start of a transformation within the IT sector. Personally, I think that Qualcomm could develop something that gets close to what Apple offers – it would beat Intel but sit behind Apple Silicon – but the big problem is Microsoft. If Microsoft really wants to make the ARM market mainstream then it needs to get its software sorted out, and it can do that by launching a legacy-free version of Windows but not calling it that.
What I mean by that is that Microsoft should take Windows 10 and strip out all the legacy – hardware-accelerated scroll bars have no place inside a kernel. After stripping the kernel back to the essentials I would then replace the win32 userland with a BSD subsystem that is made fully UNIXv7 compliant – yes, with pthreads and all that wonderful goodness – with the icing on the cake being that it embraces LLVM’s libc++, with a long-term goal of replacing the BSD libc with the llvm-libc that is being developed, along with using the LLVM toolchain and converting from PE (which is a derivative of COFF) to ELF. On top of that you gradually migrate technology over, like DirectX and WinUI, in much the same way that Apple sits its higher-level layers on a BSD core, then eventually get it to a point where it is a ChromeOS alternative – but called EdgeOS. The different name would denote that it isn’t compatible with Windows, so as to avoid a repeat of Windows RT.
Then over time Microsoft can build on top of the BSD/UNIXv7 base a more sophisticated set of higher-level frameworks, taking the parts of Windows 10 which make sense and bringing them over to EdgeOS, with the benefit of freeing those frameworks of the limitations imposed in the past by concerns about backwards compatibility. During this time Microsoft should be bringing its own software over so that it is native, then get EdgeOS to the point where it is highly optimised for ARM without hauling around legacy; and since most of the applications that businesses used to custom-write are being pushed into the cloud, the underlying operating system within large organisations becomes less important.
The benefit of such a system is that, rather than starting from scratch, you’re using time-tested code while also enabling the platform to move forward. Getting rid of win32 gets rid of irritating limitations like MAX_PATH; embracing an open standard like UNIXv7 would open up a huge array of open source software; and embracing LLVM would open up a large ecosystem that Microsoft could tap into, where Microsoft can focus on delivering a top-notch IDE and contributing work around debugging to the LLVM project, and developers get a great experience and a clear path forward rather than the current stagnation, which the rest of the industry is suffering from as a result.
Regarding what is happening in my life, this week coming up I’m going to pick up the last of the presents for the holidays and the following week I have the whole week off to relax at home, spend time with the family and maybe finally get around to giving the house a good top to bottom clean.
Well, the muffler on my scooter has been getting noisier and noisier, and I couldn’t quite work out what had caused it. I had a closer look at the muffler and it appears that there is a crack at the joint between the end part and the pipe that feeds into it. I bought some muffler tape (tested to withstand 2000°F) and secured it with some hose clamps – the net result is that it has been super smooth and super quiet. Long term I’m not sure what I’ll be doing but at the moment it is pretty sturdy:
There is a second part that needs to be repaired, which I’ll do tomorrow, but the first repair cut down the noise by at least 80%, and repairing the second hole will make it more or less what it was like 12 months ago. One of the benefits of removing the noise is being able to cruise along listening to music – before it was impossible but now it is possible, although I tend to be really cautious when I go right into Wellington – I turn off the music because I want to have 100% concentration on the road given the crazies that are out there.
Like a lot of Apple enthusiasts I have been following all the product reviews and unboxings, followed by deep dives into the Apple Silicon SoC (link), ever since the launch of the first wave of Apple Silicon based Macs in the multi-year transition that Apple has planned. For me, I’m waiting until next year to see the refreshed iMac and MacBook Pro, but what I think is interesting (as seen in the linked article) is the Apple Silicon SoC design and how it compares to a traditional CPU design, particularly when you look at how the SoC is made up of a series of specialised blocks rather than being a monolithic entity. I think this is also a reminder to developers of why, for years, Apple has been talking about the need to make use of the built-in technologies it provides – each one of those being highly optimised for the specialised blocks that make up the SoC. I think the interesting part will be how Apple uses its SoC in conjunction with its built-in webcams to get better results, in much the same way that Google has used machine learning in conjunction with a fairly basic camera setup to still produce great photos without needing heavyweight camera hardware.
As for what I am hoping for: I currently have a 27″ iMac, so I’d love to see Apple do with the iMac what it did with the 16″ MacBook Pro – take the existing form factor and stretch the screen to the edges – giving a near bezel-less design with a larger screen within the same form factor. I’ve currently got a 15″ MacBook Pro and although I probably don’t need as much power as it provides, I prefer having that power there when I need it, in much the same way that although I’m happy to settle on a 512GB SSD, I always upgrade to 16GB RAM, which has pretty much kept me happy for at least the last 5 years (when I built my mum an Intel NUC micro computer I put in 16GB RAM, resulting in a machine that has lasted the distance).
Ubiquiti has released UniFi Dream Machine firmware 1.8.3, which is the first stable update for the UDM – everything is going very well and as a result I’ve moved myself over to the stable branch rather than keeping with the beta branch. Ubiquiti is still working on the 5.x series firmware for my UAP-AC-HD. I’m still deciding whether I’ll move from Skinny broadband to Spark broadband – I think I’ll wait till next year to make that decision, but I’m happy with using Spark, especially given the clarity of calling on VoLTE when compared to the old way of falling back to 3G when making a call.
Rumours of the M1X are doing the rounds regarding the refreshed Apple Silicon based Macs. Rumour has it that they’re scheduled to be released in the second half of next year. It’ll be interesting to see how everything turns out, but I could imagine Apple moving to DDR5, which would give a huge amount of bandwidth for the more powerful GPU that’ll be included – required to run a 5K iMac display (not to mention the 16″ MacBook Pro display with support for an external display). I think the interesting part will be what the next version of macOS brings, given that the ability to load third-party kernel extensions will most likely be removed, and with that I also wonder whether the transition to ARM marks the beginning of pushing all of Apple’s drivers into user space so that the kernel is a lot smaller than it is today. I don’t see Apple ever getting to a ‘pure Mach kernel’, but I could imagine the kernel getting to the point where only the most essential code runs inside it (for the sake of performance and efficiency) with everything else residing outside.
The interesting part will be how the new Apple Silicon based CPUs perform, particularly when it comes to OpenGL games being recompiled for Apple Silicon. Although Apple announced that OpenGL, OpenCL and friends have been deprecated, I wonder whether we’ll see OpenCL support removed (it never really caught on) but OpenGL hanging around a lot longer, with bugs being fixed but no new features added (yes, I would love to see OpenGL all the way up to OpenGL 4.6 implemented on top of the Metal API, in much the same way that MetalKit sits on top of Metal, but I doubt it’ll happen) – if it hangs around then at least games such as SimCity 4 might even make their way over to Apple Silicon. Given that the OpenGL stack is based on LLVM (the change occurred around a decade ago), it’ll be interesting to see whether complete control over the hardware, drivers and software produces better results than the current situation of having limited access to drivers and hardware specifications while trying to optimise an OpenGL stack that operates in a consistent manner across Intel, AMD and nVidia based GPUs.