It turns out that printf in the XNU kernel (see osfmk/kern/printf.c) does not implement all of the standard conversion specifiers. Specifically, I had assumed “%f” for doubles would work; it does not. As if kernel programming were not hard enough already.
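My workaround is to decompose the double by hand and print integers instead. A minimal sketch, assuming a kext context where touching floating point is permissible at all (it frequently is not in the kernel); print_double is my own hypothetical helper, not anything from XNU:

```c
#include <libkern/libkern.h>  /* declares the kernel printf() for a kext */

/* Hypothetical helper: print a double in fixed-point notation,
 * since the kernel printf has no "%f". Assumes the whole part
 * fits in an int. */
static void
print_double(const char *label, double value)
{
    const char *sign = (value < 0.0) ? "-" : "";
    double mag = (value < 0.0) ? -value : value;
    int whole = (int)mag;
    int frac = (int)((mag - (double)whole) * 10000.0 + 0.5); /* 4 places, rounded */

    if (frac >= 10000) {  /* rounding carried into the whole part */
        frac -= 10000;
        whole += 1;
    }
    printf("%s%s%d.%04d\n", label, sign, whole, frac);
}
```

Something like print_double("ratio = ", 3.14159) should then log “ratio = 3.1416”.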
I removed Dash and will likely avoid apps written by Bogdan Popescu until I feel I can trust him again. I relied on Dash for quickly accessing documentation from multiple projects, all within a ubiquitous, unified, and polished interface. Sadly, I no longer trust the app or its developer, so I will be seeking alternative ways to access documentation.
On October 5, Apple removed Dash from the App Store for fraudulent activity. I could not understand why a popular app would be involved with fraudulent reviews. Even after some of the oddities were explained in a statement by Bogdan, I feel we still lack the details that would allow me to trust him.
The tipping point in my decision not to trust Dash is that the app is not sandboxed. Although the App Sandbox would hamper Dash in some ways, I would feel much better knowing that Dash could not access my private information without my explicit permission. While I do not believe Bogdan is a bad actor, even if my belief were wrong, the App Sandbox would limit the damage such an actor could inflict.
I want to believe that Bogdan has only the best intentions. Applying correct App Sandbox entitlements to Dash would restore my trust in the app and its developer.
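For concreteness, opting in comes down to signing the app with an entitlements file that sets com.apple.security.app-sandbox, plus only the narrow exceptions the app genuinely needs. A sketch of what a sandboxed Dash might declare; the two exception keys are my guesses at what a documentation browser would want, not anything Bogdan has published:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Opt in to the App Sandbox -->
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <!-- Guess: outgoing connections to download and update docsets -->
    <key>com.apple.security.network.client</key>
    <true/>
    <!-- Guess: read files the user explicitly picks (e.g. local docsets) -->
    <key>com.apple.security.files.user-selected.read-only</key>
    <true/>
</dict>
</plist>
```

Anyone can inspect what an app actually claims with codesign -d --entitlements - /Applications/Dash.app, which is how you can see that Dash is not sandboxed today.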
At the WWDC 2016 keynote event, Craig Federighi showed the new “Advanced Computer Vision” capabilities that will be integrated into the Photos app. Facial recognition (previously only part of Photos for macOS) is now supplemented with more general object and scene recognition. The end result is the ability to not just present a list of photos based on metadata from the time of capture (e.g. date, location), but new ways to automatically organize and search your photos given a better understanding of the subjects and context of each shot.
How can these computationally complex algorithms run efficiently without draining the battery of mobile devices?
Enter the Basic Neural Network Subroutines (BNNS). As part of the Accelerate framework, Apple has added new APIs for efficiently running artificial neural networks! From the reference documentation:
A neural network is a sequence of layers, each layer performing a filter operation on its input and passing the result as input to the next layer. The output of the last layer is an inference drawn from the initial input: for example, the initial input might be an image and the inference might be that it’s an image of a dinosaur.
While the new APIs make it easy to run a neural network efficiently, the crucial training must happen elsewhere: the user of the API supplies the already-learned weights.
BNNS supports implementation and operation of neural networks for inference, using input data previously derived from training. BNNS does not do training, however. Its purpose is to provide very high performance inference on already trained neural networks.
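To make that division of labor concrete, here is a minimal sketch of BNNS usage: a single fully connected layer in C, where the weight and bias constants are placeholders standing in for values learned during training done elsewhere (the layer sizes and values are mine, not from Apple’s docs):

```c
#include <Accelerate/Accelerate.h>
#include <stdio.h>

int main(void) {
    // A tiny fully connected layer: 4 inputs -> 2 outputs.
    // These constants stand in for weights learned during training,
    // which BNNS itself does not perform.
    static const float weights[2 * 4] = {
        0.25f, -0.5f,  0.75f, 0.1f,   // weights feeding output 0
       -0.3f,   0.6f, -0.2f,  0.4f,   // weights feeding output 1
    };
    static const float bias[2] = { 0.05f, -0.05f };

    BNNSVectorDescriptor in_desc  = { .size = 4, .data_type = BNNSDataTypeFloat32 };
    BNNSVectorDescriptor out_desc = { .size = 2, .data_type = BNNSDataTypeFloat32 };

    BNNSFullyConnectedLayerParameters layer = {
        .in_size    = 4,
        .out_size   = 2,
        .weights    = { .data = weights, .data_type = BNNSDataTypeFloat32 },
        .bias       = { .data = bias,    .data_type = BNNSDataTypeFloat32 },
        .activation = { .function = BNNSActivationFunctionRectifiedLinear },
    };

    // NULL filter parameters selects the library defaults.
    BNNSFilter filter = BNNSFilterCreateFullyConnectedLayer(&in_desc, &out_desc,
                                                            &layer, NULL);
    if (filter == NULL) {
        fprintf(stderr, "failed to create BNNS filter\n");
        return 1;
    }

    const float input[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float output[2] = { 0 };

    // Run inference: one forward pass through the layer.
    if (BNNSFilterApply(filter, input, output) == 0) {
        printf("inference: [%f, %f]\n", output[0], output[1]);
    }
    BNNSFilterDestroy(filter);
    return 0;
}
```

This should build with cc demo.c -framework Accelerate against the macOS 10.12 (or iOS 10) SDK, where BNNS first appears.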
It looks like WWDC Session 715 will cover the new neural network acceleration functions.
As for the new face, object, and scene recognition in Photos, where is Apple getting its training data?
Unfortunately, the use of the com.apple.vm.* entitlements seems to be restricted to Mac App Store apps only. I have not seen any documentation on how to obtain the relevant entitlements.
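For reference, claiming them in an entitlements file is the easy part; whether signing and provisioning will accept them is the open question. A sketch listing the two keys I am aware of, com.apple.vm.hypervisor (Hypervisor.framework) and com.apple.vm.networking (vmnet.framework):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Hypervisor.framework -->
    <key>com.apple.vm.hypervisor</key>
    <true/>
    <!-- vmnet.framework -->
    <key>com.apple.vm.networking</key>
    <true/>
</dict>
</plist>
```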