A14X Bionic allegedly benchmarked days before Apple Silicon Mac event | AppleInsider
“The single-core benchmark for the "A14X" scored 1634…
The multi-core benchmark for the "A14X" scored 7220…”
Is this good or bad?
Considering that, honestly, a benchmark tells me nothing.
Well, it tells me a lot.
@Tim, there are benchmarks and then there are benchmarks. Some aren't good for much and only measure a processor's brute force, but others, like Geekbench, or better still the PassMark tests, simulate a variety of real-world workloads to analyse how a machine would behave in a given situation.
In the case of Geekbench, all of the following are run for the CPU and Compute tests:
CPU Workloads
Cryptography Workloads
AES-XTS
The Advanced Encryption Standard (AES) defines a symmetric block encryption
algorithm. AES encryption is widely used to secure communication channels (e.g.,
HTTPS) and to secure information (e.g., storage encryption, device encryption).
The AES-XTS workload in Geekbench 5 encrypts a 128MB buffer using AES running in
XTS mode with a 256-bit key. The buffer is divided into 4K blocks. For each block, the
workload derives an XTS counter using the SHA-1 hash of the block number. The block is
then processed in 16-byte chunks using AES-XTS, which involves one AES encryption,
two XOR operations, and a GF(2^128) multiplication.
Geekbench will use AES (including VAES) and SHA-1 instructions when available, and fall
back to software implementations otherwise.
Superior AES performance can translate into improved usability for mobile devices. See,
e.g., the Ars Technica review of the Moto E.
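Two pieces of the scheme described above are easy to sketch with the standard library: deriving a 16-byte tweak from the SHA-1 hash of the block number, and the GF(2^128) "multiply by x" step applied between 16-byte chunks. This is an illustrative sketch, not Geekbench's code; the byte order of the block number is an assumption.

```python
import hashlib

def xts_tweak(block_number: int) -> bytes:
    """Derive a 16-byte XTS tweak from the SHA-1 hash of the block
    number (little-endian encoding is an assumption)."""
    return hashlib.sha1(block_number.to_bytes(8, "little")).digest()[:16]

def gf128_double(t: int) -> int:
    """Multiply a GF(2^128) element by x, reducing modulo the XTS
    polynomial x^128 + x^7 + x^2 + x + 1."""
    carry = t >> 127
    t = (t << 1) & ((1 << 128) - 1)
    if carry:
        t ^= 0x87
    return t
```

The doubling is how XTS advances the tweak from one 16-byte chunk to the next without recomputing a hash per chunk.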
Integer Workloads
Text Compression
The Text Compression workload uses LZMA to compress and decompress an HTML
ebook. LZMA (Lempel-Ziv-Markov chain algorithm) is a lossless compression algorithm.
The algorithm uses a dictionary compression scheme (the dictionary size is variable and
can be as large as 4GB). LZMA features a high compression ratio (higher than bzip2).
The Text Compression workload compresses and decompresses a 2399KB HTML ebook
using the LZMA compression algorithm with a dictionary size of 2048KB. The workload
uses the LZMA SDK for the implementation of the core LZMA algorithm.
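The same parameters can be reproduced with Python's standard `lzma` module; a minimal roundtrip sketch with a 2048 KB dictionary (the SDK wiring and buffer handling of the real workload are simplified away):

```python
import lzma

# LZMA2 filter chain with a 2048 KB dictionary, mirroring the
# dictionary size quoted above.
FILTERS = [{"id": lzma.FILTER_LZMA2, "dict_size": 2048 * 1024}]

def roundtrip(data: bytes) -> bytes:
    """Compress and decompress, returning the reconstructed bytes."""
    compressed = lzma.compress(data, format=lzma.FORMAT_XZ, filters=FILTERS)
    return lzma.decompress(compressed)
```

Because LZMA is lossless, the roundtrip must reproduce the input exactly; the benchmark measures how quickly that happens.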
Image Compression
The Image Compression workload compresses and decompresses a photograph using
the JPEG lossy image compression algorithm, and compresses and decompresses a CSS
sprite using the PNG lossless image compression algorithm.
The photograph is a 24 megapixel image, and the JPEG quality parameter is set to “90”, a
commonly used setting for users who desire high-quality images. JPEG compression is
implemented by the libjpeg-turbo library.
The CSS sprite is a 3 megapixel image. PNG compression is implemented by the libpng
and zlib-ng libraries.
Navigation
The Navigation workload computes driving directions between a sequence of destinations
using Dijkstra's algorithm. Similar techniques are used to compute paths in games, to
route computer network traffic, and to route driving directions. The dataset contains
216,548 nodes and 450,277 edges with weights approximating travel time along the road
represented by the edge. The route includes 13 destinations. The dataset is based on
Open Street Map data for Ontario, Canada.
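The shortest-path computation at the core of the workload can be sketched with a standard heap-based Dijkstra (a generic version, not Geekbench's implementation; the graph encoding is an assumption):

```python
import heapq

def dijkstra(graph, start):
    """graph: {node: [(neighbour, travel_time), ...]}.
    Returns the shortest travel time from start to each reachable node."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already relaxed via a shorter path
        for neighbour, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist
```

Routing between 13 destinations then amounts to running searches like this over the 216,548-node road graph.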
HTML5
The HTML5 workload models DOM creation from both server-side rendered (SSR) and
client-side rendered (CSR) HTML5 documents. For the SSR document, the HTML5
workload uses the Gumbo HTML5 parser to create the DOM by parsing an HTML file. For
the CSR document, the HTML5 workload uses the Gumbo HTML5 parser to create the
DOM by parsing an HTML file, then uses the Duktape JavaScript engine to extend the DOM.
SQLite
SQLite is a self-contained SQL database engine, and is the most widely deployed
database engine in the world.
The SQLite workload executes SQL queries against an in-memory database. The
database is synthetically created to mimic financial data, and is generated using
techniques outlined in “Quickly Generating Billion-Record Synthetic Databases” by J. Gray
et al. The workload is designed to stress the underlying engine using a variety of SQL
features (such as primary and foreign keys) and query keywords such as: SELECT,
COUNT, SUM, WHERE, GROUP BY, JOIN, INSERT, DISTINCT, and ORDER BY. This
workload measures the transaction rate a device can sustain with an in-memory SQL
database.
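A miniature version of such a workload, using Python's built-in `sqlite3` module against an in-memory database (the schema and rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE transactions (
        id INTEGER PRIMARY KEY,
        account_id INTEGER REFERENCES accounts(id),
        amount REAL
    );
""")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(1, "checking"), (2, "savings")])
conn.executemany("INSERT INTO transactions VALUES (?, ?, ?)",
                 [(1, 1, 100.0), (2, 1, -25.0), (3, 2, 50.0)])

# A query exercising JOIN, SUM, GROUP BY and ORDER BY, as listed above.
totals = conn.execute("""
    SELECT a.name, SUM(t.amount)
    FROM transactions t JOIN accounts a ON a.id = t.account_id
    GROUP BY a.name ORDER BY a.name
""").fetchall()
```

The benchmark runs far larger synthetic datasets through queries of this shape and reports the sustained transaction rate.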
PDF Rendering
The Portable Document Format (PDF) is a standard file format used to present and
exchange documents independent of software or hardware. PDF files are used in
numerous ways, from government documents and forms to e-books.
The PDF workload parses and renders a PDF map of Crater Lake National Park at
200dpi. The PDF workload uses the PDFium library (which is used by Google Chrome to
display PDFs).
Text Rendering
The Text Rendering workload parses a Markdown-formatted document and renders it as
rich text to a bitmap. The Text Rendering workload uses the following libraries as part of
the workload:
• GitHub Flavored Markdown, used to parse the Markdown document.
• FreeType, used to render fonts.
• ICU (International Components for Unicode), used for boundary analysis.
The Text Rendering workload input file is 1721 words long, and produces a bitmap that is
1275 pixels by 9878 pixels in size.
Clang
Clang is a compiler front end for the programming languages C, C++, Objective-C,
Objective-C++, OpenMP, OpenCL, and CUDA. It uses LLVM as its back end.
The Clang workload compiles a 1,094 line C source file (of which 729 lines are code). The
workload uses AArch64 as the target architecture for code generation.
Camera
The Camera workload simulates some of the actions a camera application or photo-sharing
social network application might perform. The Camera workload simulates
applying a filter to an image and preparing it for upload:
• Crop an image to a square aspect ratio.
• Load a filter definition from a JSON file and execute the individual filter operations:
  • Adjust image contrast.
  • Blur the image.
  • Composite a vignette effect onto the image.
  • Composite a border onto the image.
• Compress the output image into a JPEG file.
• Compute the SHA-2 hash of the JPEG file.
The Camera workload also simulates preparing photos for display in a UI picker by
generating thumbnails for each image:
• Query a SQLite database to determine which images are missing thumbnails.
• Generate a thumbnail with a longest edge of 224 pixels.
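The thumbnail sizing rule ("longest edge of 224 pixels") can be sketched as a small helper (an illustrative function, not Geekbench's code):

```python
def thumbnail_size(width: int, height: int, longest_edge: int = 224):
    """Scale (width, height) so the longest edge equals `longest_edge`,
    preserving the aspect ratio."""
    scale = longest_edge / max(width, height)
    return max(1, round(width * scale)), max(1, round(height * scale))
```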
Floating Point Workloads
N-Body Physics
The N-Body Physics workload computes a 3D gravitation simulation using the Barnes-Hut
method. To compute the exact gravitational force acting on a particular body x in a field of
N bodies requires N − 1 force computations. The Barnes-Hut method reduces the number
of force computations by approximating as a single body any tight cluster of bodies that is
far away from x. It does this efficiently by dividing the space into octants — eight cubes of
equal size — and recursively subdividing each octant into octants, forming a tree, until
each leaf octant contains exactly one body. This recursive subdivision of the space
requires floating point operations and non-contiguous memory accesses.
The N-Body Physics workload operates on 16,384 bodies arranged in a “flat” galaxy with a
massive black hole at its centre.
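The recursive octant subdivision can be illustrated with the helper that assigns a body to one of the eight child octants of a node (a sketch; the bit layout is an arbitrary choice):

```python
def octant_index(point, center):
    """Index (0-7) of the octant of `center` containing `point`:
    one bit per axis, set when that coordinate is >= the centre's."""
    x, y, z = point
    cx, cy, cz = center
    return (x >= cx) | ((y >= cy) << 1) | ((z >= cz) << 2)
```

Building the Barnes-Hut tree is repeated application of this test: each body descends through octants until it reaches a leaf of its own.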
Rigid Body Physics
The Rigid Body Physics workload computes a 2D physics simulation for rigid bodies that
includes collisions and friction. The workload uses the Lua programming language to
initialize and manage the physics simulation, and uses the Box2D physics library to
perform the actual physics calculations.
Gaussian Blur
The Gaussian Blur workload blurs an image using a Gaussian spatial filter. Gaussian
blurs are widely used in software, both in operating systems to provide interface effects,
and in image editing software to reduce detail and noise in an image. Gaussian blurs are
also used in computer vision applications to enhance image structures at different scales.
The Gaussian Blur workload blurs a 24 megapixel image using a Gaussian spatial filter.
While the Gaussian blur implementation supports an arbitrary sigma, the workload uses a
fixed sigma of 3.0f. This sigma translates into a filter diameter of 25 pixels by 25 pixels.
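The sigma-to-diameter relationship can be sketched by building the 1D kernel; the 4-sigma radius cutoff is an assumption, chosen so that sigma = 3.0 yields the 25-pixel diameter quoted above:

```python
import math

def gaussian_kernel(sigma: float):
    """Normalized 1D Gaussian kernel, truncated at a radius of
    4 * sigma (rounded)."""
    radius = round(4 * sigma)
    taps = [math.exp(-(i * i) / (2 * sigma * sigma))
            for i in range(-radius, radius + 1)]
    total = sum(taps)
    return [t / total for t in taps]
```

A 2D blur is then typically applied as two passes of this separable kernel, one horizontal and one vertical.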
Face Detection
Face detection is a computer vision technique that identifies human faces in digital
images. One application of face detection is in photography, where camera applications
use face detection for autofocus.
The Face Detection workload uses the algorithm presented in “Rapid Object Detection
using a Boosted Cascade of Simple Features” (2001) by Viola and Jones. The algorithm
can produce multiple boxes for each face. These boxes are reduced to a single box using
non-maximum suppression.
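The non-maximum suppression step can be sketched in a few lines (a generic greedy IoU-based version; the overlap threshold is an assumption):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, threshold=0.5):
    """Keep the highest-scoring box, drop boxes that overlap it too
    much, and repeat with the remainder."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= threshold for j in keep):
            keep.append(i)
    return [boxes[i] for i in keep]
```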
Horizon Detection
The Horizon Detection workload searches for the horizon line in an image. If the horizon
line is found, the workload rotates the image to make the horizon line level.
The workload first applies a Canny edge detector to the image to reduce details, then
detects lines in the image using the Hough transform, and then picks the line with the
maximum score as the horizon. The workload rotates the image so the horizon line is
level in the image.
The input image is a 9 megapixel photograph.
Image Inpainting
The Image Inpainting workload takes an input image with an undesirable region (indicated
via a mask image) and uses an inpainting scheme to reconstruct the region using data
from outside the undesirable region.
The Image Inpainting workload operates on 1 megapixel images.
HDR (this one often fails for me in multi-core on my Sagar Hozkatua)
The HDR workload takes four standard dynamic range (SDR) images and produces a
high dynamic range (HDR) image. Each input image is 3 megapixels in size. The HDR
workload uses the algorithm described in the paper, "Dynamic Range Reduction inspired
by Photoreceptor Physiology" by Reinhard and Devlin, and produces superior images as
compared to the tone mapping algorithm in Geekbench 4.
Ray Tracing
Ray tracing is a rendering technique. Ray tracing generates an image by tracing the path
of light through an image plane and simulating the effects of its encounters with virtual
objects. This method is capable of producing high-quality images, but these images come
at a high computational cost.
The Ray Tracing workload uses a k-d tree, a space-partitioning data structure, to
accelerate the ray intersection calculations.
The Ray Tracing workload operates on a scene with 3,608 textured triangles. The
rendered image is 768 pixels by 768 pixels.
Structure from Motion
Augmented reality (AR) systems add computer-generated graphics to real-world scenes.
The systems must have an understanding of the geometry of the real-world scene in order
to properly integrate the computer-generated graphics. One approach to calculating the
geometry is through Structure from Motion (SfM) algorithms.
The Structure from Motion workload takes two 2D images of the same scene and
constructs an estimate of the 3D coordinates of the points visible in both images.
Speech Recognition
The Speech Recognition workload performs recognition of arbitrary English speech using
PocketSphinx, a widely used library that uses HMM (Hidden Markov Models).
Using speech to interact with smartphones is becoming more popular with the introduction
of Siri, Google Assistant, and Cortana, and this workload tests how quickly a device can
process sound and recognize the words that are being spoken.
Machine Learning (this one often fails for me in multi-core on my Sagar Hozkatua)
The Machine Learning workload is an inference workload that executes a Convolutional
Neural Network to perform an image classification task. The workload uses MobileNet v1
with an alpha of 1.0 and an input image size of 224 pixels by 224 pixels. The model was
trained on the ImageNet dataset.
Compute Workloads
Sobel
The Sobel operator is used in image processing and computer vision for finding edges in
images.
The Sobel workload converts an RGB image to greyscale and computes the Sobel
operator for the greyscale image. The operator uses two integer convolutions (one for
horizontal edges, and one for vertical edges) to compute the final image.
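The two convolutions can be sketched directly in pure Python (an illustrative implementation for interior pixels only, with no border handling):

```python
def sobel_magnitude(img):
    """img: 2D list of greyscale values. Returns the gradient magnitude
    for interior pixels using the two 3x3 Sobel kernels."""
    gx_k = [(-1, 0, 1), (-2, 0, 2), (-1, 0, 1)]   # horizontal edges
    gy_k = [(-1, -2, -1), (0, 0, 0), (1, 2, 1)]   # vertical edges
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```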
Canny
Like the Sobel operator, the Canny algorithm is an edge detector. However, Canny is
significantly more complex as it uses a multi-stage algorithm to find edges in images:
1. Apply a Gaussian Blur to the image to remove noise.
2. Calculate the gradients of the image.
3. Apply non-maximum suppression to remove spurious edges.
4. Finalize edges by removing uncertain edges that are not connected to certain edges.
Stereo Matching
The Stereo Matching workload constructs a depth map from a pair of images taken from
the same scene.
The Stereo Matching workload uses a block matching algorithm to find each pixel's
disparity and generate the map. The algorithm matches each block of pixels in one image
to the closest block in the second image, using the Sum of Absolute Differences (SAD)
as the metric for how close two blocks of pixels are to one another.
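The SAD search can be sketched in one dimension, along a single scanline (a toy version; the block size and disparity range are assumptions):

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length pixel blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def best_disparity(left_row, right_row, x, block=3, max_disp=8):
    """Horizontal shift of the block at `x` in the left scanline that
    best matches the right scanline (smallest SAD wins)."""
    ref = left_row[x:x + block]
    candidates = range(0, min(max_disp, x) + 1)
    return min(candidates,
               key=lambda d: sad(ref, right_row[x - d:x - d + block]))
```

Larger disparities correspond to closer objects, which is how the per-pixel disparities become a depth map.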
Histogram Equalization
Histogram equalization is a method in image processing of contrast adjustment using the
image's histogram. The Histogram Equalization workload performs this adjustment on a
2576 × 3872 image.
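The CDF-based remapping at the heart of histogram equalization can be sketched for a flat list of grey levels (a textbook formulation, not Geekbench's code):

```python
def equalize(pixels, levels=256):
    """Histogram-equalize a flat list of integer grey levels by
    remapping each level through the cumulative histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [0] * levels, 0
    for i, count in enumerate(hist):
        running += count
        cdf[i] = running
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    scale = (levels - 1) / (n - cdf_min) if n > cdf_min else 1
    return [round((cdf[p] - cdf_min) * scale) for p in pixels]
```

The effect is to spread the most frequent grey levels over the full output range, which increases global contrast.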
Gaussian Blur
The Gaussian Blur workload blurs an image using a Gaussian spatial filter. Gaussian blurs
are widely used in software—both in operating systems to provide interface effects, and in
image editing software to reduce detail and noise in an image. Gaussian blurs are also
used in computer vision applications to enhance image structures at different scales.
While the workload's implementation supports an arbitrary sigma, the workload uses a
fixed sigma of 3.0f. This sigma translates into a filter diameter of 25 pixels by 25 pixels.
Depth of Field
The Depth of Field workload computes a lens blur “depth of field” image given two images
— a depth image and a colour image. It accomplishes this by applying a blur to each pixel
of the colour image that is proportional to the distance of the pixel from the focal point (as
determined by the depth image).
Face Detection
Face detection is a computer vision technique that identifies human faces in digital
images. One application of face detection is in photography, where camera applications
use face detection for autofocus.
The Face Detection workload uses the algorithm presented in “Rapid Object Detection
using a Boosted Cascade of Simple Features” (2001) by Viola and Jones. The algorithm
can produce multiple boxes for each face. These boxes are reduced to a single box using
non-maximum suppression.
Horizon Detection
The Horizon Detection workload searches for the horizon line in an image. If the horizon
line is found, the workload rotates the image to make the horizon line level.
The workload first applies a Canny edge detector to the image to reduce details, then
detects lines in the image using the Hough transform, and finally picks the line with the
maximum score as the horizon.
Feature Matching
The Feature Matching workload finds a match of keypoints (or features) between two
images using the ORB algorithm. ORB is used by other algorithms (such as Structure
from Motion) to find the matching keypoints across the two images that are then used to
generate the 3D map of those points.
Particle Physics
Particle physics simulations are used for many applications including the simulation of
fluids and smoke for games.
The Particle Physics workload implements a simulation where particles interact with one
another and their environment via elastic collisions. Other particle-particle forces are
ignored. The Particle Physics workload uses 4,096 particles in its simulation.
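The elastic-collision update for a head-on two-particle contact can be sketched from conservation of momentum and kinetic energy (a 1D simplification of what the workload resolves per collision):

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Post-collision velocities for a head-on elastic collision,
    conserving both momentum and kinetic energy."""
    total = m1 + m2
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / total
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / total
    return v1p, v2p
```

For equal masses the particles simply exchange velocities, a useful sanity check on any implementation.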
SFFT
FFT (Fast Fourier Transform) decomposes an input signal into a linear combination of a
basis of trigonometric polynomials. FFT is a core algorithm in many signal-processing
applications.
The SFFT workload executes an FFT on a 32MB input buffer operating in 1KB chunks. This
is similar to how FFT is used to perform frequency analysis in an audio processing
application.
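The chunked transform can be illustrated with a minimal radix-2 Cooley-Tukey FFT (a textbook recursive version, far from the optimized implementation a benchmark would use):

```python
import cmath

def fft(signal):
    """Recursive radix-2 Cooley-Tukey FFT; the input length must be a
    power of two."""
    n = len(signal)
    if n == 1:
        return list(signal)
    even = fft(signal[0::2])
    odd = fft(signal[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out
```

Processing a long buffer in 1KB chunks, as the workload does, amounts to running a transform like this over each chunk in turn, much like a short-time frequency analysis in an audio application.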