Speed and Safety | The C++ Alliance
Speed and Safety
Matt Borland
· Apr 6, 2026
In my last post I mentioned that the int128 library would be getting CUDA support in the future.
The good news is that the future is now!
Nearly all the functions in the library are available on both host and device.
Any function that has BOOST_INT128_HOST_DEVICE in its signature in the documentation is available for use.
An example of how to use the types in CUDA kernels has been added as well. These kernels can be as simple as:
using test_type = boost::int128::uint128_t;

__global__ void cuda_mul(const test_type* in1, const test_type* in2, test_type* out, int num_elements)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;

    if (i < num_elements)
    {
        out[i] = in1[i] * in2[i];
    }
}
Other Boost libraries are, or will be, beneficiaries of this effort as well. First, Boost.Charconv now supports boost::charconv::from_chars and boost::charconv::to_chars for integers on device. This can give you up to an order of magnitude improvement in performance; the results and benchmarks are available in the Boost.Charconv documentation.
Next, in the coming months Boost.Decimal will gain CUDA support as part of this effort.
We think users will benefit greatly from being able to perform massively parallel parsing, serialization, and calculations on decimal numbers.
Stay tuned for this likely in Boost 1.92.
In the meantime, enjoy the initial release of Decimal coming in Boost 1.91!
Alongside the performance we are looking to deliver in coming versions of Boost, we must not forget the importance of safety. There are plenty of examples of damage and death caused by arithmetic errors in computer programs.
Can we create a library that provides guaranteed safety in arithmetic while minimizing performance losses and integration friction?
How does one guarantee the behavior of one's types?
In our implementation, Boost.Safe_Numbers, we are investigating the use of the Why3 platform for deductive program verification. By pursuing these formal methods, safety can have real meaning.
We will continue to provide additional details on the formal verification page of our documentation.
Since the library will inevitably surface more errors (which is a good thing), we aim to fail as early as possible and, when we do, to provide the most helpful error message we can. For example, some static arithmetic errors are reported in as few as three lines:
clang-darwin.compile.c++ ../../../bin.v2/libs/safe_numbers/test/compile_fail_basic_usage_constexpr.test/clang-darwin-21/debug/arm_64/cxxstd-20-iso/threading-multi/visibility-hidden/compile_fail_basic_usage_constexpr.o
../examples/compile_fail_basic_usage_constexpr.cpp:18:22: error: constexpr variable 'z' must be initialized by a constant expression
18 | constexpr u8 z {x + y};
| ^ ~~~~~~~
../../../boost/safe_numbers/detail/unsigned_integer_basis.hpp:397:17: note: subexpression not valid in a constant expression
397 | throw std::overflow_error("Overflow detected in u8 addition");
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../examples/compile_fail_basic_usage_constexpr.cpp:18:25: note: in call to 'operator+({255}, {2})'
18 | constexpr u8 z {x + y};
| ^~~~~
1 error generated.
Our runtime error reporting is fundamentally built on Boost.ThrowException, so it can report not only the type, operation, file, and line, but also up to an entire stack trace when leveraging the optional linking with Boost.Stacktrace.
And not to leave our discussion of CUDA behind: the Safe_Numbers library will have CUDA support as well. One thing we will continue to refine is how error reporting is synchronized on device, since exceptions cannot be thrown there.
We are always looking for users of all the libraries discussed.
If you are a current or prospective user, feel free to reach out and let us know what you’re using it for, or any issues that you find.