Understanding Julia Allocations

When considering the allocations needed for a few Julia functions I was somewhat surprised by my original findings. First of all, @time and @allocated very often don’t yield the same result. Which one is more reliable? That’s an easy one once you find out that @allocated actually wraps the expression to measure into a function of its own, so as to avoid any side effects from the runtime. So it is definitely the more reliable of the two. You can, however, get the same result from @time with a simple construct like
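The construct itself is not shown in the surviving text; presumably it wraps the call into a zero-argument function and times a second call, so that neither global-scope effects nor compilation are measured. A sketch along those lines (`g` and `timed` are stand-in names, not from the original):

```julia
g(n) = collect(1:n)      # stand-in for the function whose allocations we want
timed() = g(1000)        # wrap the call into a function of its own
timed()                  # first call triggers compilation
@time timed()            # second call: only g's own allocations are reported
```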

The allocated memory is measured by requesting the total memory allocated before and after the run of the function and taking the difference (this also means that compilation on the first call is counted in). If you want more information about which objects actually survive in memory, try whos().
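The compilation effect is easy to see for yourself; a quick sketch (the exact byte counts printed depend on your Julia version):

```julia
square(x) = x * x
@time square(2.0)   # first call: the reported allocations include compilation
@time square(2.0)   # second call: the function itself allocates (almost) nothing
```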

To see how memory counts come about let’s look at some simple examples. First we define an identity operation and a few simple types:
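The original listing is missing here; a minimal reconstruction in current Julia syntax might look like the following (the field names and the 3-D layout are assumptions; in the Julia of the time, an immutable `struct` was spelled `immutable` and a mutable one `type`):

```julia
# An immutable 3-D position: three Float64 fields, 3 × 8 = 24 bytes of data.
struct Position
    x::Float64
    y::Float64
    z::Float64
end

# A line segment holding two positions.
struct Line
    startpt::Position
    endpt::Position
end

# The identity operation used in the examples below.
ident(p::Position) = p
```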

Now, although the LLVM IR code is exactly the same for the identity on a Position no matter how it was defined (and doesn’t contain any alloca statements at all), we can see that the resulting allocation overhead is different for the two calls.
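A sketch of the comparison, repeating the reconstructed definitions so the snippet is self-contained (on the old Julia version the post describes, the field access reportedly allocates 32 bytes while the direct call allocates nothing; recent Julia versions may well optimize both down to zero):

```julia
struct Position
    x::Float64; y::Float64; z::Float64
end
struct Line
    startpt::Position
    endpt::Position
end
ident(p::Position) = p

pos  = Position(0.0, 0.0, 0.0)
line = Line(pos, Position(1.0, 1.0, 1.0))

ident(pos); ident(line.startpt)          # warm up so compilation is not counted

println(@allocated ident(pos))           # identity on the position directly
println(@allocated ident(line.startpt))  # identity via field access of the immutable
```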

So why is that? Because Position is immutable, line.startpt has to create a new Position object. If Position were a mutable “type” instead, it would be the first call that allocates 32 bytes and the second that allocates none – think about that!
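The reversed behaviour for a mutable type can be sketched like this (using the modern `mutable struct` spelling for the old `type` keyword; `MPosition` and `MLine` are illustrative names, and the exact byte counts are version-dependent):

```julia
mutable struct MPosition
    x::Float64; y::Float64; z::Float64
end
mutable struct MLine
    startpt::MPosition
end
ident(p::MPosition) = p

mline = MLine(MPosition(0.0, 0.0, 0.0))
ident(mline.startpt)                        # warm up

# Constructing a mutable object allocates it on the heap...
println(@allocated MPosition(0.0, 0.0, 0.0))
# ...while accessing the field just hands back a reference, allocating nothing.
println(@allocated ident(mline.startpt))
```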

And why 32 bytes? Because a Position contains three Float64 values. A Float64, as its name suggests, takes 64 bits = 8 bytes, and 3×8 = 24 … no wait, something’s missing: Julia always takes care to align its heap allocations on multiples of 16 bytes. So the memory needed is rounded up to the next multiple of 16, which is 32 bytes. If our Position had been a 6-dimensional object (containing six Float64 values), the allocation would have required 8×6 = 48 bytes, rounded up to 64 bytes.
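The arithmetic can be checked with `sizeof`, which reports the raw data size before any allocator rounding (a sketch; the reconstructed Position definition from above is assumed, and `Position6` is a made-up 6-dimensional analogue):

```julia
struct Position
    x::Float64; y::Float64; z::Float64
end
struct Position6
    a::Float64; b::Float64; c::Float64
    d::Float64; e::Float64; f::Float64
end

println(sizeof(Position))    # 3 × 8 = 24 bytes of data
println(sizeof(Position6))   # 6 × 8 = 48 bytes of data
# The allocator then rounds each heap allocation up to the next multiple of 16:
# 32 and 64 bytes respectively.
```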