As an experiment Fractal eXtreme is now just $9.95. The latest version has a few bug fixes and performance improvements, is a free update to all existing users, and is cheap like borscht.

Fractal eXtreme has cost $34.95 for most of its life, and even after inflation this is still more than modern consumers want to pay for what is, ultimately, a toy. So, we’re going to try lowering the price. But you still get the same great features for that price:

- Highly optimized math, updated several times in the last few years, including support for 64-bit processors (4-5x faster than 32-bit), multi-core, etc.
- Fluid interface that smoothly animates in as you explore
- Zoom movies that can be almost arbitrarily long, are calculated extremely efficiently (240 times faster, with higher resolution), and can be played back with variable zoom rates without requiring recalculation
- And so much more…

There are also some changes in this month’s update.

- The zoom movie player was significantly optimized, especially the 64-bit version. The frame rate is often two to three times higher than before.
- A Vertical Sync option was added to the zoom movie player. This option synchronizes playback to the monitor refresh, which can make for more stable playback speeds. It also stops the zoom movie player from wasting resources trying to play back movies faster than the monitor can display them.
- A bug in the auto-explorer system that prevented calculations from continuing after saving an auto-explored image was fixed.
- Some bugs in the bilinear filtering code were fixed. These bugs could lead to crashes or slightly incorrect results.

Occasionally people will ask me what processor will run Fractal eXtreme (and, by extension, other high-precision math code) the fastest. It was fascinating to run the tests to find out. The results aren’t complete and they don’t account for price differences, but they let you do the calculations yourself, and run tests to verify the results.

Give it a try. You can download Fractal eXtreme here, and there’s more information about what makes it unique here.


## About brucedawson

I'm a programmer, working for Google, focusing on optimization and reliability. Nothing's more fun than making code run 10x faster. Unless it's eliminating large numbers of bugs.
I also unicycle. And play (ice) hockey. And juggle.

I imagine you wrote Fractal eXtreme when using a graphics card GPU for calculation was harder than it is now.

With DirectCompute and OpenCL and language support for same, do you have any plans for a version of Fractal eXtreme that does its high-precision-floating-point-using-integer-arithmetic on a GPU?

I don’t have any plans for using the GPU. My understanding is that integer math support on GPUs is weak. High-precision math requires an n-bit by n-bit multiply that gives a 2n-bit result, and integer add instructions that consume and produce a carry bit. I’m not aware of GPUs that have these capabilities. And, unless a GPU could do a 64×64 multiply it would be starting at a disadvantage relative to a 64-bit CPU.
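To show why those two primitives matter, here is a minimal Python sketch of schoolbook multi-precision multiplication on 64-bit limbs. This is an illustration, not Fractal eXtreme’s actual code; Python’s arbitrary-precision integers stand in for the hardware’s widening multiply and add-with-carry.

```python
# Schoolbook multi-precision multiply with 64-bit limbs (little-endian).
# The inner step relies on exactly the two primitives discussed above:
# a 64x64 -> 128-bit multiply, and carry propagation between limbs.
MASK = (1 << 64) - 1

def mp_mul(a_limbs, b_limbs):
    """Multiply two numbers given as little-endian lists of 64-bit limbs."""
    n, m = len(a_limbs), len(b_limbs)
    result = [0] * (n + m)
    for i, a in enumerate(a_limbs):
        carry = 0
        for j, b in enumerate(b_limbs):
            # The critical primitive: a full-width product, plus the
            # accumulated partial result and the incoming carry.
            full = a * b + result[i + j] + carry
            result[i + j] = full & MASK   # low 64 bits stay in place
            carry = full >> 64            # high 64 bits become the carry
        result[i + m] = carry             # final carry of this row
    return result

def limbs_to_int(limbs):
    """Convert a little-endian limb list back to a Python integer."""
    return sum(limb << (64 * k) for k, limb in enumerate(limbs))
```

Without a widening multiply and a carry chain, each of those inner steps has to be emulated from narrower operations, which is where GPUs lose ground.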

Larrabee would have been great for high-precision zooming, but I’m skeptical about other GPUs.

I guess it’s down to how good your graphics card(s) are. Some monster triple-SLI GT6xx could have thousands of cores. Brute-force parallelism makes up for a lot of inefficiency.

I take Bruce’s point about the lack of carry instructions; there are certainly ways around that, but they are slower than a true carry facility.

http://www.bealto.com/mp-mandelbrot_fp128-opencl.html

My own graphics card is a 9800GT; it will only run Eric Bainville’s Mandelbrot program as OpenCL GPU float. I’m guessing that means 32-bit integers. To deal with carries you would have to handle the high and low 16 bits separately, as a 16-bit multiplication can create a 32-bit result. So doing the equivalent of an SSE2 64-bit multiply would take ten 16-bit multiplications plus AND masking, bit shifts, and carry additions: at least 40 or so operations. Dividing by the 112 cores gives 2.8 64-bit multiplies per cycle, and the clock speed is 1.35 GHz, giving maybe 3.78 billion 64-bit multiplies per second. Compare that to a 3.2 GHz Pentium D with two cores each doing two 64-bit multiplications per cycle, which gives 12.8 billion 64-bit multiplies per second from the processor.
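The estimate above is easy to check. Here it is as a small Python calculation; the operation counts are the comment’s assumptions, not measurements.

```python
# GPU side (9800GT): ~40 16-bit ops to emulate one 64-bit multiply,
# 112 stream processors at 1.35 GHz (the commenter's assumptions).
gpu_ops_per_mult = 40
gpu_cores = 112
gpu_clock_hz = 1.35e9
gpu_mults_per_sec = gpu_cores / gpu_ops_per_mult * gpu_clock_hz
# roughly 3.78 billion 64-bit multiplies per second

# CPU side (3.2 GHz Pentium D): 2 cores, each doing two
# 64-bit multiplications per cycle.
cpu_clock_hz = 3.2e9
cpu_cores = 2
cpu_mults_per_cycle = 2
cpu_mults_per_sec = cpu_clock_hz * cpu_cores * cpu_mults_per_cycle
# 12.8 billion 64-bit multiplies per second
```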

I suspect Bruce has his eye on the Knights Corner Xeon Phi. Sadly, I just can’t see that becoming an enthusiast-level device.

http://www.extremetech.com/extreme/133541-intels-64-core-champion-in-depth-on-xeon-phi

I would assume that a ‘serious’ fractal enthusiast would have a six-core machine. Sandy Bridge can do a bit over one billion load/mul/add/adc/adc sequences per second (one every three cycles, assuming 3.0 GHz). See my measurement article for more math, but extrapolating it to the GPU is left as an exercise for the reader:

https://randomascii.wordpress.com/2012/03/28/fractal-and-crypto-performance/
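As a quick sanity check on that rate, and to show what it means for an actual high-precision multiply, here is the arithmetic in Python. The 8-limb extension at the end is my own illustrative number, not a figure from the article.

```python
# Sandy Bridge estimate: one load/mul/add/adc/adc limb step
# every three cycles at an assumed 3.0 GHz.
clock_hz = 3.0e9
cycles_per_limb_step = 3
limb_steps_per_sec = clock_hz / cycles_per_limb_step  # about one billion

# Illustrative extension (my assumption, not from the post): a p-limb
# schoolbook multiply needs roughly p*p limb steps, so at 8 limbs
# (512 bits of precision) one core manages on the order of:
p = 8
mults_per_sec = limb_steps_per_sec / (p * p)  # ~15.6 million multiplies/sec
```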

My hopes are dashed by the blog link you posted, Jeremy.

My reading of Eric Bainville’s GPU benchmarks is that a high-end GPU might just about match a high-end CPU for high-precision floating point, but far more likely will fall short.

Pingback: Windows Slowdown, Investigated and Identified | Random ASCII

What will happen with Fractal eXtreme? Can we expect any major updates in the future?

I plan to continue maintaining Fractal eXtreme, but I can’t promise major updates. We will see.

Hi,

I landed on this page, can’t see a link to actually buy/download FE.

Well, that’s foolish of me not to make that obvious. You can download FX from here:

http://www.cygnus-software.com/downloads/downloads.htm

When you run it you will be given a chance to try it or buy it. Let me know if it doesn’t work. For more information see various links from here:

http://www.cygnus-software.com/downloads/

Is there a way to utilize this software to manipulate a photo?

No. Fractal eXtreme generates images based on a set of formulae but it doesn’t load or manipulate other images.

Fractal eXtreme is a great program – very fast, easy to use and good value!

Hey Bruce,

The only major update I’d like to see is a way to use more than one PC to split up the rendering of really long movies. I have a 4×2-core Opteron server and two quad-core desktops doing nothing right now, and I would love to be able to put them to work on some of my really long zoom movies. Thanks for all the work you’ve been doing on this over the years.

Yep, multi-machine rendering is on my wishlist also. It would allow deep zooming to be pushed to even more ridiculous limits. Maybe some day…

Pingback: The Surprising Subtleties of Zeroing a Register | Random ASCII

On the subject of GPU computing: is there some advantage that GPUs have when rendering 3D fractals relative to 2D? I’ve been using 3D fractal software in OpenCL on an HD 7970 and it totally blew me away. However, I suspect there are limitations in accuracy or zoom depth…

GPUs are extremely fast, and when you throw in some 3D projections their advantage gets slightly greater. However their weak spot is always going to be precision. Writing high-precision code for a GPU is challenging and inefficient. So a GPU fractal program might look great at first, but once you zoom in it will either stop, lose quality, or slow down dramatically.

Here’s one data point for the new price; when I couldn’t find my old registration code I just bought another.

My wishlist item: exporting a zoom movie smaller than it was rendered, with anti-aliasing on the fly.

That is a nice advantage of the new price.

You can export a zoom movie smaller than it was rendered (I usually export larger) and there will be some bilinear filtering, but there isn’t any antialiasing beyond that. It’s not a bad idea, although I still recommend rendering at a relatively low resolution with antialiasing and then exporting at a higher resolution. For instance, render at 640×480 with antialiasing and then save the movie at 1280×960.

Ah yes, so you can. I was stuck on the size options in the menu and hadn’t tried simply resizing the window.

I do enjoy the zoom movies made the way you suggest, as long as I manage to keep my eyes near the middle. My eyes don’t want to do that, though; they want to follow the features I’m zooming past as they move to the edge, and keep expecting to see detail resolve as these features reach their largest size.

Displaying smaller than rendered seems to mostly solve these issues without antialiasing anyway, now that I’ve tried it.

As long as the original movie is rendered antialiased, scaling it down should work quite well. That is, scaling it down won’t make aliased source material magically beautiful, but it will prevent the blurring that comes from upscaling, which is what you want.

When is FX going to support perturbation and series approximation?

http://en.wikipedia.org/wiki/Mandelbrot_set#Perturbation_theory_and_series_approximation

I’m afraid I’m not working on FX very much anymore so I don’t know when I might get to this. It’s definitely a cool concept, but tricky to integrate into FX’s design.