More speed Captain, more speed!

How do I get round this MAX OUT situation?



[screenshot: Task Manager showing CPU usage maxed out]
 
What's causing it?
 
GoPro editing
 
What's the CPU? And what does "GoPro editing" actually mean... what are you doing when this is happening, and with which software?
 
How do I get round this MAX OUT situation?
You need a much faster CPU. Video editing is always going to be CPU intensive.
 
I use the GoPro Studio software that comes with the GoPro.

The CPUs max out when it's converting at the final stage for YouTube etc.

I have two Intel Xeon E5520 2.26 GHz processors fitted + 24GB RAM
 
I use the GoPro Studio software that comes with the GoPro.

The CPUs max out when it's converting at the final stage for YouTube etc.

I have two Intel Xeon E5520 2.26 GHz processors fitted + 24GB RAM
The E5520 Xeon, like all the 55xx series, is (as I recall) a quad-core processor with hyperthreading, so just one of them normally shows as eight cores in Task Manager (my i7 desktop does, for example, and that's a quad core with hyperthreading; my workstation with two E5420s shows as 8 cores, as those are quad core without hyperthreading).

There are faster pin compatible processors, but it looks like you're doing something that is CPU limited, so don't expect the percentages to go down with a faster processor, just the time taken to complete the action.
 
You could bin the twin Xeons and get an i7 in there. Those E5520s are old and slow now in comparison, even in dual form.

For comparison, a 4th-gen i7 benches at 10k and a single E5520 at 4.5k.

That said, the software will most likely use 100% CPU while doing the encode anyway; the speed of the processor will just determine how long it stays at 100%.
 
The E5520 Xeon, like all the 55xx series, is (as I recall) a quad-core processor with hyperthreading, so just one of them normally shows as eight cores in Task Manager (my i7 desktop does, for example, and that's a quad core with hyperthreading; my workstation with two E5420s shows as 8 cores, as those are quad core without hyperthreading).

There are faster pin compatible processors, but it looks like you're doing something that is CPU limited, so don't expect the percentages to go down with a faster processor, just the time taken to complete the action.

That is a good shout actually; the E5520 is quad core (2 logical cores per physical core). Does Windows definitely show both CPUs? You should see them in System Properties.
 
If they don't both show, my bet is the BIOS has disabled hyperthreading and the second CPU. You may well be using a quarter of what you think you are using....
 
I did have HT disabled in the BIOS but have enabled it again, will do a test later. I can get all 16 cores showing in Task Manager.

A pair of i7's would be nice from Santa :)
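If you want to double-check what the OS actually sees after a BIOS change, a quick sketch (Python, assuming it's installed on the box):

```python
import os

# Logical processors visible to the OS. On a dual quad-core
# Xeon E5520 box with hyperthreading enabled in the BIOS this
# should report 16; with HT disabled, only 8.
logical = os.cpu_count()
print(f"Logical processors visible to the OS: {logical}")
```

Task Manager's performance tab should agree with this number.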
 
I'm not sure anyone makes a dual socket 1150 board.

Edit: Google says no. Socket 2011 (Xeon again), yes. A fast dual Xeon setup worth bothering with over a new-gen i7 is going to cost a few thousand.
 
Ah, what are the fastest CPUs I can do a straight swap with... or am I as well binning that idea and getting a new PC?
 
Depends what your board will handle; socket 1366 Xeons (which the E5520s are) aren't cheap though. Even the E5520s are £300 each.

Edit: if it were me I'd look at selling the Xeons and motherboards and funding an i7-5820K plus motherboard and memory, as the existing RAM likely won't be compatible (the i7-5820K benches at 12975, dual E5520s at 7589, by the way).
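A very rough back-of-envelope on those PassMark numbers, assuming encode time scales inversely with aggregate score (optimistic, since real scaling depends on how well the encoder threads, but fine for a ballpark):

```python
# PassMark figures quoted above
dual_e5520 = 7589    # dual Xeon E5520
i7_5820k = 12975     # i7-5820K

# Assumes encode time is inversely proportional to the
# benchmark score -- a ballpark, not a guarantee.
speedup = i7_5820k / dual_e5520
print(f"Estimated speedup: {speedup:.2f}x")

# So an encode that takes 10 minutes today would take roughly:
print(f"10 min encode -> ~{10 / speedup:.1f} min")
```

So around 1.7x faster, not a night-and-day difference, but the upgrade path and resale value arguably matter more here.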
 
If it were me I would be tempted with something like an i7-5820K and overclock it to 4GHz with a Corsair H100i to keep it cool. Stick plenty of DDR4 in (the 5820K doesn't take DDR3) and you're away really.
 
How long is it actually taking to encode your video?
Core iX processors are never multi-socketed. That capability is reserved for Xeon processors.

Depending on your Mobo/BIOS you could swap your XEONs for a pair of these:
http://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+W5590+@+3.33GHz

They are almost 1.5x as powerful and can probably be picked up on eBay for reasonable ££.
However, if you want to buy an i7 and sell your own rig, it would probably make an excellent hypervisor for my lab...
 
A 5min clip (320Mb) takes less than 5 mins

My 'Windows Experience Index' is 7.2 so I'll just live with that for a while yet
 
A cheap option would be to do what I did and buy a 2nd hand Xeon E3-1230 v2 CPU and Z77 motherboard for £130 which benches at 12k. Not bad for the money at all.
 
Depends what your board will handle; socket 1366 Xeons (which the E5520s are) aren't cheap though. Even the E5520s are £300 each.

Edit: if it were me I'd look at selling the Xeons and motherboards and funding an i7-5820K plus motherboard and memory, as the existing RAM likely won't be compatible (the i7-5820K benches at 12975, dual E5520s at 7589, by the way).

How would the Xeons compare to an i7 at something like running ESXi with multiple simultaneous guest OSes? My dual E5420 machine has been sacrificed to be an ESXi host, which it seems to handle admirably (albeit the RAM is bleedin' expensive). We have three virtualised instances of Server 2012 Essentials plus various Linux VMs all chugging along quite happily at the moment, and I have a number of other Windows OS images that I fire up from time to time for compatibility testing. But since my i7 (a 2600K) is my desktop at home, I'm not really prepared to bung another ESXi install onto it just to run a comparison.

If the "desktop" i7 processors will actually perform this role better than a pair of Xeons from a few years back, and can use much cheaper (albeit not ECC) RAM, then a rethink is in order.
 
That's a very open-ended question. I'd still choose the Xeons and ECC for business servers. If they're test machines/low-usage machines/home machines then it's not as important. My home microserver has a very poor low-power AMD unit within and it still runs the Server 2008 host with two 2008 VMs, an XP VM and a Unix-based firewall VM. BUT they're all low throughput and have a max of 3 simultaneous users.

So short answer is - depends (on use) :)
 
Pretty much what Neil said, except to add it depends on the variety of tasks the VMs are supporting. If they are a nice blend in terms of CPU/disk/memory intensity then you can get away with more.
The main benefit of running Xeon processors is the multi-socket architecture and that each processor can separately address its own bank of memory. My i7/mobo combination will max out at 32GB of RAM. The multi-socketed hypervisors we are using at work are currently sporting 96GB.
 
If the "desktop" i7 processors will actually perform this role better than a pair of Xeons from a few years back, and can use much cheaper (albeit not ECC) RAM, then a rethink is in order.
I run an i5 as an ESXi server. Running a virtualised Linux system, it benches comparably with running the same software on an i7 (when you factor clock speed/number of processors into the equation). The only things Xeons have over the i7 are typically more cache (2-3x) and support for properly virtualising hardware (aka pass-through or VT-d), which the -K processors didn't have on (at least) the -2xxx or -3xxx lines.

Personally, on a clock-for-clock basis, I think you would be hard pressed to tell the difference between a VM on a Xeon and one on an i7 - as long as you aren't hitting a point where what you are trying to do fits completely in the cache (or memory) on a Xeon server but doesn't on an i7.
 
Well, I think Neil et al have given better computer advice than I possibly could, but maybe you need a new kettle. That way you could leave the screen and brew a nice cup of tea in the time it takes to render down a short video clip. ;)

If you had another PC attached to the display via a T-switch, you could set off your video rendering on machine A and then switch to machine B to carry on doing other stuff while the rendering chugs away in the background.
 
I often use VMs created in VirtualBox to convert my DVDs to Xvid.

In my particular setup I often have 8 VMs running, each one converting a different episode of a series, or maybe 8 different films.

However, each VM is a stripped-down version of XP running in just 1GB of disk space and using 1GB of RAM, with the results being output to an external SSD.

It takes approx 1 1/2 to 2 1/2 hrs to convert eight 40-min episodes to Xvid at 640x360.

Once set up they're a piece of cake to run.
.
 
Well, I think Neil et al have given better computer advice than I possibly could, but maybe you need a new kettle. That way you could leave the screen and brew a nice cup of tea in the time it takes to render down a short video clip. ;)

If you had another PC attached to the display via a T-switch, you could set off your video rendering on machine A and then switch to machine B to carry on doing other stuff while the rendering chugs away in the background.

Almost exactly what I do - the i7 does the rendering and my slower PC is used for surfing the 'net etc.

Both are connected to my 40" LED TV via a KVM switch with a wireless keyboard and mouse, which swaps them over, and the usual zapper changes between the TV's inputs; both PCs use HDMI graphics cards.

Or if not having a cup of tea I can simply go to the USB channel on my TV and choose from a large variety of films on a 500 GB HDD plugged into the TV.
 
I often use VMs created in VirtualBox to convert my DVDs to Xvid.

In my particular setup I often have 8 VMs running, each one converting a different episode of a series, or maybe 8 different films.

However, each VM is a stripped-down version of XP running in just 1GB of disk space and using 1GB of RAM, with the results being output to an external SSD.

It takes approx 1 1/2 hrs or so to convert eight 40-min episodes to Xvid at 640x360.

Once set up they're a piece of cake to run.
.

Seems a bit extreme, why duplicate the overhead of 8 OS instances when all you want to do is multitask?
 
'Because I can' is a perfectly acceptable answer when it comes to such things of course :)
 
Seems a bit extreme, why duplicate the overhead of 8 OS instances when all you want to do is multitask?

Thought that was what I was doing.

Have you ever tried to render 8 40 min episodes of a TV series in one go?

This really is the easiest way to do it.

Because each one is running in their own space there is no interaction between them, no crossover problems or anything.
.
 
Thought that was what I was doing.
You are duplicating the overhead of 8 OSes going through a virtualising server. I'm 99.9999% sure they would render quicker in a single OS with each one given its own directory to work on. But then your setup is up to you ;)
 
PS. Yes, I have recoded more than one thing at once under a single OS.... but then I use a command line processor which is easy to run multiple times. Not sure about the software you are using....
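For anyone curious, running several encodes side by side under one OS doesn't need VMs. A minimal Python sketch, where `encode` is a placeholder (in a real script it would launch your encoder of choice as a subprocess, since the thread doesn't name one):

```python
from concurrent.futures import ThreadPoolExecutor

def encode(episode):
    # Placeholder for one encode job; in practice this would
    # launch the external encoder process and wait on it.
    return f"{episode} done"

# Hypothetical filenames for eight 40-minute episodes
episodes = [f"episode_{n:02d}.avi" for n in range(1, 9)]

# Threads are plenty here since each real job would be an
# external encoder process; all eight run side by side.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(encode, episodes))

print(results)
```

Same parallelism as eight VMs, without eight copies of Windows in the way.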
 
You are duplicating the overhead of 8 OSes going through a virtualising server. I'm 99.9999% sure they would render quicker in a single OS with each one given its own directory to work on. But then your setup is up to you ;)

I've tried that before but it is much easier this way, at least for me and the way I do it, and they are in effect rendering much faster than real time even using two-pass rendering. Eight 40-min episodes is equal to about 5 hrs 20 mins of footage, all rendered in 120 mins or so: about 2 1/2 times real-time rendering.
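The arithmetic behind that figure, for anyone checking:

```python
# Eight 40-minute episodes rendered in roughly 120 minutes
content_min = 8 * 40      # 320 minutes of footage
wall_min = 120            # approximate wall-clock time

ratio = content_min / wall_min
print(f"{content_min} min of video in {wall_min} min -> about {ratio:.1f}x real time")
```

So a shade over 2 1/2 times real time at the 120-minute mark, dropping towards 2x if a batch takes nearer 2 1/2 hours.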

PS. Yes, I have recoded more than one thing at once under a single OS.... but then I use a command line processor which is easy to run multiple times. Not sure about the software you are using....

'Fraid I've never got on with command-line anything, so this is the easiest for me; once set up, about 6 clicks in each VM starts the rendering.

EDIT: I'm using VirtualBox, free from Oracle.
.
 
I still think it would be quicker in one machine.
 
Ta for all the replies to my way-off-the-original-topic question...

Personally, on a clock-for-clock basis, I think you would be hard pressed to tell the difference between a VM on a Xeon and one on an i7 - as long as you aren't hitting a point where what you are trying to do fits completely in the cache (or memory) on a Xeon server but doesn't on an i7.

I've been doing some analysis of the loads on the ESXi server this morning (I should have been programming, but never mind). The roadblock we are likely to run into is the memory limit on i7 motherboards mentioned by afosoas; I haven't checked all of them, but it seems a common theme in the specs of the ones I've looked at, although some of the i7 CPUs seemingly can address 64GB. The other concern is drive head contention (probably not the right phrasing), as all the VM images are on a single (well, two in RAID1) SATA drive; I've noticed this already tbh. Despite the workstation chassis of the machine we're using being quite large, it only takes three 3.5" drives, so even fitting a RAID5 controller is not going to give that many options. We are barely using 10-20% of the processing power of its two E5420s most of the time according to the vSphere client resource monitoring, so the one thing we are not is massively CPU bound.

An R710 or something like that will be in order soon then :rolleyes: for room to grow, because the demands on the ESXi host are not going to diminish. Not because I want a new toy to play with, not at all :D. All my fault in the first place for saying that we needed to get involved with virtualisation (and ultimately just doing it of my own accord), as our customers kept asking about compatibility of our products with various hypervisors.
 