Edius X and AMD 5950X and 5900X CPUs

  • Edius X and AMD 5950X and 5900X CPUs

    Anyone using the 5900x or 5950x with Edius X?

    Setting H.265/H.264 aside, what is timeline performance like in Edius X when using codecs that are CPU decoded, such as HQX, or codecs that utilise the GPU, such as PR/RAW etc.? Basically, any codec that doesn’t utilise QS via an Intel CPU.

    Compared to a typical Intel setup, does Edius X utilise the processing power of the new AMD CPUs as far as FX etc. are concerned? How does Edius X utilise all the cores/threads of these processors, especially the 16 cores/32 threads of the 5950X?
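For anyone wanting to probe the core-scaling question themselves, one rough sanity check is to time a workload that is known to parallelise and watch how it scales with worker count; a minimal Python sketch (the arithmetic task is a hypothetical stand-in, nothing Edius-specific):

```python
# Sketch: measure how a pure-CPU workload scales as worker count grows.
import multiprocessing as mp
import time

def burn(n: int) -> int:
    # Fixed-size integer workload; stands in for a CPU-bound render task.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed_run(workers: int, chunks: int = 8, size: int = 200_000) -> float:
    """Wall time to finish `chunks` equal tasks with `workers` processes."""
    start = time.perf_counter()
    with mp.Pool(workers) as pool:
        pool.map(burn, [size] * chunks)
    return time.perf_counter() - start

if __name__ == "__main__":
    for w in (1, 2, 4):
        print(f"{w} workers: {timed_run(w):.3f}s")
```

If timeline tasks in an NLE level off well before a synthetic workload like this does, that points to single-thread or decode limits rather than raw core count.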

    I’ve had a 5950X on back order now for ages so am still unable to test with Edius X. I’ll be pairing this CPU with an RTX 3080 or 3090 GPU as my main post system. While the system will be for a bunch of post things, Edius is by far my most used application and it would be great if it could utilise this new system.

    I appreciate that the GPU abilities of this system will be quite redundant with Edius, aside from NVENc and the odd filter but I’m quite hopeful that Edius X’s basic edit and timeline render functions would benefit from the AMD CPU with appropriate codecs.

    Probably like many other Edius users, I’m finding it quite depressing to look at building a new Intel system. Even though Intel’s Gen11 desktop CPUs are just around the corner, those CPUs are still based on older Skylake tech, and their performance won’t be much of an increase over Gen10, which wasn’t all that much better than Gen9.

    While proper GPU utilisation is where Edius X should be, as has been stated in many other posts, I’m wondering whether Edius X will at least utilise the new AMD CPUs, hence this post.

    Also, GV really should be answering these types of questions, and the other system-based questions that appear on the forum. Edius simply doesn’t have anywhere near the market share of the other pro NLEs, so there is nowhere near the same amount of user testing that happens with other NLEs. Finding answers about real-world use of Edius is therefore nowhere near as easy as with other pro NLEs, which is why GV should be far more active in supporting its users.

    Any input from GV would be greatly appreciated.
    "There's only one thing more powerful than knowledge. The free sharing of it"

    If you don't know the difference between Azimuth and Asimov, then either your tapes sound bad and your Robot is very dangerous. Kill all humans...... Or your tape deck won't harm a human, and your Robot's tracking and stereo imaging is spot on.

    Is your Robot three laws safe?

  • #2
    Like you, I’m also using an AMD 5950X, but my graphics card is a GTX 1050; since the 3080’s encode/decode unit isn’t much changed from the 1050’s, I sold my 3080 after using it for a while. Intel’s UHD 630 integrated graphics only helps accelerate decoding of 8- or 10-bit 4:2:0 H.264/H.265; 4:2:2 doesn’t work, and it can’t do anything for other formats either. I’m waiting for EDIUS to support AMD graphics cards or NVIDIA encoders/decoders, rather than just clinging to Intel’s QSV technology. Unless it supports new hardware, I won’t spend money upgrading to EDIUS X; I’m currently using EDIUS 9.52. I hope EDIUS keeps getting better and better, and I also hope the new EDIUS adds skin whitening, skin smoothing, face slimming and warp/deformer functions.



    • #3

      Just thought a Google translation might be helpful


      You and me, I also use AMD 5950X, but my graphics card is GTX1050, because the 3080 codec unit has not changed much from 1050, it has been sold by me for a while.
      Intel's UHD630 core graphics card is only useful for 8 or 10bit 420 H264 265 decoding acceleration, 422 is invalid, and other formats are also incapable.
      We are waiting for EDIUS to adapt to AMD graphics cards or NVIDI A codecs, instead of relying on Intel’s QSV technology.
      Unless I adapt to new hardware, I will not spend money to upgrade EDIUS X. I am currently using EDIUS 9.52.
      I hope EDIUS will do better and better.
      At the same time, I hope that the new EDIUS will add the functions of whitening, dermabrasion and face reduction and deformer.
      Edius 8.53WG, Vistitle 2.8, Windows 10 x64 Pro Fall Update, Asus Z87 Pro, Intel i7-4770K, 16 GB 1600 Corsair Vengence LP RAM, Samsung 840 Pro SSD 256GB, WD Black 2TB media drive, Intel HD 4600 GPU, MSI GTX660 2GB VGA, Coolermaster Silencio 652 case, Noctua NH-U12S CPU cooler, Cakewalk UA-25EX USB audio interface, Cakewalk MA-15D monitor speakers, BM Intensity Pro 4K, PlextorPX-LB950SA BD writer, Dell U2410 Monitor



      • #4
        Originally posted by John Hooper View Post

        Just thought a Google translation might be helpful


        You and me, I also use AMD 5950X, but my graphics card is GTX1050, because the 3080 codec unit has not changed much from 1050, it has been sold by me for a while.
        Intel's UHD630 core graphics card is only useful for 8 or 10bit 420 H264 265 decoding acceleration, 422 is invalid, and other formats are also incapable.
        We are waiting for EDIUS to adapt to AMD graphics cards or NVIDI A codecs, instead of relying on Intel’s QSV technology.
        Unless I adapt to new hardware, I will not spend money to upgrade EDIUS X. I am currently using EDIUS 9.52.
        I hope EDIUS will do better and better.
        At the same time, I hope that the new EDIUS will add the functions of whitening, dermabrasion and face reduction and deformer.
        Thanks John.

        I'd already used the Google translation, but it felt as if a lot might be getting lost in it. Plus, I'd made it very clear that my post was not to do with QS and H.264/5, yet the response is very focused on GPU/H.264/5 and not on the CPU processing of the new AMD CPUs that I'd clearly asked about.

        Given the lack of response, it does seem that there's very little experience among the Edius community with regard to using the 5950X CPU and how it stacks up against 9th and 10th Gen Intel for pure CPU-related tasks in Edius 10.

        Anton, if you are watching: how do the high core count X-series Intel CPUs stack up against the typical 9th/10th Gen desktop CPUs? I appreciate there's no iGPU/QS on the X series but, just as with my AMD question, what is the pure processing like, and does Edius 10 utilise all the cores and scale with the higher X-series core counts?

        Again, and just to reiterate the point: GV really should be giving examples of such things, or at the very least giving answers to some of these very simple questions.

        Edius users shouldn't have to play hit and miss with system specs for Edius, with the potential waste of money involved. Nor should they have to form self-help groups either.

        Cheers,
        Dave.



        • #5
          Dave,

          Happy New Year. I am interested in this processor as well. Since it sounds like you have one on order, when it arrives, you will be our sole source of information. I look forward to hearing about your results. I know you will give a thorough account of the performance in the absence of any company support. Please keep us updated. Thanks.
          Asus Prime X299-A - Intel i9 7900x all cores @4.3GHz 2 cores @4.5GHz - 32GB RAM - NVidia GTX1070 - Edius 9 WG - BM Intensity 4k - Boris RED - Vitascene 2 - Windows 10



          • #6
            Originally posted by Bassman View Post
            Dave,

            Happy New Year. I am interested in this processor as well. Since it sounds like you have one on order, when it arrives, you will be our sole source of information. I look forward to hearing about your results. I know you will give a thorough account of the performance in the absence of any company support. Please keep us updated. Thanks.
            Hi Tim, all the best for the coming year.

            This is actually the second 5950X that I’ve had on order; the first order didn’t seem to move with regard to its position in the pre-order queue, so I cancelled it. That said, the second one doesn’t seem to be faring much better. I could get a 5900X more easily but, despite the lack of proportional value between the two as far as cost per core is concerned, at this point I’m more interested in every last drop of processing power for Edius, with the cost of the CPU (to a certain degree) not being such a factor.

            The idea was to build a new machine with an RTX 3080 and the 5950X, as I would definitely benefit from those components in certain PC tasks and hopefully would get a nice surprise with Edius. The reality has been quite a bit different :) I couldn’t get a 3080 or even a 3090, so I’ve ended up with a 3070. Not that the 3070 is bad or anything; it’s actually a nice card and better than my older 2080 Ti in a number of ways. Besides, for the most part these high-end cards are a complete waste in Edius anyway, as it does not scale with GPU ability even in the handful of GPU-assisted tasks it has, and there’s no difference between a GTX 1660 and an RTX 3090 as far as Edius’ new NVENC ability is concerned.

            So for now I’m using a 9900K and the RTX 3070. I didn’t see the sense in going for the minor upgrade to a 10900K, as it would have meant a new motherboard as well, with no increase in PCIe bandwidth or speed.

            The big problem I have, which is something that’s not wasted on you and other serious Edius users, is that there’s absolutely no information whatsoever from GV as to the state of play with regard to system specs for Edius X.

            Why, as Edius users, do we not have a guide from GV that tells us what type of typical processing/real-time render abilities we should expect to see with various CPUs?

            Is there an advantage or not in going to a higher core count X-series Intel CPU over the usual desktop CPUs, regardless of the lack of iGPU/QS?

            Indeed, is it the case that the magic sauce for Edius is Intel’s QS on the appropriate desktop CPUs, and that it’s QS rather than the CPU’s processing power/cores/threads that gives us our perceived performance boost?

            With the previous point it’s obviously H.265 and H.264 that benefit from this, and they will be the most widely used codecs with Edius or any NLE, but what does this mean for the performance of other codecs that rely on the CPU?

            In the past Edius didn’t seem to scale at all well with multi-processor systems; is this still the same with Edius X when using architecturally compatible Xeons?

            Traditionally, Edius would seem to favour CPU speed on the main core(s) and almost ignore the extra cores/threads; is this still the same with Edius X?

            Is Edius’ own internal processing still weighting certain activities toward multiple cores while leaving other internal activities single-threaded? (As in, codec processing/handling compared to general NLE processing and exporting/encoding etc.)

            I use the most powerful GPUs with Edius and still get dropped frames in the playback UI with high resolution/frame rate projects; where’s GV’s GPU recommendation for this issue? (BTW, before anyone bleats on about a dedicated video output, try telling that to someone working in the field on a laptop, or to someone who doesn’t need to do that with other NLEs.)

            And with regard to AMD: what are the differences between their new CPUs and Intel ones with regard to processing ability with Edius X, given that they all adhere to the basic CPU architecture required to run Edius?

            Seriously, what type of manufacturer sells $1000 NLE software and doesn’t even give the buyer any clue whatsoever as to the system requirements for their latest version compared to all the previous versions? (Remember, Edius 10 is supposedly something new ‘under the hood’, so surely its system requirements are going to be different from previous versions.) (And yes, GV gave an indication as to certain instruction sets required for running 10, but that’s a far cry from recommended system specs for specific edit scenarios, or even a minimum system spec.)

            Also, what type of manufacturer sells their latest upgrade on the song and dance of a handful of useful features but doesn’t explain that one of those features becomes absolutely defunct above a certain resolution and that’s after using a certified system from the previous version of their flagship software? (I’m talking about this https://forum.grassvalley.com/forum/...ius-x-above-4k )

            And to address the elephant in the room: what are the real differences between 9 and 10?

            Given that 9 can now do NVENC (which, BTW, is a fantastic addition by GV for 9 users) and I’ve proven that Edius 9 and 8 can edit 8K, what is the difference with 10?

            Finally, just to round off these questions: when are we going to see proper/full GPU utilisation? Or just be honest, tell us that it’s not going to happen, and let people decide whether it’s best for them to move on to another NLE, now that GV appear to have reached the end of all the extra tinkering and bolt-ons they can do with Edius’ core architecture, which was quite deliberately designed to use the CPU and never designed to use the GPU. Some may think that last statement hyper-critical, but it’s well known that Edius historically derived its real-time advantage from being purposefully and deliberately written to take full advantage of CPU processing, so it could scale with better and more powerful CPUs further into its future. That, and a restricted chroma space and bit depth compared to the historic industry leader, Avid. Please, someone from GV, tell me that I’m wrong on that point; I would actually be the happiest person ever out of all the people who have made complete fools of themselves and been proven wrong.

            The list of questions is quite endless with regard to Edius X, and that’s just mine; I’m sure there are many other users out there with many more questions.

            Or maybe your typical Edius user isn’t quite as sophisticated as those using typical industry-standard NLEs and doesn’t really care for, or understand, the points being made about their chosen NLE. BTW, those without a sense of humour should just ignore that last sentence :)

            Happy new year everyone and I hope your next 12 months are better than your last 12 months.

            Stay safe and healthy.

            Cheers,
            Dave.



            • #7
              You all might need to realize that GV/Black Dragon are focused on news and sports gathering/studio suites, not small business users and sole proprietorships. We are the unwanted stepchild. I realize that is where the money is, so don't expect much from any of the GV representatives on here. I envision us being slowly weaned away as users of Edius. How many news studios do you think use the other programs for editing, PPro, Vegas, etc.? Not many, but that is just a guess. Comparing Edius to the other low-end applications and thinking that GV is worried about, or even cares about, this market is a waste of energy. My opinion is based on the responses from GV towards its legacy customers.
              Edius 8 Workgroup, Intel 3770K, Asus P8Z77-V Motherboard, 32GB DDR3 1600 ram, SSD for C, external Raid box with WE RE4 1TB drives Raid 0 for video assets. Overclocked 4.3ghz, Asus GTX 660 ti, water cooled system. Windows 10 Professional 64bit



              • #8
                I think you may be correct, Bruce. But I do understand that it is important for them to focus on the main revenue source, which includes hardware and software. It will be difficult for them to compete against Blackmagic, who sell more mid-range hardware as well as high end, and effectively give the software away with the hardware. I just got a Speed Editor which, in a twist, is given away free with a copy of Resolve Studio; it is restricted to use with just Resolve 17, so hardware locked to software. I am a long-time user of EDIUS (from the beginning), as well as Vegas (from its beginning too) and Resolve from Resolve 14. EDIUS has introduced a few things for Windows users not really available on the other platforms, ProRes RAW and ProRes export, neither of which I use of course, but they are there. Other than that, they have fallen way behind Vegas and Resolve in capability over the last few years. I upgraded to EDIUS X in the hope they would catch up, and still hope this is true.

                As to multicore CPUs: in EDIUS X my 12-core Threadripper is only marginally better than my 4-core 4790K on its own with QS. The Threadripper and 1080 Ti fly with Vegas or Resolve; unfortunately, there is a considerable performance difference.
                Ron Evans

                Threadripper 1920 stock clock 3.7, Gigabyte Designare X399 MB, 32G G.Skill 3200CL14, 500G M.2 NVME OS, 500G EVO 850 temp. 1T EVO 850 render, 6T Source, 2 x 1T NVME, MSI 1080Ti 11G , EVGA 850 G2, LG BLuray Burner, BM IP4K, WIN10 Pro, Shuttle Pro2

                ASUS PB328 monitor, BenQ BL2711U 4K preview monitor, EDIUS X, 9.5 WG, Vegas 18, Resolve Studio 17


                Cameras: GH5S, GH5, FDR-AX100, FDR-AX53, DJI OSMO Pocket, Atomos Ninja V x 2



                • #9
                  Thanks Dave. I and others are feeling your pain. I do not want to bash GV/Edius, as we have all been there before; it is frustrating as the months go by. My hope for the 5950X is that, with a decent overclock, one could expect all cores in the upper 4 GHz range and single-thread boost at 5 GHz, all paired with a ton of L3 cache and plenty of PCIe lanes. My 7900X has been a solid performer, but 4K brings a lot of demands. I will be interested to see whether the Threadripper variant of these current chips is any better for us or not. It seems Edius likes single-thread performance, but that is a problem with Intel systems, as their consumer motherboards are limited on PCIe lanes: a mismatch for our usage. The AMD specs seem to fit our uses much better, but the performance needs to be there. IMHO, 16 cores is plenty for what most of us do.

                  So, I hope your part comes in and you can shed light where the company has fallen short. If GPU assist ever shows up, Edius paired with a modern CPU will be set for quite some time into the future.



                  • #10
                    I do not think you need to overclock the AMD parts, as with their normal boost clocks the 5900X is 4.8 GHz and the 5950X is 4.9 GHz. I think the 5900X is a little faster than the 5950X, as it is the one picked for games. They are both faster than any of the Intel parts.
                    Last edited by Ron Evans; 01-02-2021, 01:54 AM.



                    • #11
                      I agree, Ron; that is one of the best aspects of AMD's approach. But I read about some kind of dynamic boost overclock that would keep all of the cores at an elevated speed and also allow the normal single-threaded boost. It resulted in a higher operating speed across all the cores, plus more single-thread performance, which sounds great. Basically, boost all the time.



                      • #12
                        Originally posted by Bassman View Post
                        Thanks Dave. I and others are feeling your pain. I do not want to bash GV/Edius, as we have all been there before; it is frustrating as the months go by. My hope for the 5950X is that, with a decent overclock, one could expect all cores in the upper 4 GHz range and single-thread boost at 5 GHz, all paired with a ton of L3 cache and plenty of PCIe lanes. My 7900X has been a solid performer, but 4K brings a lot of demands. I will be interested to see whether the Threadripper variant of these current chips is any better for us or not. It seems Edius likes single-thread performance, but that is a problem with Intel systems, as their consumer motherboards are limited on PCIe lanes: a mismatch for our usage. The AMD specs seem to fit our uses much better, but the performance needs to be there. IMHO, 16 cores is plenty for what most of us do.

                        So, I hope your part comes in and you can shed light where the company has fallen short. If GPU assist ever shows up, Edius paired with a modern CPU will be set for quite some time into the future.
                          Hi Tim.

                          I totally agree with what you’re saying; it’s just very frustrating not knowing if and where the performance boosts come from. My main question is probably: does Edius X take full advantage of all the cores on the 5950X?

                          As you’ve said though, Edius has traditionally benefited from single-thread performance. It’d be very helpful to find out whether this is still the case with Edius X, or whether that changes with different CPUs. If nothing much has changed with Edius, then I suppose the next 11th Gen Intel Core CPUs are going to be the better CPU.

                          I think the major obstacle to working out a lot of these things is H.264/H.265. Because most users, even TV stations, will be using these codecs, the effect of QuickSync is probably more noticeable than the power of the CPU. It’s easy to see why someone may think a particular Intel CPU is more powerful than a particular AMD CPU if they’re using H.264/5 source footage.

                          Because H.264/5 will simply not decode anywhere near as efficiently on any AMD CPU in Edius as on a QS-based Intel, it’s very easy to assume that the AMD CPU isn’t as powerful as the Intel one. The decoding of the H.264/5 footage has to happen first, before any playback/editing/FX are applied, so at that point the AMD CPU will still definitely look slower. Yet aside from that initial decode, the AMD CPU may well have more processing potential than the Intel CPU, which simply can’t be seen due to the initial decoding stumbling block. In fact, I’m quite sure this exact same thing is observed when using Intel CPUs that don’t have the iGPU and QS.
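The stumbling-block effect described above is essentially Amdahl's law: a stage that stays serial (such as software H.264 decode) caps the visible speedup no matter how many cores sit behind it. A small sketch, using an invented 50% decode share purely for illustration, not a measured Edius figure:

```python
def amdahl_speedup(serial_fraction: float, workers: int) -> float:
    """Upper bound on speedup when `serial_fraction` of the work
    cannot be spread across cores (e.g. a software decode stage)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

# With an assumed 50% of per-frame cost stuck in serial decode,
# even a 16-core 5950X cannot quite double throughput:
print(round(amdahl_speedup(0.5, 16), 2))  # 1.88
```

This is why a CPU with more cores can still look slower end to end: the decode stage hides the parallel headroom that would show up once decode is accelerated or removed.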

                          Actually, Tim, I’m going to start a new post about what I’ve just explained and give some examples of how people can test for true CPU performance. If enough people engage with that post then maybe, amongst the users, we can get some answers and pointers.

                          Assuming that I’m on the right track and not barking up the wrong tree, and that aside from H.264/5 the AMD CPUs actually are more beneficial than Intel for pure YUV (buffer) processing in Edius, there could be a big boost to Edius that may not be so difficult to implement. While I have my doubts that we will ever see a truly GPU-optimised version of Edius, I do think we will see H.264/5 decoding handed off to discrete GPUs; I suggest this with some confidence as we already have this function for some of the esoteric codecs in Edius. Now, if multicore AMD CPUs are indeed more beneficial to Edius, either desktop or workstation, and the only thing holding them back is H.264/5 decoding, then adding discrete-GPU H.264/5 decoding to the mix and using AMD CPU processing could be a great Edius option. If this is a future option, I doubt we’ll see AMD GPU H.264/5 decoding; I’d assume that would be Nvidia.

                          The sad thing here, though, is that all this frustration and speculation could easily be ended by GV addressing just a few simple questions. To be honest, Tim, talking through these posts, my frustration is slowly turning to anger. Honestly, it really does feel as if GV are getting a kick out of watching us all act like headless chickens. There is absolutely no need for them to be so hands-off when dedicated users are trying to better their Edius setups.

                          Cheers,
                          Dave.



                        • #13
                          Hey Dave, well put. I agree; this all comes down to two areas for me: playback performance (decoding) and output performance (encoding). Playback performance is what we all bitch about, and output performance is something we can work around! Yes, if the only thing GV does is implement some kind of playback (decode) assistance, we will all be happy. I have often wondered why Intel, or even motherboard manufacturers, have not created a decode chip that acts like a permanent Quicksync for video editors. It seems like a low-cost part to add, but apparently they don't care to address this market.

                          I believe the 5950X will excel at output (encoding) speeds. I think Edius would utilize all 16 cores for export, and we would all be happy with the time savings. So if there were just some help on the decode side, we would be golden. Like many others, I can't run an Intel consumer motherboard with only 16 or 20 PCIe lanes, with output cards and M.2 drives all vying for spots, so I am outside of the QS reach to begin with. For multicamera shoots, I am considering buying 4TB of NVMe storage to be used exclusively for HQX renders. I figure the best workflow would be to import footage, trim, align, maybe color correct, then batch export HQX files to the NVMe. Then import the HQX files and make a sequence for each camera with both original and HQX, so I could still edit the project once the HQX files are deleted. This is sort of a large temp-drive approach.

                          This is a workflow that exists today for $500 and would be better than any decode option ever would be. HQX edits so nicely, as long as you have the storage speed, that it takes the pressure off needing a mega processor for playback. Processor decisions could then be made more around output, which puts us in line with the industry, which is all about cores and more cores.

                          This approach can also be done with proxies, as you know. I come at this from being frustrated with multicam playback, needing to now shoot my shows in 1080p60, and kind of knowing/expecting that GPU assist would probably already have been announced if it were in the cards. I prefer the HQX workflow, as the files do not take very long to export and they edit so well once imported. So in my view, a 5950X along with fast storage and HQX files would be a dream system for Edius!
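As a back-of-envelope check on whether a drive can feed that multicam HQX workflow (the 600 Mbit/s per-stream figure below is an assumed ballpark for a UHD HQX export, purely for illustration; substitute the bitrate of your own files):

```python
def required_mb_per_s(streams: int, mbps_per_stream: float) -> float:
    """Sustained read rate in MB/s needed to play `streams` files
    of `mbps_per_stream` megabits per second each, concurrently."""
    return streams * mbps_per_stream / 8.0

# Four UHD HQX cameras at an assumed ~600 Mbit/s each:
print(required_mb_per_s(4, 600.0))  # 300.0 MB/s
```

Even a SATA SSD at roughly 500 MB/s clears that bar, which squares with the observation later in the thread that a QVO SATA drive kept up for multi-stream HQX editing.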



                          • #14

                            Originally posted by Bassman View Post
                            Hey Dave, well put. I agree; this all comes down to two areas for me: playback performance (decoding) and output performance (encoding). Playback performance is what we all bitch about, and output performance is something we can work around! Yes, if the only thing GV does is implement some kind of playback (decode) assistance, we will all be happy. I have often wondered why Intel, or even motherboard manufacturers, have not created a decode chip that acts like a permanent Quicksync for video editors. It seems like a low-cost part to add, but apparently they don't care to address this market.

                            I believe the 5950X will excel at output (encoding) speeds. I think Edius would utilize all 16 cores for export, and we would all be happy with the time savings. So if there were just some help on the decode side, we would be golden. Like many others, I can't run an Intel consumer motherboard with only 16 or 20 PCIe lanes, with output cards and M.2 drives all vying for spots, so I am outside of the QS reach to begin with. For multicamera shoots, I am considering buying 4TB of NVMe storage to be used exclusively for HQX renders. I figure the best workflow would be to import footage, trim, align, maybe color correct, then batch export HQX files to the NVMe. Then import the HQX files and make a sequence for each camera with both original and HQX, so I could still edit the project once the HQX files are deleted. This is sort of a large temp-drive approach.

                            This is a workflow that exists today for $500 and would be better than any decode option ever would be. HQX edits so nicely, as long as you have the storage speed, that it takes the pressure off needing a mega processor for playback. Processor decisions could then be made more around output, which puts us in line with the industry, which is all about cores and more cores.

                            This approach can also be done with proxies, as you know. I come at this from being frustrated with multicam playback, needing to now shoot my shows in 1080p60, and kind of knowing/expecting that GPU assist would probably already have been announced if it were in the cards. I prefer the HQX workflow, as the files do not take very long to export and they edit so well once imported. So in my view, a 5950X along with fast storage and HQX files would be a dream system for Edius!
                            Hi Tim.

                            Funny you mention the idea of having a QS chip or something similar built onto the motherboard, maybe into the chipset. I'd always thought a dedicated NVENC card by Nvidia would be a great idea: no GPU or video output, just a beefy NVENC processor, much like what you've suggested for QS but on a PCIe card. Maybe the reason we've not seen such things is that both are proprietary technologies with licences needed for their use; or maybe what you and I think are great ideas are just wishful thinking by fringe users of the tech, and it doesn't make much financial sense :)

                            About your PCIe bandwidth and storage quandary: I'm way behind on a load of my YouTube videos, including a bunch of Edius and storage-specific stuff (I was suffering from a stupid infection that had me bedridden for almost two weeks). I've been doing some tests, which I'll make videos about, on certain SSDs and specifically their speeds for editing. I'd moved to NVMe a while back and have been using Samsung 970 Plus drives. While these are great drives, averaging around 3500 MB/s R/W, they only seem to go up to 2TB, or at least that's all I've found. So I decided to buy a 4TB QVO SATA drive, thinking it would just be a reasonable way of having 4TB of mixed-use SSD space inside my main PC. To my surprise, it's been great as an editing SSD, and so far there's been no difference between it and my 970 Plus drives as far as using them for editing is concerned. Even though this is a QVO and not even an EVO, there's been no bottlenecking from the drive when editing, even with multiple high-bitrate HQX streams; the system processing, CPU/GPU, has been the first thing to throw in its hand. Of course, when doing file dumps there's no comparison, the NVMe drives are about 6 to 7 times faster, but file dumping is such a small part of my workflow compared to the time spent in Edius. Plus, this is all quite a bit more of an issue for me as I'm only on the limited PCIe bandwidth/lanes of a typical Intel Core desktop CPU setup.
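For anyone wanting to repeat that kind of drive comparison, a crude sequential-write probe can be scripted; a rough sketch only, since OS caching flatters the numbers and it is no substitute for a proper benchmark tool:

```python
import os
import tempfile
import time

def seq_write_mb_s(total_mb: int = 64, chunk_mb: int = 4) -> float:
    """Write `total_mb` MB of zeros to a temp file and return MB/s.
    fsync() is called so the figure reflects the device, not just RAM."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
        start = time.perf_counter()
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
        elapsed = time.perf_counter() - start
    os.unlink(path)  # clean up the scratch file
    return total_mb / elapsed

if __name__ == "__main__":
    print(f"sequential write: ~{seq_write_mb_s():.0f} MB/s")
```

Note that the temp file lands on whatever volume holds the system temp directory; point `tempfile.tempdir` at the drive under test before running.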

                            As soon as I catch up on my YouTube videos I'll do some of these SSD speed tests. Any that are specific to Edius I'll post here in the hardware forum; any that aren't Edius-specific but may be of interest because of the SSD and speed topic, I'll post in the lounge.

                            BTW, I've made a new post to see if other users are interested in jointly testing some stuff, so we can collectively work out answers to a bunch of the questions that keep getting asked: https://forum.grassvalley.com/forum/...and-intel-cpus . Basically, a self-help group :)

                            Cheers,
                            Dave.





                            "There's only one thing more powerful than knowledge. The free sharing of it"

                            If you don't know the difference between Azimuth and Asimov, then either your tapes sound bad and your Robot is very dangerous. Kill all humans...... Or your tape deck won't harm a human, and your Robot's tracking and stereo imaging are spot on.

                            Is your Robot three laws safe?



                            • #15
                              I shoot UHD 60p with the GH5 and GH5S; my wife shoots HD with the AX100. I use a 1920x1080 timeline so that I can zoom and pan etc. With my Threadripper, a single UHD track is just possible with preview at 1/2. CPU is shown at 100% with the GPU sometimes at 45% (though I have no idea what it is doing). Sad, since Resolve Studio will run two-track UHD multicam at full speed and full preview with the CPU at 16% and GPU at 70%. Both these cameras' H.264 150 Mbps files are on a normal Seagate 6TB hard drive, which then operates at 57%. This is with GPU decode on. With GPU decode off it is similar to EDIUS, though it will still run two-track multicam; CPU usage is lower at 65% and it manages about 45 fps. Without GPU decode, both EDIUS and Resolve need optimized media of some sort to run multicam with UHD files, so the free version of Resolve may not give much more than EDIUS. On an HD timeline with three tracks (two UHD and one HD), with GPU decode switched on, Resolve will run multicam at full speed with the CPU at 17%, GPU at 84% and the hard drive at 55% (all files on this Seagate HD). So you can see all the performance comes from the GPU, and why Resolve Studio can support multiple GPUs (though they need to be identical models to work properly). This is where I hope EDIUS is going too.
                              Ron Evans

                              Threadripper 1920 stock clock 3.7, Gigabyte Designare X399 MB, 32G G.Skill 3200CL14, 500G M.2 NVME OS, 500G EVO 850 temp. 1T EVO 850 render, 6T Source, 2 x 1T NVME, MSI 1080Ti 11G , EVGA 850 G2, LG BLuray Burner, BM IP4K, WIN10 Pro, Shuttle Pro2

                              ASUS PB328 monitor, BenQ BL2711U 4K preview monitor, EDIUS X, 9.5 WG, Vegas 18, Resolve Studio 17


                              Cameras: GH5S, GH5, FDR-AX100, FDR-AX53, DJI OSMO Pocket, Atomos Ninja V x 2
