About EDIUS's poor performance on 11th gen iGPUs and crash problems on 12th gen iGPUs.


  • #31
Originally posted by noafilm
It's better to keep the conversation about Edius in this thread, because before you know it, it will be moved to the "lounge" again. Since Edius can't make use of the M1 chip on a Mac, there's no use making any comparisons, and this is still an Edius/Windows forum.
The OP specifically mentioned Premiere, and also the M1, with reference to comparative data: "Premiere uses the OpenCL interface for UHD 750/770 hardware decoding; it has decoding performance similar to that of the Apple M1". All I was doing was supplying further information on other technology, just as the OP had done.

    However, I agree, let's not have this conversation moved to the lounge. Your efforts deserve to be seen by anyone visiting the forum and not hidden in the lounge.

BTW, is there any way your graph can be embedded/inserted differently into your main test post? Only logged-in forum users can see that info, and it would be really beneficial for anyone (not a forum user) who's casually browsing the posts.

    "There's only one thing more powerful than knowledge. The free sharing of it"


If you don't know the difference between Azimuth and Asimov, then either your tapes sound bad and your Robot is very dangerous. Kill all humans...... Or your tape deck won't harm a human, and your Robot's tracking and stereo imaging is spot on.

    Is your Robot three laws safe?



    • #32
Originally posted by noafilm

If you really want to know, and it's the last comparison test I'll make with other NLEs:

Edius X took 6 min for HQ and HQX; CPU usage was only 40%.
Resolve Studio 17 on the same PC took 4 min for HQ and HQX; CPU usage was 30%, GPU 70%.

That's great, thanks. Let's put all that to bed now; between my extra info and what you've just posted, there's more than enough comparative data for those interested. If anyone would like to take those comparisons further with other NLEs etc., I'd suggest starting a new post.

      "There's only one thing more powerful than knowledge. The free sharing of it"


      If you don't know the difference between Azimuth and Asimov, then either your tapes sound bad and your Robot is very dangerous. Kill all humans...... Or your tape deck won't harm a human, and your Robot's tracking and stereo imagining is spot on.

      Is your Robot three laws safe?

      Comment


      • #33
Originally posted by noafilm
I came across a blogger's website; the owner has a 12900K, runs both Edius 9 and X, and previously had a 9900K. They posted a quite extensive benchmark test for exporting a 1-hour file. The file used is available for download; it's 10 minutes long and was duplicated until there was a one-hour timeline. I just used the 10-minute file and recalculated how long an hour should take to export.

So, I decided to replicate the tests with my 5950X, using the exact same project and export settings, as it might give a better understanding of the current issues with 12900K processors and the UHD 770. Considering there is hardly any info on this forum, nor any specific CPU recommendations from GV, these user reports might help. I think the most important part of this test is that it shows what issues Edius has with the 12900K at this moment; even though GV currently recommends disabling Quick Sync for this processor, the test shows it is usable for H.264 exports.

Edius's own exporter was used with “superfine” as the quality setting, which adds quite a lot of export time compared to the “normal” setting; more info on the settings is in the attached file.

Attached are the test results from both systems. I hope I took the correct data out of the blog, as I had to translate from Chinese, and I found some small discrepancies between the test results as described for each separate export test and the results in the Excel sheet posted at the end of the post. Some things may also have been lost in translation, but I think it's accurate enough; if there is wrong data, feel free to point that out. The file used to test was also a bit odd, as it only had a 6 Mbps bitrate, but since the same file was used on both systems the results are comparable. Personally, I would rather have had a native 4K file straight from a camera.

Both systems used (12900K/5950X) are quite similar in performance; only the GPUs differed, a GTX 1050 vs a GTX 1060, so I'm not sure how much difference that will make. DDR4 memory was used on the 12900K; DDR5 should give it a performance jump.


Something interesting we both experienced was with H.265 export using the CPU only: they had to cancel the export, as it appeared to take ages on the 12900K. I had the same problem on the 5950X and tested a one-minute file, then calculated afterwards how long a 1-hour file would take to export. In my case that was 5.5 hours with the “superfine” quality setting; if I select the “normal” setting, it only takes 21 minutes. On the 12900K it was expected to take over 8 hours. Here the superfine quality setting together with CPU-only H.265 encoding doesn't seem to work like it should.
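For anyone who wants to repeat that shortcut, here is a minimal sketch of the extrapolation; the linear-scaling assumption (fair for a timeline built by duplicating one clip, since content complexity is uniform) and the function name are mine, not from the blog:

```python
# Sketch: extrapolate a full-timeline export time from a short test clip,
# assuming export time scales linearly with timeline length.

def extrapolate_export_time(clip_minutes: float,
                            export_minutes: float,
                            timeline_minutes: float = 60.0) -> float:
    """Estimated export time (in minutes) for the full timeline."""
    return export_minutes * (timeline_minutes / clip_minutes)

# Example matching the post: a 1-minute clip taking 5.5 minutes to export at
# "superfine" projects to 330 minutes (5.5 hours) for a 1-hour timeline.
print(extrapolate_export_time(1.0, 5.5))  # -> 330.0
```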

Another issue I noticed with the “superfine” vs “normal” setting: if I export H.264 or H.265 on the 5950X with the CPU only, CPU usage also drops by about 20% with superfine, which again affects export times.
When the CPU/GPU combination (NVIDIA export) was used on the 5950X, this was not the case.

This user does not seem to get lower performance using Quick Sync with the UHD 770 for H.264, as has been mentioned on this forum before; it's considerably faster than the 9900K, so it looks like Quick Sync performs as expected in this case.

NVIDIA exports on the 12900K are much slower for H.264 and H.265 compared to the 5950X; I'm not sure whether the GTX 1050 vs GTX 1060 makes such a big difference or whether Edius is the issue.

Quick Sync export for H.265 is also slow on the 12900K. This seems to be caused by two issues: very low CPU usage and wrong assignment of “E” and “P” cores, something that is also mentioned in the test blog (see link below).

H.264 exports with CPU only are twice as slow on Edius X compared to Edius 9 on the 12900K; the 5950X does not have this issue.


The site I was mentioning can be found here: https://edit-anything.com/edius/ediu...12gen-cpu.html
And the test file can be downloaded here for those who want to try: https://edit-anything.com/blog/tvmv6-i9-9900k.html

[Attachment: test.png]



About the decoder:
Hello, it's great that you were able to find other people's test reports. For decoding speed (DaVinci and Premiere have an error range of about 10-20%), taking UHD630 = 100%:
UHD750 = UHD770 = 190% (EDIUS can only reach 30-35%)
RTX3080 = GTX1650S = 160%
GTX1080 = GTX1060 = GTX1050 = 145%
Vega8 = RX560 = 65%
RX5700 = HD630 = 80%
RX6600 = 110%

I saw AMD's new product launch. Decoding performance on the 6000-series notebook chips was improved by about 70%; inferred result: RDNA 2 APU decode = GTX1050. Don't forget that NVIDIA's gaming GPUs have a limit of 2 simultaneous encoding sessions, which is very stingy.

I don't have much time to test encoding speed; I care more about playback performance than rendering speed.

About the encoder:
Encoding speed has always been NVIDIA > Intel > AMD, but the encoding speed of the RX6600 now exceeds that of the GTX1050; with AMD's new graphics cards, the encoder and decoder have both improved significantly.
        Last edited by [email protected]; 01-06-2022, 03:32 AM.
        CPU:AMD R9 5950X GPU:GTX1050 MEM: Micron 4G DDR4 2400x4
        motherboard:ASrock X570 matx
        SSD:intel p3600 sata HDD: HGST 8Tx5 raid0
        Power:Great Wall EPS2000BL 2000W
        OS WIN10 20H2



        • #34
          Thanks for these comparisons Noa! I am mainly looking at the CPU only numbers. The Intel 12900 chip definitely has made some gains on the 9900 chip. Using Edius 9, the Intel and AMD are very close. When moving to Edius X the AMD is consistent (as expected) but the Intel falls apart. What is going on with Edius X and newer Intel chips? I would guess GV is aware of this and will be working on it.
          Asus Prime X299-A - Intel i9 7900x all cores @4.3GHz 2 cores @4.5GHz - 32GB RAM - NVidia GTX1070 - Edius 9 WG - BM Intensity 4k - Boris RED - Vitascene 2 - Windows 10



          • #35
Originally posted by Bassman
            Thanks for these comparisons Noa! I am mainly looking at the CPU only numbers. The Intel 12900 chip definitely has made some gains on the 9900 chip. Using Edius 9, the Intel and AMD are very close. When moving to Edius X the AMD is consistent (as expected) but the Intel falls apart. What is going on with Edius X and newer Intel chips? I would guess GV is aware of this and will be working on it.
            Hi Tim.

You should also bear in mind the source material being used for the tests. Not only was it a very low bitrate 1080p 29.97 8-bit 4:2:0 AVC/H.264 file, it wasn't a camera file either.

Once you're using 10-bit 4K H.265 59.94fps camera files, maybe even with 4:2:2 chroma subsampling, there will be a lot of difference with the same test configurations. Edius X, especially on a 12th Gen Intel CPU, will suffer worse. In this instance I'm sure the 5950X will perform comparatively even better, and you'd also see 10th Gen Intel gaining on 12th Gen.

Although I've not tested it myself, I've a feeling that the earlier 9th and 10th Gen Intel CPUs, with their UHD 630 iGPUs, will have an advantage over 11th and 12th Gen, with their UHD 750 and UHD 770 iGPUs, in the above 10-bit example. If I'm not mistaken, GV's advice to switch off hardware decoding is only applicable to 12th Gen and not to the previous iterations of Intel CPUs using the UHD 630.
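If you want to check exactly what a clip will ask of the decoder, a quick sanity check (my own suggestion; it assumes ffprobe is installed, and the file name is a placeholder) is to read the codec, profile and pixel format: `yuv420p10le` means 10-bit 4:2:0, `yuv422p10le` means 10-bit 4:2:2.

```python
# Sketch: inspect a clip's codec/profile/pixel format with ffprobe, so you
# know which hardware decode path (if any) a given iGPU could use for it.
import subprocess

def probe_video(path: str) -> str:
    """Return codec name, profile and pixel format of the first video stream."""
    cmd = [
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "stream=codec_name,profile,pix_fmt",
        "-of", "default=noprint_wrappers=1",
        path,
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# A 10-bit 4:2:2 HEVC camera file prints something like:
#   codec_name=hevc
#   profile=Main 4:2:2 10
#   pix_fmt=yuv422p10le
print(probe_video("camera_clip.mp4"))  # placeholder file name
```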

And yes, GV are aware of these Intel issues. Check out their list of known issues and suggested workarounds for the latest release: https://forum.grassvalley.com/forum/...69-v10-30-8291

            Cheers,
            Dave.

            "There's only one thing more powerful than knowledge. The free sharing of it"


            If you don't know the difference between Azimuth and Asimov, then either your tapes sound bad and your Robot is very dangerous. Kill all humans...... Or your tape deck won't harm a human, and your Robot's tracking and stereo imagining is spot on.

            Is your Robot three laws safe?

            Comment


            • #36
Originally posted by Bassman
              What is going on with Edius X and newer Intel chips?
One thing that Chinese/Japanese user mentioned was that the “E” cores were sometimes used to encode instead of the “P” cores; I'm not sure if that is a Windows or an Edius issue, but it also has an impact on encoding times. This was one of the reasons not to go for the 12900K at this moment. I have seen too many problems in the past getting Quick Sync to work properly, and now it has come to the point where it's not working like it should. Also, the 12900K being so new, with these “E” and “P” cores, means NLEs need to be optimised to use them in the right way. I'm sure all of these 11th/12th gen CPU issues will get solved eventually, but I don't want to wait for that to happen, as I need something that works now.
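If the scheduler really is picking the wrong cores, one thing worth experimenting with is forcing the encode process onto the P-cores via CPU affinity. This is only a sketch, not a GV-endorsed fix; the process name and the core numbering are my assumptions, so verify both on your own machine first.

```python
# Sketch of a possible workaround: pin the EDIUS process to the P-cores so
# Windows cannot schedule the encode threads on E-cores.
# Assumption: on a 12900K, logical CPUs 0-15 are the eight hyperthreaded
# P-cores and 16-23 are the E-cores; check your own layout in Task Manager.
import psutil

P_CORES = list(range(16))  # assumed P-core logical CPU indices on a 12900K

for proc in psutil.process_iter(["name"]):
    name = proc.info["name"] or ""
    if name.lower().startswith("edius"):  # assumed process name; verify locally
        proc.cpu_affinity(P_CORES)        # restrict scheduling to P-cores only
        print(f"pinned {name} (PID {proc.pid}) to P-cores")
```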



              • #37
Thanks, and good point Noa. From the outside looking in, the whole “P” & “E” core scheme just looks like window dressing for a lack of performance to me. I am sure it is tough for software companies to keep up when chip makers come up with some crazy scheme. I do not think many users requested separate core designations, as we would all rather have more full-performance cores. That squares with the 5950X numbers being unchanged between Edius versions.


                • #38
It is interesting to look at my Threadripper when it is encoding HQX to MPEG in TMPGEnc. It seems to be at only 23% overall, but looking at the individual cores one can see that one core is at 100% while the others sit close to 23%. I think that with AMD's auto boosting the fastest core is driven the hardest and then becomes the limit for performance. The early parts, I know, had quite a performance difference between the cores, which shows up more in single-threaded software of course. That is also why some of the Intel CPUs and gaming parts work really well when overclocked; mine is not, but it does have the normal auto boost engaged. This may also be why the newer Zen 3 parts are a lot faster. I can see how the cores are driven with different software too. Other things can be done on the PC while one core is at 100%; that's not the case when the GPU is maxed out!
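To make the averaging effect visible, here is a minimal sketch (mine, not part of TMPGEnc) that prints per-core usage next to the overall figure; one saturated core on a 24-thread part barely moves the average, which is exactly the ~23% effect described above.

```python
# Sketch: contrast the averaged CPU figure with per-core usage while an
# encode is running, to spot a single saturated core.
import psutil

overall = psutil.cpu_percent(interval=1)                # average of all cores
per_core = psutil.cpu_percent(interval=1, percpu=True)  # one value per logical core

print(f"overall: {overall:.1f}%")
for i, pct in enumerate(per_core):
    marker = "  <-- saturated" if pct > 95 else ""
    print(f"core {i:2d}: {pct:5.1f}%{marker}")
```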
                  Ron Evans

                  Threadripper 1920 stock clock 3.7, Gigabyte Designare X399 MB, 32G G.Skill 3200CL14, 500G M.2 NVME OS, 500G EVO 850 temp. 1T EVO 850 render, 6T Source, 2 x 1T NVME, MSI 1080Ti 11G , EVGA 850 G2, LG BLuray Burner, BM IP4K, WIN10 Pro, Shuttle Pro2

                  ASUS PB328 monitor, BenQ BL2711U 4K preview monitor, EDIUS X, 9.5 WG, Vegas 18, Resolve Studio 18


                  Cameras: GH5S, GH6, FDR-AX100, FDR-AX53, DJI OSMO Pocket, Atomos Ninja V x 2



                  • #39
Originally posted by Ron Evans
It is interesting to look at my Threadripper when it is encoding HQX to MPEG in TMPGEnc. It seems to be at only 23% overall, but looking at the individual cores one can see that one core is at 100% while the others sit close to 23%. ... This may also be why the newer Zen 3 parts are a lot faster.
The 1920X belongs to Zen 1, and AMD's Zen 1 does have many problems; AVX2 performance is poor. Zen 3's performance is enough to cope with 10th-12th gen Intel CPUs. Intel is still squeezing the toothpaste (making only incremental improvements).


                    • #40
Yes, the 1920 is old now, but it still works fine, other than with EDIUS. I will update someday.
                      Ron Evans

