Cannot play any YouTube downloaded video in Edius X

  • Cannot play any YouTube downloaded video in Edius X

    This seems to be an issue unique to a multi-core Xeon environment (dual 18-core CPUs).

    Edius is not at all happy trying to play any YouTube MP4 clips on the timeline: it pauses, hangs so that I can do nothing, then comes back to present the next frame under the cursor, and of course it won't play in real time. If we go up to UHD it almost crashes Edius trying to play the clips (white windows or window borders).

    Dropping the playback resolution to 1/8 doesn't make any difference. It is almost as if the software is looking for something and not finding it.

    It was the same with my previous hardware, an HP Z800 with two 6-core Xeons, back in Edius 7 and then Edius 8.

    It seems PCs with i7s etc. can play these clips OK.

    Is there any improvement that can be sorted out for Xeon-based PCs (no Quick Sync) for this issue?

    Mike
    Sys4: Z10PE-D16WS MB 2xE5-2696 Xeon 64 active logical cores. EWG9. 64G RAM. Aorus GTX1080Ti. 55" Q7 1500 NIT HDR 4K TV/Storm 3G Elite/Decklink 4K 12G/8CH audio monitoring, Yamaha RXA-870 A/V. Sys1-3 EWG8 + RX-E1+HDBX1000 MIP in HP xw8600 2 x X5492 CPU 8 cores, 8Gig RAM, Quadro FX3800. All sys Fibre to central media pool - 5TB Axus Yotta RAID + QLogic Fibre Switch. Central VCR rack plus YUV & audio to viewing room with Yamaha AX1 7.1 100 watt per channel amp with 1000W sub 63" HD 3D Samsung TV

  • #2
    Again, limitations of Edius that haven’t been addressed in X.

    The only joy you’ll get with such things is to use QuickSync.

    If GV were to introduce their own software codecs for H.264, H.265, etc., as they used to do with certain other codecs, then Edius would be able to properly leverage the CPU for decoding such footage.

    For now, it's really a case of QuickSync or nothing. That said, other GPUs could do this if GV gave Edius that compatibility.

    What we are seeing is the direct result of GV relying on proprietary technology for core Edius functions, i.e. using someone else's hardware codec.

    Edius really needs its own software codecs to become more autonomous, as it was in the early days. This reliance on proprietary technology basically puts Edius users over a barrel: not all Edius users can use all the codecs Edius supports unless they all run the same limited hardware.
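    As a rough illustration of the difference outside of Edius (ffmpeg here is just a stand-in, not Edius' internal pipeline, and "clip.mp4" is a placeholder for one of the problem files), you can time a pure-software CPU decode against a Quick Sync decode of the same clip:

```python
# Time a pure-software (CPU) decode vs. a Quick Sync decode with ffmpeg.
# Illustrative sketch only: ffmpeg's decoders are not Edius' internals,
# and CLIP is a placeholder for your own test file.
import subprocess
import time

CLIP = "clip.mp4"  # hypothetical test file

def time_decode(extra_args):
    """Decode CLIP to a null sink with ffmpeg and return elapsed seconds."""
    start = time.perf_counter()
    subprocess.run(
        ["ffmpeg", "-v", "error", *extra_args, "-i", CLIP, "-f", "null", "-"],
        check=True,
    )
    return time.perf_counter() - start

# Pure software decode: runs entirely on the CPU cores.
cpu_seconds = time_decode([])

# Quick Sync decode: needs an Intel CPU with an active iGPU, so this is
# expected to fail on the Xeon systems discussed in this thread.
qsv_seconds = time_decode(["-hwaccel", "qsv"])

print(f"software: {cpu_seconds:.1f}s  quicksync: {qsv_seconds:.1f}s")
```

    On a Xeon with no iGPU the second call simply errors out, which is exactly the "QuickSync or nothing" situation described above.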
    "There's only one thing more powerful than knowledge. The free sharing of it"

    If you don't know the difference between Azimuth and Asimov, then either your tapes sound bad and your Robot is very dangerous. Kill all humans...... Or your tape deck won't harm a human, and your Robot's tracking and stereo imaging is spot on.

    Is your Robot three laws safe?



    • #3
      Quite so. I would love for that to get fixed.

      Mike
      Sys4: Z10PE-D16WS MB 2xE5-2696 Xeon 64 active logical cores. EWG9. 64G RAM. Aorus GTX1080Ti. 55" Q7 1500 NIT HDR 4K TV/Storm 3G Elite/Decklink 4K 12G/8CH audio monitoring, Yamaha RXA-870 A/V. Sys1-3 EWG8 + RX-E1+HDBX1000 MIP in HP xw8600 2 x X5492 CPU 8 cores, 8Gig RAM, Quadro FX3800. All sys Fibre to central media pool - 5TB Axus Yotta RAID + QLogic Fibre Switch. Central VCR rack plus YUV & audio to viewing room with Yamaha AX1 7.1 100 watt per channel amp with 1000W sub 63" HD 3D Samsung TV



      • #4
        I think it is a matter of EDIUS using the available resources (CPU cores and GPU hardware). The codecs are fixed by the industry; having your own doesn't work if no one else uses it. HQX can be used by most of the competing software, but with the CPU and GPU resources available now there should be no need to transcode to an intermediate unless one is doing a large multicam-type project.

        It is just that at the moment QS is the only hardware resource used by EDIUS to accelerate timeline playback. The new architecture of EDIUS X promises the opportunity to use other hardware resources; I just hope the wait isn't so long that people move to other NLEs that already do this now.
        Ron Evans

        Threadripper 1920 stock clock 3.7, Gigabyte Designare X399 MB, 32G G.Skill 3200CL14, 500G M.2 NVME OS, 500G EVO 850 temp. 1T EVO 850 render, 6T Source, 2 x 1T NVME, MSI 1080Ti 11G, EVGA 850 G2, LG Blu-ray Burner, BM IP4K, WIN10 Pro, Shuttle Pro2

        ASUS PB328 monitor, BenQ BL2711U 4K preview monitor, EDIUS X, 9.5 WG, Vegas 18, Resolve Studio 17


        Cameras: GH5S, GH5, FDR-AX100, FDR-AX53, DJI OSMO Pocket, Atomos Ninja V x 2



        • #5
          Originally posted by Ron Evans:
          The codecs are fixed by the industry; having your own doesn't work if no one else uses it.
          Hi Ron. The term codec applies not just to the standard but also to the particular filter that encodes/decodes. In this respect, if Edius had its own software codec/filter, say for H.265, then it would be able to use that on the CPU and not have to rely on QS. For instance, if someone had a powerful CPU such as yours, and Edius were using a software codec, it would be able to effectively decode/edit H.265 on the CPU alone.

          In the past Canopus/GV had their own codecs within Edius; this made life easier at the time, as Edius' future was always meant to be CPU-bound. If I'm not mistaken, Edius had its own MPEG-2 filter/codec, and this is why back in the day it was so good with certain source material and captures.

          This is why we are having so much trouble now with Edius X: it's basically built for CPU usage, and anything like QS or NVENC is just a bolt-on that the original architecture wasn't designed for. The lack of specific GV/Edius built-in codecs is what's causing so many issues, and why there's such a massive disparity between AMD CPUs or Intel CPUs without QS and an Intel CPU with QS.
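          To make the standard-vs-implementation point concrete (using ffmpeg purely as an example host application, nothing to do with what Edius does internally), here is one standard, H.265, with several interchangeable decoder implementations:

```python
# List the decoder implementations ffmpeg ships for the single HEVC/H.265
# standard -- one pure-software decoder plus assorted hardware-backed ones.
# Illustrative of the standard-vs-implementation distinction only.
import subprocess

result = subprocess.run(
    ["ffmpeg", "-hide_banner", "-decoders"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.splitlines():
    if "hevc" in line:
        print(line.strip())

# Typical output -- one standard, several implementations:
#   V....D hevc        HEVC (High Efficiency Video Coding)  <- software, CPU
#   V....D hevc_qsv    HEVC (Intel Quick Sync Video)        <- Intel hardware
#   V....D hevc_cuvid  Nvidia CUVID HEVC decoder            <- NVDEC hardware
```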
          "There's only one thing more powerful than knowledge. The free sharing of it"

          If you don't know the difference between Azimuth and Asimov, then either your tapes sound bad and your Robot is very dangerous. Kill all humans...... Or your tape deck won't harm a human, and your Robot's tracking and stereo imaging is spot on.

          Is your Robot three laws safe?



          • #6
            To me the codec is fixed by a standard. How the file is created to meet the standard (encoded) or read (decoded) is up to the software that is using it. This is where all the NLEs may well differ, unless they use, for instance, dedicated embedded routines in hardware (QS, NVENC, etc.).

            It is a bit like doing a jigsaw puzzle. When finished, all the participants have to have the same puzzle to meet the standard's requirements. However, how they make the puzzle, and how long they take, will differ; the same goes for taking the puzzle apart. One could place one piece of the puzzle at a time until the puzzle is complete (single-threaded!!) or get several sections done at the same time and then join them together (multithreaded!!). I know that is simplistic, but I am sure you see the point. You can draw your own conclusions as to why EDIUS likes fast clocks with QS and doesn't seem to take much advantage of high core counts.
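            A toy sketch of the jigsaw point, nothing to do with real codec internals, just one worker placing pieces versus several sections being done at once and then joined:

```python
# Toy version of the jigsaw analogy: the same "puzzle" (workload) finished
# piece-by-piece by one worker vs. in sections by several workers.
# Purely illustrative; real codec threading is far more involved.
from concurrent.futures import ThreadPoolExecutor
import time

def place_piece(piece):
    """Stand-in for the work of decoding one 'piece' of the puzzle."""
    time.sleep(0.001)
    return piece

pieces = range(1000)

# One worker, one piece at a time (single-threaded).
start = time.perf_counter()
for p in pieces:
    place_piece(p)
print(f"one worker:    {time.perf_counter() - start:.2f}s")

# Several sections worked on at once, then joined (multithreaded).
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(place_piece, pieces))
print(f"eight workers: {time.perf_counter() - start:.2f}s")
```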
            Ron Evans

            Threadripper 1920 stock clock 3.7, Gigabyte Designare X399 MB, 32G G.Skill 3200CL14, 500G M.2 NVME OS, 500G EVO 850 temp. 1T EVO 850 render, 6T Source, 2 x 1T NVME, MSI 1080Ti 11G, EVGA 850 G2, LG Blu-ray Burner, BM IP4K, WIN10 Pro, Shuttle Pro2

            ASUS PB328 monitor, BenQ BL2711U 4K preview monitor, EDIUS X, 9.5 WG, Vegas 18, Resolve Studio 17


            Cameras: GH5S, GH5, FDR-AX100, FDR-AX53, DJI OSMO Pocket, Atomos Ninja V x 2



            • #7
              Originally posted by Ron Evans:
              To me the codec is fixed by a standard. How the file is created to meet the standard (encoded) or read (decoded) is up to the software that is using it. This is where all the NLEs may well differ, unless they use, for instance, dedicated embedded routines in hardware (QS, NVENC, etc.). It is a bit like doing a jigsaw puzzle. When finished, all the participants have to have the same puzzle to meet the standard's requirements. However, how they make the puzzle, and how long they take, will differ; the same goes for taking the puzzle apart. One could place one piece of the puzzle at a time until the puzzle is complete (single-threaded!!) or get several sections done at the same time and then join them together (multithreaded!!). I know that is simplistic, but I am sure you see the point. You can draw your own conclusions as to why EDIUS likes fast clocks with QS and doesn't seem to take much advantage of high core counts.
              The fact that there are multiple ways to decode or encode a specific codec isn't fundamentally about clocks and cores. Clocks and cores are something completely different and shouldn't be confused with the ability to do the fundamental job of decoding and encoding.

              BTW, fast clocks don't have anything to do with QS. If it's present and being used, then within the same CPU generation the speed of QS is the same regardless of core count and clock speed. This is why higher core counts make no difference with QS. The same is true for NVENC: it'll be the same from a GTX 1660 right up to an RTX 2080 Ti, and it is also unaffected by the core count and speed of the host CPU. As for the general lack of high-core-count utilisation within Edius for other tasks, I haven't a clue why that is.

              If you take a look at other software, there are options to choose which implementation decodes a particular codec, be that hardware-assisted or done entirely in software.
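              For instance (again with ffmpeg standing in for "other software", and "clip.mp4" as a placeholder for an H.265 test file), the very same clip can be forced through the pure-software decoder or the Quick Sync one:

```python
# Explicitly select which implementation decodes the same H.265 clip:
# pure software on the CPU, or the Quick Sync hardware decoder.
# ffmpeg is a stand-in for "other software"; CLIP is a placeholder.
import subprocess

CLIP = "clip.mp4"

def decode_with(decoder):
    """Force a specific decoder implementation for the same clip."""
    subprocess.run(
        ["ffmpeg", "-v", "error", "-c:v", decoder, "-i", CLIP,
         "-f", "null", "-"],
        check=True,
    )

decode_with("hevc")      # pure-software HEVC decoder, CPU only
decode_with("hevc_qsv")  # Intel Quick Sync hardware decoder (iGPU required)
```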

              Like I was saying, the term codec isn't just a reference to the standard but also to what actually does the job.
              "There's only one thing more powerful than knowledge. The free sharing of it"

              If you don't know the difference between Azimuth and Asimov, then either your tapes sound bad and your Robot is very dangerous. Kill all humans...... Or your tape deck won't harm a human, and your Robot's tracking and stereo imaging is spot on.

              Is your Robot three laws safe?



              • #8
                Sorry Dave, I disagree: a codec is a codec, defined by a standard. How it is implemented is something for each company to decide. I did not suggest clocks and cores in the way you took it; read again. If you do not have QS or NVENC (which are fixed by hardware, I agree) then the decode or encode will be governed by the software architecture and the speed of the CPU (or GPU, if it uses that). If the software is single-threaded it will want the fastest single-core speed it can get. In my simple example the CPU is placing one piece of the puzzle at a time, so the faster it can do that, the quicker it will finish.

                We know EDIUS likes fast clocks. I do not think this is up for discussion, and it is why Intel is the current preferred CPU regardless of QS: traditionally Intel has had faster clocks and has been easier to overclock too. It is why Intel has been the leader for gaming PCs.

                As to NVENC, each series of NVIDIA cards has some changes to the implementation. They are not all the same, and performance is also governed by clock speed and memory allocation even within a series. The CUDA cores can also be used by the software independently of the NVENC implementation.
                Ron Evans

                Threadripper 1920 stock clock 3.7, Gigabyte Designare X399 MB, 32G G.Skill 3200CL14, 500G M.2 NVME OS, 500G EVO 850 temp. 1T EVO 850 render, 6T Source, 2 x 1T NVME, MSI 1080Ti 11G, EVGA 850 G2, LG Blu-ray Burner, BM IP4K, WIN10 Pro, Shuttle Pro2

                ASUS PB328 monitor, BenQ BL2711U 4K preview monitor, EDIUS X, 9.5 WG, Vegas 18, Resolve Studio 17


                Cameras: GH5S, GH5, FDR-AX100, FDR-AX53, DJI OSMO Pocket, Atomos Ninja V x 2



                • #9
                  Originally posted by Ron Evans:
                  Sorry Dave, I disagree: a codec is a codec, defined by a standard. How it is implemented is something for each company to decide. I did not suggest clocks and cores in the way you took it; read again. If you do not have QS or NVENC (which are fixed by hardware, I agree) then the decode or encode will be governed by the software architecture and the speed of the CPU (or GPU, if it uses that). If the software is single-threaded it will want the fastest single-core speed it can get. In my simple example the CPU is placing one piece of the puzzle at a time, so the faster it can do that, the quicker it will finish. We know EDIUS likes fast clocks. I do not think this is up for discussion, and it is why Intel is the current preferred CPU regardless of QS: traditionally Intel has had faster clocks and has been easier to overclock too. It is why Intel has been the leader for gaming PCs. As to NVENC, each series of NVIDIA cards has some changes to the implementation. They are not all the same, and performance is also governed by clock speed and memory allocation even within a series. The CUDA cores can also be used by the software independently of the NVENC implementation.
                  Ron, there's nothing to disagree with; everything I've said about codecs is exactly how it is.

                  Your understanding of NVENC is also wrong. NVENC doesn't always change with a series of cards: from the GTX 1660 through to the RTX 2080 Ti, they all use the same 6th-gen Turing encoder. Also, NVENC performance within the same generation of encoder, Turing for example, doesn't change regardless of the card's clock or memory.

                  When you say the CUDA cores "can also" be used independently of the NVENC implementation: there's no "can also" about it. They are two different things and are always independent.
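                  To make that concrete (ffmpeg as the example application; the file names are placeholders), an NVENC encode runs on the dedicated fixed-function block, which is why its speed is flat across cards sharing the same NVENC generation:

```python
# Hardware H.264 encode on the dedicated NVENC block via ffmpeg.
# The fixed-function encoder does the work, not the CUDA cores, so the
# CUDA cores remain free for other tasks. Illustrative sketch only;
# "in.mp4" / "out.mp4" are placeholders.
import subprocess

subprocess.run(
    ["ffmpeg", "-v", "error", "-i", "in.mp4",
     "-c:v", "h264_nvenc", "out.mp4"],
    check=True,
)
```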

                  Anyway, Ron, I was just correcting something you'd said that was wrong and that conflicted with something I'd said about codecs which was right. I'm just trying to avoid misinformation. If you want to disagree with industry fact, that's up to you; just expect to be corrected when you're wrong.
                  "There's only one thing more powerful than knowledge. The free sharing of it"

                  If you don't know the difference between Azimuth and Asimov, then either your tapes sound bad and your Robot is very dangerous. Kill all humans...... Or your tape deck won't harm a human, and your Robot's tracking and stereo imaging is spot on.

                  Is your Robot three laws safe?



                  • #10
                    Please take this general discussion to the Lounge. The OP asked if there was anything specific he could do with his setup. Please restrict your posts to that question.
                    1: 3970X Threadripper, Asus ROG Strix TR40 E Gaming, G. Skill Trident Z Neo 128G DDR4 3600, EVGA GeForce RTX 2080Ti, Samsung 970 EVO M.2 1T, Intel 660P M.2 2T (2), Seagate Ironwolf NAS 12T, Enermax TR4 360 AIO, Lian Li 011 DXL, AJA Kona 4, Asus ROG Thor 1200

                    2: i7 6950X OC to 4.5GHz, ASUS RAMPAGE V EDITION 10, Corsair Dominator Platinum 64G DDR4 2800, SAMSUNG 950 PRO M.2 512G, GeForce GTX 1080ti SC Black, Corsair AX1200i, Phanteks Luxe, 16T RAID HGST Deskstar NAS 4T, Corsair H115i AIO, BM IP 4K
