So I don't know how to code very well, and I'm stuck on a problem with loading a model onto the GPU using TensorRT. It's an object detection model (YOLOv5), and I'm just trying to make it run faster. The script works fine when I run the model in PyTorch format on CUDA, but after exporting it from PyTorch to a TensorRT .engine file and loading that instead, I get the errors below.
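For context, I created the .engine file with YOLOv5's own export script, roughly like this (I'm going from memory, so my exact weights filename and flags may have differed; `best.pt` is a placeholder for my trained weights):

```
python export.py --weights best.pt --include engine --device 0
```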
Thanks in advance to anyone who has some tips. Here's the full output:
[04/05/2023-19:41:54] [TRT] [I] [MemUsageChange] Init CUDA: CPU +499, GPU +0, now: CPU 11536, GPU 1240 (MiB)
[04/05/2023-19:41:54] [TRT] [I] Loaded engine size: 5 MiB
[04/05/2023-19:41:55] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +1186, GPU +408, now: CPU 12783, GPU 1654 (MiB)
[04/05/2023-19:41:55] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +0, now: CPU 0, GPU 0 (MiB)
[04/05/2023-19:41:55] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 12776, GPU 1654 (MiB)
[04/05/2023-19:41:55] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +0, now: CPU 0, GPU 0 (MiB)
Adding AutoShape...
(1376, 576, 2064, 864)
Traceback (most recent call last):
  File "c:\Users\HALLÅÅ\Documents\CHEATS\scripts\main copy.py", line 150, in <module>
    df = model(screenshot, size=640).pandas().xyxy[0]
  File "C:\Users\HALLÅÅ\Documents\CHEATS\cheats-env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HALLÅÅ\Documents\CHEATS\cheats-env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\HALLÅÅ\Documents\CHEATS\yolov5\models\common.py", line 704, in forward
    y = self.model(x, augment=augment)  # forward
  File "C:\Users\HALLÅÅ\Documents\CHEATS\cheats-env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HALLÅÅ\Documents\CHEATS\yolov5\models\common.py", line 536, in forward
    assert im.shape == s, f"input size {im.shape} {'>' if self.dynamic else 'not equal to'} max model size {s}"
AssertionError: input size torch.Size([1, 3, 288, 640]) not equal to max model size (1, 3, 640, 640)
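And this is roughly the part of my script that fails, reconstructed from the traceback above (simplified: the screen capture shown here uses PIL's ImageGrab as a stand-in for whatever capture method my real script uses, and the region tuple is the one printed in the log):

```python
import torch
from PIL import ImageGrab

# Load the exported TensorRT engine through the YOLOv5 custom loader
# (same way I load the .pt version, just pointing at the .engine file)
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.engine")

# Grab a non-square region of the screen:
# (1376, 576, 2064, 864) is 688 px wide by 288 px tall
screenshot = ImageGrab.grab(bbox=(1376, 576, 2064, 864))

# Line 150 from the traceback: this call raises the AssertionError
df = model(screenshot, size=640).pandas().xyxy[0]
```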