aliensmn committed on
Commit afaf90f · verified · 1 Parent(s): 5fb70f6

Mirror from https://github.com/yuvraj108c/ComfyUI-Upscaler-Tensorrt

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ assets/node_v3.png filter=lfs diff=lfs merge=lfs -text
.github/FUNDING.yml ADDED
@@ -0,0 +1,2 @@
+ github: yuvraj108c
+ custom: ["https://paypal.me/yuvraj108c", "https://buymeacoffee.com/yuvraj108cz"]
.gitignore ADDED
@@ -0,0 +1 @@
+ __pycache__
LICENSE ADDED
@@ -0,0 +1,437 @@
1
+ Attribution-NonCommercial-ShareAlike 4.0 International
2
+
3
+ =======================================================================
4
+
5
+ Creative Commons Corporation ("Creative Commons") is not a law firm and
6
+ does not provide legal services or legal advice. Distribution of
7
+ Creative Commons public licenses does not create a lawyer-client or
8
+ other relationship. Creative Commons makes its licenses and related
9
+ information available on an "as-is" basis. Creative Commons gives no
10
+ warranties regarding its licenses, any material licensed under their
11
+ terms and conditions, or any related information. Creative Commons
12
+ disclaims all liability for damages resulting from their use to the
13
+ fullest extent possible.
14
+
15
+ Using Creative Commons Public Licenses
16
+
17
+ Creative Commons public licenses provide a standard set of terms and
18
+ conditions that creators and other rights holders may use to share
19
+ original works of authorship and other material subject to copyright
20
+ and certain other rights specified in the public license below. The
21
+ following considerations are for informational purposes only, are not
22
+ exhaustive, and do not form part of our licenses.
23
+
24
+ Considerations for licensors: Our public licenses are
25
+ intended for use by those authorized to give the public
26
+ permission to use material in ways otherwise restricted by
27
+ copyright and certain other rights. Our licenses are
28
+ irrevocable. Licensors should read and understand the terms
29
+ and conditions of the license they choose before applying it.
30
+ Licensors should also secure all rights necessary before
31
+ applying our licenses so that the public can reuse the
32
+ material as expected. Licensors should clearly mark any
33
+ material not subject to the license. This includes other CC-
34
+ licensed material, or material used under an exception or
35
+ limitation to copyright. More considerations for licensors:
36
+ wiki.creativecommons.org/Considerations_for_licensors
37
+
38
+ Considerations for the public: By using one of our public
39
+ licenses, a licensor grants the public permission to use the
40
+ licensed material under specified terms and conditions. If
41
+ the licensor's permission is not necessary for any reason--for
42
+ example, because of any applicable exception or limitation to
43
+ copyright--then that use is not regulated by the license. Our
44
+ licenses grant only permissions under copyright and certain
45
+ other rights that a licensor has authority to grant. Use of
46
+ the licensed material may still be restricted for other
47
+ reasons, including because others have copyright or other
48
+ rights in the material. A licensor may make special requests,
49
+ such as asking that all changes be marked or described.
50
+ Although not required by our licenses, you are encouraged to
51
+ respect those requests where reasonable. More considerations
52
+ for the public:
53
+ wiki.creativecommons.org/Considerations_for_licensees
54
+
55
+ =======================================================================
56
+
57
+ Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
58
+ Public License
59
+
60
+ By exercising the Licensed Rights (defined below), You accept and agree
61
+ to be bound by the terms and conditions of this Creative Commons
62
+ Attribution-NonCommercial-ShareAlike 4.0 International Public License
63
+ ("Public License"). To the extent this Public License may be
64
+ interpreted as a contract, You are granted the Licensed Rights in
65
+ consideration of Your acceptance of these terms and conditions, and the
66
+ Licensor grants You such rights in consideration of benefits the
67
+ Licensor receives from making the Licensed Material available under
68
+ these terms and conditions.
69
+
70
+
71
+ Section 1 -- Definitions.
72
+
73
+ a. Adapted Material means material subject to Copyright and Similar
74
+ Rights that is derived from or based upon the Licensed Material
75
+ and in which the Licensed Material is translated, altered,
76
+ arranged, transformed, or otherwise modified in a manner requiring
77
+ permission under the Copyright and Similar Rights held by the
78
+ Licensor. For purposes of this Public License, where the Licensed
79
+ Material is a musical work, performance, or sound recording,
80
+ Adapted Material is always produced where the Licensed Material is
81
+ synched in timed relation with a moving image.
82
+
83
+ b. Adapter's License means the license You apply to Your Copyright
84
+ and Similar Rights in Your contributions to Adapted Material in
85
+ accordance with the terms and conditions of this Public License.
86
+
87
+ c. BY-NC-SA Compatible License means a license listed at
88
+ creativecommons.org/compatiblelicenses, approved by Creative
89
+ Commons as essentially the equivalent of this Public License.
90
+
91
+ d. Copyright and Similar Rights means copyright and/or similar rights
92
+ closely related to copyright including, without limitation,
93
+ performance, broadcast, sound recording, and Sui Generis Database
94
+ Rights, without regard to how the rights are labeled or
95
+ categorized. For purposes of this Public License, the rights
96
+ specified in Section 2(b)(1)-(2) are not Copyright and Similar
97
+ Rights.
98
+
99
+ e. Effective Technological Measures means those measures that, in the
100
+ absence of proper authority, may not be circumvented under laws
101
+ fulfilling obligations under Article 11 of the WIPO Copyright
102
+ Treaty adopted on December 20, 1996, and/or similar international
103
+ agreements.
104
+
105
+ f. Exceptions and Limitations means fair use, fair dealing, and/or
106
+ any other exception or limitation to Copyright and Similar Rights
107
+ that applies to Your use of the Licensed Material.
108
+
109
+ g. License Elements means the license attributes listed in the name
110
+ of a Creative Commons Public License. The License Elements of this
111
+ Public License are Attribution, NonCommercial, and ShareAlike.
112
+
113
+ h. Licensed Material means the artistic or literary work, database,
114
+ or other material to which the Licensor applied this Public
115
+ License.
116
+
117
+ i. Licensed Rights means the rights granted to You subject to the
118
+ terms and conditions of this Public License, which are limited to
119
+ all Copyright and Similar Rights that apply to Your use of the
120
+ Licensed Material and that the Licensor has authority to license.
121
+
122
+ j. Licensor means the individual(s) or entity(ies) granting rights
123
+ under this Public License.
124
+
125
+ k. NonCommercial means not primarily intended for or directed towards
126
+ commercial advantage or monetary compensation. For purposes of
127
+ this Public License, the exchange of the Licensed Material for
128
+ other material subject to Copyright and Similar Rights by digital
129
+ file-sharing or similar means is NonCommercial provided there is
130
+ no payment of monetary compensation in connection with the
131
+ exchange.
132
+
133
+ l. Share means to provide material to the public by any means or
134
+ process that requires permission under the Licensed Rights, such
135
+ as reproduction, public display, public performance, distribution,
136
+ dissemination, communication, or importation, and to make material
137
+ available to the public including in ways that members of the
138
+ public may access the material from a place and at a time
139
+ individually chosen by them.
140
+
141
+ m. Sui Generis Database Rights means rights other than copyright
142
+ resulting from Directive 96/9/EC of the European Parliament and of
143
+ the Council of 11 March 1996 on the legal protection of databases,
144
+ as amended and/or succeeded, as well as other essentially
145
+ equivalent rights anywhere in the world.
146
+
147
+ n. You means the individual or entity exercising the Licensed Rights
148
+ under this Public License. Your has a corresponding meaning.
149
+
150
+
151
+ Section 2 -- Scope.
152
+
153
+ a. License grant.
154
+
155
+ 1. Subject to the terms and conditions of this Public License,
156
+ the Licensor hereby grants You a worldwide, royalty-free,
157
+ non-sublicensable, non-exclusive, irrevocable license to
158
+ exercise the Licensed Rights in the Licensed Material to:
159
+
160
+ a. reproduce and Share the Licensed Material, in whole or
161
+ in part, for NonCommercial purposes only; and
162
+
163
+ b. produce, reproduce, and Share Adapted Material for
164
+ NonCommercial purposes only.
165
+
166
+ 2. Exceptions and Limitations. For the avoidance of doubt, where
167
+ Exceptions and Limitations apply to Your use, this Public
168
+ License does not apply, and You do not need to comply with
169
+ its terms and conditions.
170
+
171
+ 3. Term. The term of this Public License is specified in Section
172
+ 6(a).
173
+
174
+ 4. Media and formats; technical modifications allowed. The
175
+ Licensor authorizes You to exercise the Licensed Rights in
176
+ all media and formats whether now known or hereafter created,
177
+ and to make technical modifications necessary to do so. The
178
+ Licensor waives and/or agrees not to assert any right or
179
+ authority to forbid You from making technical modifications
180
+ necessary to exercise the Licensed Rights, including
181
+ technical modifications necessary to circumvent Effective
182
+ Technological Measures. For purposes of this Public License,
183
+ simply making modifications authorized by this Section 2(a)
184
+ (4) never produces Adapted Material.
185
+
186
+ 5. Downstream recipients.
187
+
188
+ a. Offer from the Licensor -- Licensed Material. Every
189
+ recipient of the Licensed Material automatically
190
+ receives an offer from the Licensor to exercise the
191
+ Licensed Rights under the terms and conditions of this
192
+ Public License.
193
+
194
+ b. Additional offer from the Licensor -- Adapted Material.
195
+ Every recipient of Adapted Material from You
196
+ automatically receives an offer from the Licensor to
197
+ exercise the Licensed Rights in the Adapted Material
198
+ under the conditions of the Adapter's License You apply.
199
+
200
+ c. No downstream restrictions. You may not offer or impose
201
+ any additional or different terms or conditions on, or
202
+ apply any Effective Technological Measures to, the
203
+ Licensed Material if doing so restricts exercise of the
204
+ Licensed Rights by any recipient of the Licensed
205
+ Material.
206
+
207
+ 6. No endorsement. Nothing in this Public License constitutes or
208
+ may be construed as permission to assert or imply that You
209
+ are, or that Your use of the Licensed Material is, connected
210
+ with, or sponsored, endorsed, or granted official status by,
211
+ the Licensor or others designated to receive attribution as
212
+ provided in Section 3(a)(1)(A)(i).
213
+
214
+ b. Other rights.
215
+
216
+ 1. Moral rights, such as the right of integrity, are not
217
+ licensed under this Public License, nor are publicity,
218
+ privacy, and/or other similar personality rights; however, to
219
+ the extent possible, the Licensor waives and/or agrees not to
220
+ assert any such rights held by the Licensor to the limited
221
+ extent necessary to allow You to exercise the Licensed
222
+ Rights, but not otherwise.
223
+
224
+ 2. Patent and trademark rights are not licensed under this
225
+ Public License.
226
+
227
+ 3. To the extent possible, the Licensor waives any right to
228
+ collect royalties from You for the exercise of the Licensed
229
+ Rights, whether directly or through a collecting society
230
+ under any voluntary or waivable statutory or compulsory
231
+ licensing scheme. In all other cases the Licensor expressly
232
+ reserves any right to collect such royalties, including when
233
+ the Licensed Material is used other than for NonCommercial
234
+ purposes.
235
+
236
+
237
+ Section 3 -- License Conditions.
238
+
239
+ Your exercise of the Licensed Rights is expressly made subject to the
240
+ following conditions.
241
+
242
+ a. Attribution.
243
+
244
+ 1. If You Share the Licensed Material (including in modified
245
+ form), You must:
246
+
247
+ a. retain the following if it is supplied by the Licensor
248
+ with the Licensed Material:
249
+
250
+ i. identification of the creator(s) of the Licensed
251
+ Material and any others designated to receive
252
+ attribution, in any reasonable manner requested by
253
+ the Licensor (including by pseudonym if
254
+ designated);
255
+
256
+ ii. a copyright notice;
257
+
258
+ iii. a notice that refers to this Public License;
259
+
260
+ iv. a notice that refers to the disclaimer of
261
+ warranties;
262
+
263
+ v. a URI or hyperlink to the Licensed Material to the
264
+ extent reasonably practicable;
265
+
266
+ b. indicate if You modified the Licensed Material and
267
+ retain an indication of any previous modifications; and
268
+
269
+ c. indicate the Licensed Material is licensed under this
270
+ Public License, and include the text of, or the URI or
271
+ hyperlink to, this Public License.
272
+
273
+ 2. You may satisfy the conditions in Section 3(a)(1) in any
274
+ reasonable manner based on the medium, means, and context in
275
+ which You Share the Licensed Material. For example, it may be
276
+ reasonable to satisfy the conditions by providing a URI or
277
+ hyperlink to a resource that includes the required
278
+ information.
279
+ 3. If requested by the Licensor, You must remove any of the
280
+ information required by Section 3(a)(1)(A) to the extent
281
+ reasonably practicable.
282
+
283
+ b. ShareAlike.
284
+
285
+ In addition to the conditions in Section 3(a), if You Share
286
+ Adapted Material You produce, the following conditions also apply.
287
+
288
+ 1. The Adapter's License You apply must be a Creative Commons
289
+ license with the same License Elements, this version or
290
+ later, or a BY-NC-SA Compatible License.
291
+
292
+ 2. You must include the text of, or the URI or hyperlink to, the
293
+ Adapter's License You apply. You may satisfy this condition
294
+ in any reasonable manner based on the medium, means, and
295
+ context in which You Share Adapted Material.
296
+
297
+ 3. You may not offer or impose any additional or different terms
298
+ or conditions on, or apply any Effective Technological
299
+ Measures to, Adapted Material that restrict exercise of the
300
+ rights granted under the Adapter's License You apply.
301
+
302
+
303
+ Section 4 -- Sui Generis Database Rights.
304
+
305
+ Where the Licensed Rights include Sui Generis Database Rights that
306
+ apply to Your use of the Licensed Material:
307
+
308
+ a. for the avoidance of doubt, Section 2(a)(1) grants You the right
309
+ to extract, reuse, reproduce, and Share all or a substantial
310
+ portion of the contents of the database for NonCommercial purposes
311
+ only;
312
+
313
+ b. if You include all or a substantial portion of the database
314
+ contents in a database in which You have Sui Generis Database
315
+ Rights, then the database in which You have Sui Generis Database
316
+ Rights (but not its individual contents) is Adapted Material,
317
+ including for purposes of Section 3(b); and
318
+
319
+ c. You must comply with the conditions in Section 3(a) if You Share
320
+ all or a substantial portion of the contents of the database.
321
+
322
+ For the avoidance of doubt, this Section 4 supplements and does not
323
+ replace Your obligations under this Public License where the Licensed
324
+ Rights include other Copyright and Similar Rights.
325
+
326
+
327
+ Section 5 -- Disclaimer of Warranties and Limitation of Liability.
328
+
329
+ a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
330
+ EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
331
+ AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
332
+ ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
333
+ IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
334
+ WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
335
+ PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
336
+ ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
337
+ KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
338
+ ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
339
+
340
+ b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
341
+ TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
342
+ NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
343
+ INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
344
+ COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
345
+ USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
346
+ ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
347
+ DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
348
+ IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
349
+
350
+ c. The disclaimer of warranties and limitation of liability provided
351
+ above shall be interpreted in a manner that, to the extent
352
+ possible, most closely approximates an absolute disclaimer and
353
+ waiver of all liability.
354
+
355
+
356
+ Section 6 -- Term and Termination.
357
+
358
+ a. This Public License applies for the term of the Copyright and
359
+ Similar Rights licensed here. However, if You fail to comply with
360
+ this Public License, then Your rights under this Public License
361
+ terminate automatically.
362
+
363
+ b. Where Your right to use the Licensed Material has terminated under
364
+ Section 6(a), it reinstates:
365
+
366
+ 1. automatically as of the date the violation is cured, provided
367
+ it is cured within 30 days of Your discovery of the
368
+ violation; or
369
+
370
+ 2. upon express reinstatement by the Licensor.
371
+
372
+ For the avoidance of doubt, this Section 6(b) does not affect any
373
+ right the Licensor may have to seek remedies for Your violations
374
+ of this Public License.
375
+
376
+ c. For the avoidance of doubt, the Licensor may also offer the
377
+ Licensed Material under separate terms or conditions or stop
378
+ distributing the Licensed Material at any time; however, doing so
379
+ will not terminate this Public License.
380
+
381
+ d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
382
+ License.
383
+
384
+
385
+ Section 7 -- Other Terms and Conditions.
386
+
387
+ a. The Licensor shall not be bound by any additional or different
388
+ terms or conditions communicated by You unless expressly agreed.
389
+
390
+ b. Any arrangements, understandings, or agreements regarding the
391
+ Licensed Material not stated herein are separate from and
392
+ independent of the terms and conditions of this Public License.
393
+
394
+
395
+ Section 8 -- Interpretation.
396
+
397
+ a. For the avoidance of doubt, this Public License does not, and
398
+ shall not be interpreted to, reduce, limit, restrict, or impose
399
+ conditions on any use of the Licensed Material that could lawfully
400
+ be made without permission under this Public License.
401
+
402
+ b. To the extent possible, if any provision of this Public License is
403
+ deemed unenforceable, it shall be automatically reformed to the
404
+ minimum extent necessary to make it enforceable. If the provision
405
+ cannot be reformed, it shall be severed from this Public License
406
+ without affecting the enforceability of the remaining terms and
407
+ conditions.
408
+
409
+ c. No term or condition of this Public License will be waived and no
410
+ failure to comply consented to unless expressly agreed to by the
411
+ Licensor.
412
+
413
+ d. Nothing in this Public License constitutes or may be interpreted
414
+ as a limitation upon, or waiver of, any privileges and immunities
415
+ that apply to the Licensor or You, including from the legal
416
+ processes of any jurisdiction or authority.
417
+
418
+ =======================================================================
419
+
420
+ Creative Commons is not a party to its public
421
+ licenses. Notwithstanding, Creative Commons may elect to apply one of
422
+ its public licenses to material it publishes and in those instances
423
+ will be considered the “Licensor.” The text of the Creative Commons
424
+ public licenses is dedicated to the public domain under the CC0 Public
425
+ Domain Dedication. Except for the limited purpose of indicating that
426
+ material is shared under a Creative Commons public license or as
427
+ otherwise permitted by the Creative Commons policies published at
428
+ creativecommons.org/policies, Creative Commons does not authorize the
429
+ use of the trademark "Creative Commons" or any other trademark or logo
430
+ of Creative Commons without its prior written consent including,
431
+ without limitation, in connection with any unauthorized modifications
432
+ to any of its public licenses or any other arrangements,
433
+ understandings, or agreements concerning use of licensed material. For
434
+ the avoidance of doubt, this paragraph does not form part of the
435
+ public licenses.
436
+
437
+ Creative Commons may be contacted at creativecommons.org.
README.md ADDED
@@ -0,0 +1,122 @@
+ <div align="center">
+
+ # ComfyUI Upscaler TensorRT ⚡
+
+ [![python](https://img.shields.io/badge/python-3.10.12-green)](https://www.python.org/downloads/release/python-31012/)
+ [![cuda](https://img.shields.io/badge/cuda-12.7-green)](https://developer.nvidia.com/cuda-downloads)
+ [![trt](https://img.shields.io/badge/TRT-10.9-green)](https://developer.nvidia.com/tensorrt)
+ [![by-nc-sa/4.0](https://img.shields.io/badge/license-CC--BY--NC--SA--4.0-lightgrey)](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en)
+
+ </div>
+
+ This project provides a [TensorRT](https://github.com/NVIDIA/TensorRT) implementation for fast image upscaling using models inside ComfyUI (2-4x faster).
+
+ <p align="center">
+ <img src="assets/node_v3.png" style="height: 400px" />
+ </p>
+
+ ## ⭐ Support
+ If you like my projects and wish to see updates and new features, please consider supporting me. It helps a lot!
+
+ [![ComfyUI-Depth-Anything-Tensorrt](https://img.shields.io/badge/ComfyUI--Depth--Anything--Tensorrt-blue?style=flat-square)](https://github.com/yuvraj108c/ComfyUI-Depth-Anything-Tensorrt)
+ [![ComfyUI-Upscaler-Tensorrt](https://img.shields.io/badge/ComfyUI--Upscaler--Tensorrt-blue?style=flat-square)](https://github.com/yuvraj108c/ComfyUI-Upscaler-Tensorrt)
+ [![ComfyUI-Dwpose-Tensorrt](https://img.shields.io/badge/ComfyUI--Dwpose--Tensorrt-blue?style=flat-square)](https://github.com/yuvraj108c/ComfyUI-Dwpose-Tensorrt)
+ [![ComfyUI-Rife-Tensorrt](https://img.shields.io/badge/ComfyUI--Rife--Tensorrt-blue?style=flat-square)](https://github.com/yuvraj108c/ComfyUI-Rife-Tensorrt)
+
+ [![ComfyUI-Whisper](https://img.shields.io/badge/ComfyUI--Whisper-gray?style=flat-square)](https://github.com/yuvraj108c/ComfyUI-Whisper)
+ [![ComfyUI_InvSR](https://img.shields.io/badge/ComfyUI__InvSR-gray?style=flat-square)](https://github.com/yuvraj108c/ComfyUI_InvSR)
+ [![ComfyUI-Thera](https://img.shields.io/badge/ComfyUI--Thera-gray?style=flat-square)](https://github.com/yuvraj108c/ComfyUI-Thera)
+ [![ComfyUI-Video-Depth-Anything](https://img.shields.io/badge/ComfyUI--Video--Depth--Anything-gray?style=flat-square)](https://github.com/yuvraj108c/ComfyUI-Video-Depth-Anything)
+ [![ComfyUI-PiperTTS](https://img.shields.io/badge/ComfyUI--PiperTTS-gray?style=flat-square)](https://github.com/yuvraj108c/ComfyUI-PiperTTS)
+
+ [![buy-me-coffees](https://i.imgur.com/3MDbAtw.png)](https://www.buymeacoffee.com/yuvraj108cZ)
+ [![paypal-donation](https://i.imgur.com/w5jjubk.png)](https://paypal.me/yuvraj108c)
+ ---
+
+ ## ⏱️ Performance
+
+ _Note: The following results were benchmarked on FP16 engines inside ComfyUI, using 100 identical frames_
+
+ | Device | Model | Input Resolution (WxH) | Output Resolution (WxH) | FPS |
+ | :----: | :-----------: | :--------------------: | :---------------------: | :-: |
+ | RTX5090 | 4x-UltraSharp | 512 x 512 | 2048 x 2048 | 12.7 |
+ | RTX5090 | 4x-UltraSharp | 1280 x 1280 | 5120 x 5120 | 2.0 |
+ | RTX4090 | 4x-UltraSharp | 512 x 512 | 2048 x 2048 | 6.7 |
+ | RTX4090 | 4x-UltraSharp | 1280 x 1280 | 5120 x 5120 | 1.1 |
+ | RTX3060 | 4x-UltraSharp | 512 x 512 | 2048 x 2048 | 2.2 |
+ | RTX3060 | 4x-UltraSharp | 1280 x 1280 | 5120 x 5120 | 0.35 |
+
+ ## 🚀 Installation
+ - Install via the ComfyUI Manager
+ - Or, navigate to the `/ComfyUI/custom_nodes` directory and run:
+
+ ```bash
+ git clone https://github.com/yuvraj108c/ComfyUI-Upscaler-Tensorrt.git
+ cd ./ComfyUI-Upscaler-Tensorrt
+ pip install -r requirements.txt
+ ```
+
+ ## 🛠️ Supported Models
+
+ - These upscaler models have been tested to work with TensorRT. The ONNX models are available [here](https://huggingface.co/yuvraj108c/ComfyUI-Upscaler-Onnx/tree/main)
+ - The exported TensorRT engines support dynamic image resolutions from 256x256 to 1280x1280 px (e.g. 960x540, 512x512, 1280x720, etc.)
+
+ - [4x-AnimeSharp](https://openmodeldb.info/models/4x-AnimeSharp)
+ - [4x-UltraSharp](https://openmodeldb.info/models/4x-UltraSharp)
+ - [4x-WTP-UDS-Esrgan](https://openmodeldb.info/models/4x-WTP-UDS-Esrgan)
+ - [4x_NMKD-Siax_200k](https://openmodeldb.info/models/4x-NMKD-Siax-CX)
+ - [4x_RealisticRescaler_100000_G](https://openmodeldb.info/models/4x-RealisticRescaler)
+ - [4x_foolhardy_Remacri](https://openmodeldb.info/models/4x-Remacri)
+ - [RealESRGAN_x4](https://openmodeldb.info/models/4x-realesrgan-x4plus)
+ - [4xNomos2_otf_esrgan](https://openmodeldb.info/models/4x-Nomos2-otf-esrgan)
+ - [4x-ClearRealityV1](https://openmodeldb.info/models/4x-ClearRealityV1)
+ - [4x_UniversalUpscalerV2-Neutral_115000_swaG](https://openmodeldb.info/models/4x-UniversalUpscalerV2-Neutral)
+ - [4x-UltraSharpV2_Lite](https://huggingface.co/Kim2091/UltraSharpV2)
+
+ ## ☀️ Usage
+
+ - Load the [example workflow](assets/tensorrt_upscaling_workflow.json)
+ - Choose the appropriate model from the dropdown
+ - The TensorRT engine will be built automatically
+ - Load an image with a resolution between 256 and 1280 px
+ - Set `resize_to` to resize the upscaled images to fixed resolutions (a rough sketch of the mapping follows below)
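The actual targets for `resize_to` are computed by `get_final_resolutions` in `utilities.py`, which is not fully visible in this commit view. The sketch below only illustrates the idea; the preset resolutions and multiplier handling are assumptions, not taken from the source.

```python
# Illustrative sketch only -- the real mapping lives in utilities.get_final_resolutions().
# Preset values below are assumptions (standard 16:9 resolutions, simple multipliers).
def resize_to_sketch(width, height, resize_to):
    presets = {"HD": (1280, 720), "FHD": (1920, 1080), "2k": (2560, 1440), "4k": (3840, 2160)}
    if resize_to == "none":
        return width * 4, height * 4            # keep the native 4x upscale
    if resize_to in ("2x", "3x"):
        factor = int(resize_to[0])
        return width * factor, height * factor  # resize relative to the input
    return presets[resize_to]                   # fixed output resolution
```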
+
+ ## 🔧 Custom Models
+ - To export other ESRGAN models, you'll have to build the ONNX model first, using [export_onnx.py](scripts/export_onnx.py)
+ - Place the ONNX model in `/ComfyUI/models/onnx/YOUR_MODEL.onnx`
+ - Then, add your model to the list in [load_upscaler_config.json](load_upscaler_config.json) (see the sketch after this list)
+ - Finally, run the same workflow and choose your model
+ - If you've tested another working TensorRT model, let me know so it can be added officially to this node
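For illustration, registering a custom model comes down to appending its name to the `model.options` array of `load_upscaler_config.json` (the file added later in this commit). A minimal sketch; `4x_MyCustomModel` is a placeholder name and the config path assumes the default install location:

```python
import json

# Assumed install path; the model name must match your .onnx filename (placeholder here)
config_path = "ComfyUI/custom_nodes/ComfyUI-Upscaler-Tensorrt/load_upscaler_config.json"

with open(config_path, "r+") as f:
    cfg = json.load(f)
    cfg["model"]["options"].append("4x_MyCustomModel")  # hypothetical model name
    f.seek(0)
    json.dump(cfg, f, indent=4)
    f.truncate()
```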
+
+ ## 🚨 Updates
+ ### 27 August 2025
+ - Support 4x-UltraSharpV2_Lite, 4x_UniversalUpscalerV2-Neutral_115000_swaG, 4x-ClearRealityV1
+ - Load models from config [PR#57](https://github.com/yuvraj108c/ComfyUI-Upscaler-Tensorrt/pull/57)
+
+ ### 30 April 2025
+ - Merge https://github.com/yuvraj108c/ComfyUI-Upscaler-Tensorrt/pull/48 by @BiiirdPrograms to fix a soft-lock by raising an error when input image dimensions are unsupported
+ ### 4 March 2025 (breaking)
+ - TensorRT engines are now built automatically from the workflow itself, to simplify the process for non-technical people
+ - Separate model loading and TensorRT processing into different nodes
+ - Optimise post processing
+ - Update ONNX export script
+
+ ## ⚠️ Known issues
+
+ - If you upgrade the TensorRT version, you'll have to rebuild the engines
+ - Only models with the ESRGAN architecture currently work
+ - High RAM usage when exporting `.pth` to `.onnx`
+
+ ## 🤖 Environment tested
+
+ - Ubuntu 22.04 LTS, CUDA 12.4, TensorRT 10.8, Python 3.10, H100 GPU
+ - Windows 11
+
+ ## 👏 Credits
+
+ - [NVIDIA/Stable-Diffusion-WebUI-TensorRT](https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT)
+ - [comfyanonymous/ComfyUI](https://github.com/comfyanonymous/ComfyUI)
+
+ ## License
+
+ [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
__init__.py ADDED
@@ -0,0 +1,203 @@
1
+ import os
2
+ import folder_paths
3
+ import numpy as np
4
+ import torch
5
+ from comfy.utils import ProgressBar
6
+ from .trt_utilities import Engine
7
+ from .utilities import download_file, ColoredLogger, get_final_resolutions
8
+ import comfy.model_management as mm
9
+ import time
10
+ import tensorrt
+ import json
12
+
13
+ logger = ColoredLogger("ComfyUI-Upscaler-Tensorrt")
14
+
15
+ IMAGE_DIM_MIN = 256
16
+ IMAGE_DIM_OPT = 512
17
+ IMAGE_DIM_MAX = 1280
18
+
19
+ # --- Function to load configuration ---
20
+ def load_node_config(config_filename="load_upscaler_config.json"):
21
+ """Loads node configuration from a JSON file."""
22
+ current_dir = os.path.dirname(__file__)
23
+ config_path = os.path.join(current_dir, config_filename)
24
+
25
+ default_config = { # Fallback in case file is missing or corrupt
26
+ "model": {
27
+ "options": ["4x-UltraSharp"],
28
+ "default": "4x-UltraSharp",
29
+ "tooltip": "Default model (fallback from code)"
30
+ },
31
+ "precision": {
32
+ "options": ["fp16", "fp32"],
33
+ "default": "fp16",
34
+ "tooltip": "Default precision (fallback from code)"
35
+ }
36
+ }
37
+
38
+ try:
39
+ with open(config_path, 'r') as f:
40
+ config = json.load(f)
41
+ logger.info(f"Successfully loaded configuration from {config_filename}")
42
+ return config
43
+ except FileNotFoundError:
44
+ logger.warning(f"Configuration file '{config_path}' not found. Using default fallback configuration.")
45
+ return default_config
46
+ except json.JSONDecodeError:
47
+ logger.error(f"Error decoding JSON from '{config_path}'. Using default fallback configuration.")
48
+ return default_config
49
+ except Exception as e:
50
+ logger.error(f"An unexpected error occurred while loading '{config_path}': {e}. Using default fallback.")
51
+ return default_config
52
+
53
+ # --- Load the configuration once when the module is imported ---
54
+ LOAD_UPSCALER_NODE_CONFIG = load_node_config()
55
+
56
+
57
+ class UpscalerTensorrt:
58
+ @classmethod
59
+ def INPUT_TYPES(s):
60
+ return {
61
+ "required": {
62
+ "images": ("IMAGE", {"tooltip": f"Images to be upscaled. Resolution must be between {IMAGE_DIM_MIN} and {IMAGE_DIM_MAX} px"}),
63
+ "upscaler_trt_model": ("UPSCALER_TRT_MODEL", {"tooltip": "Tensorrt model built and loaded"}),
64
+ "resize_to": (["none", "HD", "FHD", "2k", "4k", "2x", "3x"],{"tooltip": "Resize the upscaled image to fixed resolutions, optional"}),
65
+ }
66
+ }
67
+ RETURN_NAMES = ("IMAGE",)
68
+ RETURN_TYPES = ("IMAGE",)
69
+ FUNCTION = "upscaler_tensorrt"
70
+ CATEGORY = "tensorrt"
71
+ DESCRIPTION = "Upscale images with tensorrt"
72
+
73
+ def upscaler_tensorrt(self, images, upscaler_trt_model, resize_to):
74
+ images_bchw = images.permute(0, 3, 1, 2)
75
+ B, C, H, W = images_bchw.shape
76
+
77
+ for dim in (H, W):
78
+ if dim > IMAGE_DIM_MAX or dim < IMAGE_DIM_MIN:
79
+ raise ValueError(f"Input image dimensions fall outside of the supported range: {IMAGE_DIM_MIN} to {IMAGE_DIM_MAX} px!\nImage dimensions: {W}px by {H}px")
80
+
81
+ final_width, final_height = get_final_resolutions(W, H, resize_to)
82
+ logger.info(f"Upscaling {B} images from H:{H}, W:{W} to H:{H*4}, W:{W*4} | Final resolution: H:{final_height}, W:{final_width} | resize_to: {resize_to}")
83
+
84
+ shape_dict = {
85
+ "input": {"shape": (1, 3, H, W)},
86
+ "output": {"shape": (1, 3, H*4, W*4)},
87
+ }
88
+ upscaler_trt_model.activate()
89
+ upscaler_trt_model.allocate_buffers(shape_dict=shape_dict)
90
+
91
+ cudaStream = torch.cuda.current_stream().cuda_stream
92
+ pbar = ProgressBar(B)
93
+ images_list = list(torch.split(images_bchw, split_size_or_sections=1))
94
+
95
+ upscaled_frames = torch.empty((B, C, final_height, final_width), dtype=torch.float32, device=mm.intermediate_device())
96
+ must_resize = W*4 != final_width or H*4 != final_height
97
+
98
+ for i, img in enumerate(images_list):
99
+ result = upscaler_trt_model.infer({"input": img}, cudaStream)
100
+ result = result["output"]
101
+
102
+ if must_resize:
103
+ result = torch.nn.functional.interpolate(
104
+ result,
105
+ size=(final_height, final_width),
106
+ mode='bicubic',
107
+ antialias=True
108
+ )
109
+ upscaled_frames[i] = result.to(mm.intermediate_device())
110
+ pbar.update(1)
111
+
112
+ output = upscaled_frames.permute(0, 2, 3, 1)
113
+ upscaler_trt_model.reset()
114
+ mm.soft_empty_cache()
115
+
116
+ logger.info(f"Output shape: {output.shape}")
117
+ return (output,)
118
+
119
+ class LoadUpscalerTensorrtModel:
120
+ @classmethod
+ def INPUT_TYPES(cls):
122
+ # Use the pre-loaded configuration
123
+ model_config = LOAD_UPSCALER_NODE_CONFIG.get("model", {})
124
+ precision_config = LOAD_UPSCALER_NODE_CONFIG.get("precision", {})
125
+
126
+ # Provide sensible defaults if keys are missing in the config (though load_node_config handles this broadly)
127
+ model_options = model_config.get("options", ["4x-UltraSharp"])
128
+ model_default = model_config.get("default", "4x-UltraSharp")
129
+ model_tooltip = model_config.get("tooltip", "Select a model.")
130
+
131
+ precision_options = precision_config.get("options", ["fp16", "fp32"])
132
+ precision_default = precision_config.get("default", "fp16")
133
+ precision_tooltip = precision_config.get("tooltip", "Select precision.")
134
+
135
+ return {
136
+ "required": {
137
+ "model": (model_options, {"default": model_default, "tooltip": model_tooltip}),
138
+ "precision": (precision_options, {"default": precision_default, "tooltip": precision_tooltip}),
139
+ }
140
+ }
141
+
142
+ RETURN_NAMES = ("upscaler_trt_model",)
143
+ RETURN_TYPES = ("UPSCALER_TRT_MODEL",)
+ CATEGORY = "tensorrt"
+ DESCRIPTION = "Load tensorrt models, they will be built automatically if not found."
+ FUNCTION = "load_upscaler_tensorrt_model"
148
+
149
+ def load_upscaler_tensorrt_model(self, model, precision):
150
+ tensorrt_models_dir = os.path.join(folder_paths.models_dir, "tensorrt", "upscaler")
151
+ onnx_models_dir = os.path.join(folder_paths.models_dir, "onnx")
152
+
153
+ os.makedirs(tensorrt_models_dir, exist_ok=True)
154
+ os.makedirs(onnx_models_dir, exist_ok=True)
155
+
156
+ onnx_model_path = os.path.join(onnx_models_dir, f"{model}.onnx")
157
+
158
+ engine_channel = 3
159
+ engine_min_batch, engine_opt_batch, engine_max_batch = 1, 1, 1
160
+ engine_min_h, engine_opt_h, engine_max_h = IMAGE_DIM_MIN, IMAGE_DIM_OPT, IMAGE_DIM_MAX
161
+ engine_min_w, engine_opt_w, engine_max_w = IMAGE_DIM_MIN, IMAGE_DIM_OPT, IMAGE_DIM_MAX
162
+ tensorrt_model_path = os.path.join(tensorrt_models_dir, f"{model}_{precision}_{engine_min_batch}x{engine_channel}x{engine_min_h}x{engine_min_w}_{engine_opt_batch}x{engine_channel}x{engine_opt_h}x{engine_opt_w}_{engine_max_batch}x{engine_channel}x{engine_max_h}x{engine_max_w}_{tensorrt.__version__}.trt")
163
+
164
+ if not os.path.exists(tensorrt_model_path):
165
+ if not os.path.exists(onnx_model_path):
166
+ onnx_model_download_url = f"https://huggingface.co/yuvraj108c/ComfyUI-Upscaler-Onnx/resolve/main/{model}.onnx"
167
+ logger.info(f"Downloading {onnx_model_download_url}")
168
+ download_file(url=onnx_model_download_url, save_path=onnx_model_path)
169
+ else:
170
+ logger.info(f"Onnx model found at: {onnx_model_path}")
171
+
172
+ logger.info(f"Building TensorRT engine for {onnx_model_path}: {tensorrt_model_path}")
173
+ mm.soft_empty_cache()
174
+ s = time.time()
175
+ engine = Engine(tensorrt_model_path)
176
+ engine.build(
177
+ onnx_path=onnx_model_path,
+ fp16=(precision == "fp16"),
179
+ input_profile=[
+ {"input": [(engine_min_batch, engine_channel, engine_min_h, engine_min_w), (engine_opt_batch, engine_channel, engine_opt_h, engine_opt_w), (engine_max_batch, engine_channel, engine_max_h, engine_max_w)]},  # min / opt / max shapes for the dynamic input profile
181
+ ],
182
+ )
183
+ e = time.time()
184
+ logger.info(f"Time taken to build: {(e-s)} seconds")
185
+
186
+ logger.info(f"Loading TensorRT engine: {tensorrt_model_path}")
187
+ mm.soft_empty_cache()
188
+ engine = Engine(tensorrt_model_path)
189
+ engine.load()
190
+
191
+ return (engine,)
192
+
193
+ NODE_CLASS_MAPPINGS = {
194
+ "UpscalerTensorrt": UpscalerTensorrt,
195
+ "LoadUpscalerTensorrtModel": LoadUpscalerTensorrtModel,
196
+ }
197
+
198
+ NODE_DISPLAY_NAME_MAPPINGS = {
199
+ "UpscalerTensorrt": "Upscaler Tensorrt ⚡",
200
+ "LoadUpscalerTensorrtModel": "Load Upscale Tensorrt Model",
201
+ }
202
+
203
+ __all__ = ['NODE_CLASS_MAPPINGS', 'NODE_DISPLAY_NAME_MAPPINGS']
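For orientation, the two node classes above can be chained the same way the example workflow wires them. A minimal sketch, assuming a working ComfyUI + TensorRT environment and a BHWC float tensor in [0, 1]; this is not an officially supported standalone API:

```python
import torch  # assumes ComfyUI, tensorrt and this package are importable

# Build/load the engine (downloads the ONNX and builds the .trt file on first run)
(engine,) = LoadUpscalerTensorrtModel().load_upscaler_tensorrt_model(model="4x-UltraSharp", precision="fp16")

# Upscale a single 512x512 frame; with resize_to="none" the output is the native 4x size
images = torch.rand(1, 512, 512, 3)  # BHWC, values in [0, 1], dims within 256-1280 px
(upscaled,) = UpscalerTensorrt().upscaler_tensorrt(images, engine, resize_to="none")
print(upscaled.shape)  # expected: torch.Size([1, 2048, 2048, 3])
```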
assets/node_v3.png ADDED

Git LFS Details

  • SHA256: c48ec89874a37c0a31b7621bb528349bd5b6dfea87b38023b23f3f264868d5e6
  • Pointer size: 131 Bytes
  • Size of remote file: 291 kB
assets/tensorrt_upscaling_workflow.json ADDED
@@ -0,0 +1,184 @@
1
+ {
2
+ "last_node_id": 13,
3
+ "last_link_id": 16,
4
+ "nodes": [
5
+ {
6
+ "id": 5,
7
+ "type": "LoadImage",
8
+ "pos": [
9
+ -301.260986328125,
10
+ 16.96155548095703
11
+ ],
12
+ "size": [
13
+ 315,
14
+ 314
15
+ ],
16
+ "flags": {},
17
+ "order": 0,
18
+ "mode": 0,
19
+ "inputs": [],
20
+ "outputs": [
21
+ {
22
+ "name": "IMAGE",
23
+ "type": "IMAGE",
24
+ "links": [
25
+ 16
26
+ ]
27
+ },
28
+ {
29
+ "name": "MASK",
30
+ "type": "MASK",
31
+ "links": null
32
+ }
33
+ ],
34
+ "properties": {
35
+ "Node name for S&R": "LoadImage"
36
+ },
37
+ "widgets_values": [
38
+ "example.png",
39
+ "image"
40
+ ]
41
+ },
42
+ {
43
+ "id": 2,
44
+ "type": "LoadUpscalerTensorrtModel",
45
+ "pos": [
46
+ -297.7986755371094,
47
+ 450.532958984375
48
+ ],
49
+ "size": [
50
+ 315,
51
+ 82
52
+ ],
53
+ "flags": {},
54
+ "order": 1,
55
+ "mode": 0,
56
+ "inputs": [],
57
+ "outputs": [
58
+ {
59
+ "name": "upscaler_trt_model",
60
+ "type": "UPSCALER_TRT_MODEL",
61
+ "links": [
62
+ 1
63
+ ],
64
+ "slot_index": 0
65
+ }
66
+ ],
67
+ "properties": {
68
+ "Node name for S&R": "LoadUpscalerTensorrtModel"
69
+ },
70
+ "widgets_values": [
71
+ "4x-UltraSharp",
72
+ "fp16"
73
+ ]
74
+ },
75
+ {
76
+ "id": 13,
77
+ "type": "PreviewImage",
78
+ "pos": [
79
+ 519.0885009765625,
80
+ -51.63800048828125
81
+ ],
82
+ "size": [
83
+ 706.2752075195312,
84
+ 756.0552978515625
85
+ ],
86
+ "flags": {},
87
+ "order": 3,
88
+ "mode": 0,
89
+ "inputs": [
90
+ {
91
+ "name": "images",
92
+ "type": "IMAGE",
93
+ "link": 14
94
+ }
95
+ ],
96
+ "outputs": [],
97
+ "properties": {
98
+ "Node name for S&R": "PreviewImage"
99
+ },
100
+ "widgets_values": []
101
+ },
102
+ {
103
+ "id": 3,
104
+ "type": "UpscalerTensorrt",
105
+ "pos": [
106
+ 111.23614501953125,
107
+ 352.1241760253906
108
+ ],
109
+ "size": [
110
+ 315,
111
+ 78
112
+ ],
113
+ "flags": {},
114
+ "order": 2,
115
+ "mode": 0,
116
+ "inputs": [
117
+ {
118
+ "name": "images",
119
+ "type": "IMAGE",
120
+ "link": 16
121
+ },
122
+ {
123
+ "name": "upscaler_trt_model",
124
+ "type": "UPSCALER_TRT_MODEL",
125
+ "link": 1
126
+ }
127
+ ],
128
+ "outputs": [
129
+ {
130
+ "name": "IMAGE",
131
+ "type": "IMAGE",
132
+ "links": [
133
+ 14
134
+ ],
135
+ "slot_index": 0
136
+ }
137
+ ],
138
+ "properties": {
139
+ "Node name for S&R": "UpscalerTensorrt"
140
+ },
141
+ "widgets_values": [
142
+ "none"
143
+ ]
144
+ }
145
+ ],
146
+ "links": [
147
+ [
148
+ 1,
149
+ 2,
150
+ 0,
151
+ 3,
152
+ 1,
153
+ "UPSCALER_TRT_MODEL"
154
+ ],
155
+ [
156
+ 14,
157
+ 3,
158
+ 0,
159
+ 13,
160
+ 0,
161
+ "IMAGE"
162
+ ],
163
+ [
164
+ 16,
165
+ 5,
166
+ 0,
167
+ 3,
168
+ 0,
169
+ "IMAGE"
170
+ ]
171
+ ],
172
+ "groups": [],
173
+ "config": {},
174
+ "extra": {
175
+ "ds": {
176
+ "scale": 1,
177
+ "offset": [
178
+ 500.6130318234485,
179
+ 102.83383472565859
180
+ ]
181
+ }
182
+ },
183
+ "version": 0.4
184
+ }
load_upscaler_config.json ADDED
@@ -0,0 +1,24 @@
+ {
+     "model": {
+         "options": [
+             "4x-AnimeSharp",
+             "4x-UltraSharp",
+             "4x-WTP-UDS-Esrgan",
+             "4x_NMKD-Siax_200k",
+             "4x_RealisticRescaler_100000_G",
+             "4x_foolhardy_Remacri",
+             "RealESRGAN_x4",
+             "4xNomos2_otf_esrgan",
+             "4x_UniversalUpscalerV2-Neutral_115000_swaG",
+             "4x-ClearRealityV1",
+             "4x-UltraSharpV2_Lite"
+         ],
+         "default": "4x-UltraSharp",
+         "tooltip": "These models have been tested with tensorrt. Loaded from config."
+     },
+     "precision": {
+         "options": ["fp16", "fp32"],
+         "default": "fp16",
+         "tooltip": "Precision to build the tensorrt engines. Loaded from config."
+     }
+ }
requirements.txt ADDED
@@ -0,0 +1,3 @@
+ tensorrt<=10.12.0.36
+ polygraphy
+ requests
scripts/export_onnx.py ADDED
@@ -0,0 +1,79 @@
1
+ # download the upscale models & place inside models/upscaler_models
2
+ # edit model paths accordingly
3
+
4
+ import torch
5
+ import folder_paths
6
+ from spandrel import ModelLoader, ImageModelDescriptor
7
+
8
+ model_name = "4xNomos2_otf_esrgan.pth"
9
+ onnx_save_path = "./4xNomos2_otf_esrgan.onnx"
10
+
11
+ model_path = folder_paths.get_full_path_or_raise("upscale_models", model_name)
12
+ model = ModelLoader().load_from_file(model_path).model.eval().cuda()
13
+
14
+ # Check dynamic shapes for esrgan 4x model
15
+ def supports_dynamic_shapes_esrgan(model, scale=4):
16
+
17
+ input_shapes = [
18
+ (1, 3, 64, 64),
19
+ (1, 3, 128, 128),
20
+ (1, 3, 256, 192),
21
+ (1, 3, 512, 256),
22
+ (1, 3, 512, 512)
23
+ ]
24
+
25
+ all_passed = True
26
+
27
+ with torch.no_grad():
28
+ for shape in input_shapes:
29
+ try:
30
+ dummy_input = torch.randn(*shape).cuda()
31
+ output = model(dummy_input)
32
+
33
+ expected_h = shape[2] * scale
34
+ expected_w = shape[3] * scale
35
+
36
+ assert output.shape[0] == shape[0], "Batch size mismatch"
37
+ assert output.shape[1] == shape[1], "Channel mismatch"
38
+ assert output.shape[2] == expected_h, f"Height mismatch: expected {expected_h}, got {output.shape[2]}"
39
+ assert output.shape[3] == expected_w, f"Width mismatch: expected {expected_w}, got {output.shape[3]}"
40
+
41
+ print(f"Success: input {shape} → output {output.shape}")
42
+ except Exception as e:
43
+ all_passed = False
44
+ print(f"Failure: input {shape} → error: {e}")
45
+ torch.cuda.empty_cache()
46
+
47
+ if all_passed: print(f"Success: Dynamic shapes supported.")
48
+ if not all_passed: print(f"Failure: Dynamic shapes NOT supported.")
49
+ return all_passed
50
+
51
+ # Use smaller dummy input if model supports
52
+ if supports_dynamic_shapes_esrgan(model):
53
+ shape = (1, 3, 64, 64)
54
+ print(f"Using {shape} input (less VRAM usage)")
55
+ else:
56
+ shape = (1, 3, 512, 512)
57
+ print(f"Using {shape} input (large VRAM usage)")
58
+
59
+ x = torch.rand(*shape).cuda()
60
+
61
+ dynamic_axes = {
+ "input": {0: "batch_size", 2: "height", 3: "width"},
+ "output": {0: "batch_size", 2: "height", 3: "width"},
64
+ }
65
+
66
+ with torch.no_grad():
67
+ torch.onnx.export(
68
+ model,
69
+ x,
70
+ onnx_save_path,
71
+ verbose=True,
72
+ input_names=['input'],
73
+ output_names=['output'],
74
+ opset_version=17,
75
+ export_params=True,
76
+ dynamic_axes=dynamic_axes,
77
+ )
78
+
79
+ print("Saved onnx to:", onnx_save_path)
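Since the script imports `folder_paths`, it presumably has to run in an environment where ComfyUI's modules are importable; the only edits required are the two path variables at the top. Hypothetical values (the `.pth` file is assumed to already sit in `models/upscale_models`):

```python
# hypothetical example values for the two variables at the top of the script
model_name = "4x-UltraSharp.pth"         # must exist in ComfyUI/models/upscale_models
onnx_save_path = "./4x-UltraSharp.onnx"  # where the exported ONNX graph will be written
```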
scripts/export_trt.py ADDED
@@ -0,0 +1,29 @@
1
+ import torch
2
+ import time
3
+ from utilities import Engine
4
+
5
+ def export_trt(trt_path=None, onnx_path=None, use_fp16=True):
6
+ if trt_path is None:
7
+ trt_path = input("Enter the path to save the TensorRT engine (e.g ./realesrgan.engine): ")
8
+ if onnx_path is None:
9
+ onnx_path = input("Enter the path to the ONNX model (e.g ./realesrgan.onnx): ")
10
+
11
+ engine = Engine(trt_path)
12
+
13
+ torch.cuda.empty_cache()
14
+
15
+ s = time.time()
16
+ ret = engine.build(
17
+ onnx_path,
18
+ use_fp16,
19
+ enable_preview=True,
20
+ input_profile=[
21
+ {"input": [(1,3,256,256), (1,3,512,512), (1,3,1280,1280)]}, # any sizes from 256x256 to 1280x1280
22
+ ],
23
+ )
24
+ e = time.time()
25
+ print(f"Time taken to build: {(e-s)} seconds")
26
+
27
+ return ret
28
+
29
+ export_trt()
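The script prompts for both paths when run directly; `export_trt()` can also be called with explicit arguments from another script. A sketch with assumed paths:

```python
# assumed paths; use_fp16=True matches the fp16 engines benchmarked in the README
export_trt(
    trt_path="./4x-UltraSharp.engine",
    onnx_path="./4x-UltraSharp.onnx",
    use_fp16=True,
)
```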
scripts/export_trt_from_directory.py ADDED
@@ -0,0 +1,62 @@
1
+ import os
2
+ import torch
3
+ import time
4
+ from utilities import Engine
5
+
6
+ def export_trt(trt_path=None, onnx_path=None, use_fp16=True):
7
+ option = input("Choose an option:\n1. Convert a single ONNX file\n2. Convert all ONNX files in a directory\nEnter your choice (1 or 2): ")
8
+
9
+ if option == '1':
10
+ onnx_path = input("Enter the path to the ONNX model (e.g ./realesrgan.onnx): ")
11
+ onnx_files = [onnx_path]
12
+ trt_dir = input("Enter the path to save the TensorRT engine (e.g ./trt_engine/): ")
13
+ elif option == '2':
14
+ onnx_dir = input("Enter the directory path containing ONNX models (e.g ./onnx_models/): ")
15
+ onnx_files = [os.path.join(onnx_dir, file) for file in os.listdir(onnx_dir) if file.endswith('.onnx')]
16
+ if not onnx_files:
17
+ raise ValueError(f"No .onnx files found in directory: {onnx_dir}")
18
+ trt_dir = input("Enter the directory path to save the TensorRT engines (e.g ./trt_engine/): ")
19
+ else:
20
+ raise ValueError("Invalid option. Please choose either 1 or 2.")
21
+
22
+ # Check if trt_dir already exists as a directory
23
+ if not os.path.exists(trt_dir):
24
+ os.makedirs(trt_dir)
25
+
26
+ #os.makedirs(trt_dir, exist_ok=True)
27
+ total_files = len(onnx_files)
28
+ for index, onnx_path in enumerate(onnx_files):
32
+ base_name = os.path.splitext(os.path.basename(onnx_path))[0]
33
+ trt_path = os.path.join(trt_dir, f"{base_name}.engine")
34
+
35
+ print(f"Converting {onnx_path} to {trt_path}")
36
+
37
+ s = time.time()
38
+
39
+ # Initialize Engine with trt_path and clear CUDA cache
40
+ engine = Engine(trt_path)
41
+ torch.cuda.empty_cache()
42
+
43
+ ret = engine.build(
44
+ onnx_path,
45
+ use_fp16,
46
+ enable_preview=True,
47
+ input_profile=[
48
+ {"input": [(1,3,256,256), (1,3,512,512), (1,3,1280,1280)]}, # any sizes from 256x256 to 1280x1280
49
+ ],
50
+ )
51
+
52
+ e = time.time()
53
+ print(f"Time taken to build: {(e-s)} seconds")
54
+ if index < total_files - 1:
55
+ # Delay for 10 seconds
56
+ print("Delaying for 10 seconds...")
57
+ time.sleep(10)
58
+ print("Resuming operations after delay...")
59
+
60
+ return
61
+
62
+ export_trt()
trt_utilities.py ADDED
@@ -0,0 +1,283 @@
1
+ #
2
+ # Copyright 2022 The HuggingFace Inc. team.
3
+ # SPDX-FileCopyrightText: Copyright (c) 1993-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
4
+ # SPDX-License-Identifier: Apache-2.0
5
+ #
6
+ # Licensed under the Apache License, Version 2.0 (the "License");
7
+ # you may not use this file except in compliance with the License.
8
+ # You may obtain a copy of the License at
9
+ #
10
+ # http://www.apache.org/licenses/LICENSE-2.0
11
+ #
12
+ # Unless required by applicable law or agreed to in writing, software
13
+ # distributed under the License is distributed on an "AS IS" BASIS,
14
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15
+ # See the License for the specific language governing permissions and
16
+ # limitations under the License.
17
+ #
18
+ import torch
19
+ from torch.cuda import nvtx
20
+ from collections import OrderedDict
21
+ import numpy as np
22
+ from polygraphy.backend.common import bytes_from_path
23
+ from polygraphy import util
24
+ from polygraphy.backend.trt import ModifyNetworkOutputs, Profile
25
+ from polygraphy.backend.trt import (
26
+ engine_from_bytes,
27
+ engine_from_network,
28
+ network_from_onnx_path,
29
+ save_engine,
30
+ )
31
+ from polygraphy.logger import G_LOGGER
32
+ import tensorrt as trt
33
+ from logging import error, warning
34
+ from tqdm import tqdm
35
+ import copy
36
+
37
+ TRT_LOGGER = trt.Logger(trt.Logger.ERROR)
38
+ G_LOGGER.module_severity = G_LOGGER.ERROR
39
+
40
+ # Map of numpy dtype -> torch dtype
41
+ numpy_to_torch_dtype_dict = {
42
+ np.uint8: torch.uint8,
43
+ np.int8: torch.int8,
44
+ np.int16: torch.int16,
45
+ np.int32: torch.int32,
46
+ np.int64: torch.int64,
47
+ np.float16: torch.float16,
48
+ np.float32: torch.float32,
49
+ np.float64: torch.float64,
50
+ np.complex64: torch.complex64,
51
+ np.complex128: torch.complex128,
52
+ }
53
+ if np.version.full_version >= "1.24.0":
54
+ numpy_to_torch_dtype_dict[np.bool_] = torch.bool
55
+ else:
56
+ numpy_to_torch_dtype_dict[np.bool] = torch.bool
57
+
58
+ # Map of torch dtype -> numpy dtype
59
+ torch_to_numpy_dtype_dict = {
60
+ value: key for (key, value) in numpy_to_torch_dtype_dict.items()
61
+ }
62
+
63
+ class TQDMProgressMonitor(trt.IProgressMonitor):
64
+ def __init__(self):
65
+ trt.IProgressMonitor.__init__(self)
66
+ self._active_phases = {}
67
+ self._step_result = True
68
+ self.max_indent = 5
69
+
70
+ def phase_start(self, phase_name, parent_phase, num_steps):
71
+ leave = False
72
+ try:
73
+ if parent_phase is not None:
74
+ nbIndents = (
75
+ self._active_phases.get(parent_phase, {}).get(
76
+ "nbIndents", self.max_indent
77
+ )
78
+ + 1
79
+ )
80
+ if nbIndents >= self.max_indent:
81
+ return
82
+ else:
83
+ nbIndents = 0
84
+ leave = True
85
+ self._active_phases[phase_name] = {
86
+ "tq": tqdm(
87
+ total=num_steps, desc=phase_name, leave=leave, position=nbIndents
88
+ ),
89
+ "nbIndents": nbIndents,
90
+ "parent_phase": parent_phase,
91
+ }
92
+ except KeyboardInterrupt:
93
+ # The phase_start callback cannot directly cancel the build, so request the cancellation from within step_complete.
+ self._step_result = False
95
+
96
+ def phase_finish(self, phase_name):
97
+ try:
98
+ if phase_name in self._active_phases.keys():
99
+ self._active_phases[phase_name]["tq"].update(
100
+ self._active_phases[phase_name]["tq"].total
101
+ - self._active_phases[phase_name]["tq"].n
102
+ )
103
+
104
+ parent_phase = self._active_phases[phase_name].get("parent_phase", None)
105
+ while parent_phase is not None:
106
+ self._active_phases[parent_phase]["tq"].refresh()
107
+ parent_phase = self._active_phases[parent_phase].get(
108
+ "parent_phase", None
109
+ )
110
+ if (
111
+ self._active_phases[phase_name]["parent_phase"]
112
+ in self._active_phases.keys()
113
+ ):
114
+ self._active_phases[
115
+ self._active_phases[phase_name]["parent_phase"]
116
+ ]["tq"].refresh()
117
+ del self._active_phases[phase_name]
118
+ pass
119
+ except KeyboardInterrupt:
+ self._step_result = False
121
+
122
+ def step_complete(self, phase_name, step):
123
+ try:
124
+ if phase_name in self._active_phases.keys():
125
+ self._active_phases[phase_name]["tq"].update(
126
+ step - self._active_phases[phase_name]["tq"].n
127
+ )
128
+ return self._step_result
129
+ except KeyboardInterrupt:
130
+ # There is no need to propagate this exception to TensorRT. We can simply cancel the build.
131
+ return False
132
+
133
+
134
+ class Engine:
+     def __init__(
+         self,
+         engine_path,
+     ):
+         self.engine_path = engine_path
+         self.engine = None
+         self.context = None
+         self.buffers = OrderedDict()
+         self.tensors = OrderedDict()
+         self.cuda_graph_instance = None  # cuda graph
+
+     def __del__(self):
+         del self.engine
+         del self.context
+         del self.buffers
+         del self.tensors
+
+     def reset(self, engine_path=None):
+         # del self.engine
+         del self.context
+         del self.buffers
+         del self.tensors
+         # self.engine_path = engine_path
+
+         self.context = None
+         self.buffers = OrderedDict()
+         self.tensors = OrderedDict()
+         self.inputs = {}
+         self.outputs = {}
+
+     def build(
+         self,
+         onnx_path,
+         fp16,
+         input_profile=None,
+         enable_refit=False,
+         enable_preview=False,
+         enable_all_tactics=False,
+         timing_cache=None,
+         update_output_names=None,
+     ):
+         p = [Profile()]
+         if input_profile:
+             p = [Profile() for _ in range(len(input_profile))]
+             for _p, i_profile in zip(p, input_profile):
+                 for name, dims in i_profile.items():
+                     assert len(dims) == 3
+                     _p.add(name, min=dims[0], opt=dims[1], max=dims[2])
+
+         # NOTE: config_kwargs is not applied to the builder config below.
+         config_kwargs = {}
+         if not enable_all_tactics:
+             config_kwargs["tactic_sources"] = []
+
+         network = network_from_onnx_path(
+             onnx_path, flags=[trt.OnnxParserFlag.NATIVE_INSTANCENORM]
+         )
+         if update_output_names:
+             print(f"Updating network outputs to {update_output_names}")
+             network = ModifyNetworkOutputs(network, update_output_names)
+
+         builder = network[0]
+         config = builder.create_builder_config()
+         config.progress_monitor = TQDMProgressMonitor()
+
+         if fp16:
+             config.set_flag(trt.BuilderFlag.FP16)
+         if enable_refit:
+             config.set_flag(trt.BuilderFlag.REFIT)
+
+         profiles = copy.deepcopy(p)
+         for profile in profiles:
+             # Convert each (Polygraphy) profile into a TensorRT optimization profile.
+             trt_profile = profile.fill_defaults(network[1]).to_trt(
+                 builder, network[1]
+             )
+             config.add_optimization_profile(trt_profile)
+
+         try:
+             engine = engine_from_network(
+                 network,
+                 config,
+             )
+         except Exception as e:
+             error(f"Failed to build engine: {e}")
+             return 1
+         try:
+             save_engine(engine, path=self.engine_path)
+         except Exception as e:
+             error(f"Failed to save engine: {e}")
+             return 1
+         return 0
+
+     def load(self):
+         self.engine = engine_from_bytes(bytes_from_path(self.engine_path))
+
+     def activate(self, reuse_device_memory=None):
+         if reuse_device_memory:
+             self.context = self.engine.create_execution_context_without_device_memory()
+             # self.context.device_memory = reuse_device_memory
+         else:
+             self.context = self.engine.create_execution_context()
+
+     def allocate_buffers(self, shape_dict=None, device="cuda"):
+         nvtx.range_push("allocate_buffers")
+         for idx in range(self.engine.num_io_tensors):
+             name = self.engine.get_tensor_name(idx)
+             binding = self.engine[idx]
+             if shape_dict and binding in shape_dict:
+                 shape = shape_dict[binding]["shape"]
+             else:
+                 shape = self.context.get_tensor_shape(name)
+
+             dtype = trt.nptype(self.engine.get_tensor_dtype(name))
+             if self.engine.get_tensor_mode(name) == trt.TensorIOMode.INPUT:
+                 self.context.set_input_shape(name, shape)
+             tensor = torch.empty(
+                 tuple(shape), dtype=numpy_to_torch_dtype_dict[dtype]
+             ).to(device=device)
+             self.tensors[binding] = tensor
+         nvtx.range_pop()
+
+     def infer(self, feed_dict, stream, use_cuda_graph=False):
+         nvtx.range_push("set_tensors")
+         for name, buf in feed_dict.items():
+             self.tensors[name].copy_(buf)
+
+         for name, tensor in self.tensors.items():
+             self.context.set_tensor_address(name, tensor.data_ptr())
+         nvtx.range_pop()
+         nvtx.range_push("execute")
+         noerror = self.context.execute_async_v3(stream)
+         if not noerror:
+             raise ValueError("ERROR: inference failed.")
+         nvtx.range_pop()
+         return self.tensors
+
+     def __str__(self):
+         out = ""
+
+         # When an error is raised in the upscaler, this __str__() is called by comfy's execution.py,
+         # but the engine may not have the attributes required for stringification.
+         # If __str__() also raised, comfy would get soft-locked and stop running prompts until restarted.
+         if not hasattr(self.engine, "num_optimization_profiles") or not hasattr(self.engine, "num_bindings"):
+             return out
+
+         for opt_profile in range(self.engine.num_optimization_profiles):
+             for binding_idx in range(self.engine.num_bindings):
+                 name = self.engine.get_binding_name(binding_idx)
+                 shape = self.engine.get_profile_shape(opt_profile, name)
+                 out += f"\t{name} = {shape}\n"
+         return out
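Taken together, the intended lifecycle of this Engine class is build → load → activate → allocate_buffers → infer. A hypothetical sketch, assuming a 4x upscaler ONNX export whose I/O tensors are named "input" and "output" and using placeholder file paths (the node code in this repo supplies its own paths, tensor names and shapes):

    import torch

    engine = Engine("4x-UltraSharp.engine")                        # placeholder engine path
    status = engine.build(
        "4x-UltraSharp.onnx",                                      # placeholder onnx path
        fp16=True,
        input_profile=[{"input": [(1, 3, 256, 256),                # min shape
                                  (1, 3, 512, 512),                # opt shape
                                  (1, 3, 1280, 1280)]}],           # max shape
    )
    assert status == 0, "engine build or save failed"

    engine.load()
    engine.activate()
    engine.allocate_buffers(shape_dict={
        "input":  {"shape": (1, 3, 512, 512)},
        "output": {"shape": (1, 3, 2048, 2048)},                   # 4x the input resolution
    })

    stream = torch.cuda.current_stream().cuda_stream               # raw stream handle for execute_async_v3
    result = engine.infer({"input": torch.rand(1, 3, 512, 512, device="cuda")}, stream)
    upscaled = result["output"]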
utilities.py ADDED
@@ -0,0 +1,139 @@
+ import requests
+ from tqdm import tqdm
+ import logging
+ import sys
+
+ class ColoredLogger:
+     COLORS = {
+         'RED': '\033[91m',
+         'GREEN': '\033[92m',
+         'YELLOW': '\033[93m',
+         'BLUE': '\033[94m',
+         'MAGENTA': '\033[95m',
+         'RESET': '\033[0m'
+     }
+
+     LEVEL_COLORS = {
+         'DEBUG': COLORS['BLUE'],
+         'INFO': COLORS['GREEN'],
+         'WARNING': COLORS['YELLOW'],
+         'ERROR': COLORS['RED'],
+         'CRITICAL': COLORS['MAGENTA']
+     }
+
+     def __init__(self, name="MY-APP"):
+         self.logger = logging.getLogger(name)
+         self.logger.setLevel(logging.DEBUG)
+         self.app_name = name
+
+         # Prevent message propagation to parent loggers
+         self.logger.propagate = False
+
+         # Clear existing handlers
+         self.logger.handlers = []
+
+         # Create console handler
+         handler = logging.StreamHandler(sys.stdout)
+         handler.setLevel(logging.DEBUG)
+
+         # Custom formatter class to handle colored components
+         class ColoredFormatter(logging.Formatter):
+             def format(self, record):
+                 # Color the level name according to severity
+                 level_color = ColoredLogger.LEVEL_COLORS.get(record.levelname, '')
+                 colored_levelname = f"{level_color}{record.levelname}{ColoredLogger.COLORS['RESET']}"
+
+                 # Color the logger name in blue
+                 colored_name = f"{ColoredLogger.COLORS['BLUE']}{record.name}{ColoredLogger.COLORS['RESET']}"
+
+                 # Set the colored components
+                 record.levelname = colored_levelname
+                 record.name = colored_name
+
+                 return super().format(record)
+
+         # Create formatter with the new format
+         formatter = ColoredFormatter('[%(name)s|%(levelname)s] - %(message)s')
+         handler.setFormatter(formatter)
+
+         self.logger.addHandler(handler)
+
+
+     def debug(self, message):
+         self.logger.debug(f"{self.COLORS['BLUE']}{message}{self.COLORS['RESET']}")
+
+     def info(self, message):
+         self.logger.info(f"{self.COLORS['GREEN']}{message}{self.COLORS['RESET']}")
+
+     def warning(self, message):
+         self.logger.warning(f"{self.COLORS['YELLOW']}{message}{self.COLORS['RESET']}")
+
+     def error(self, message):
+         self.logger.error(f"{self.COLORS['RED']}{message}{self.COLORS['RESET']}")
+
+     def critical(self, message):
+         self.logger.critical(f"{self.COLORS['MAGENTA']}{message}{self.COLORS['RESET']}")
+
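ColoredLogger is a thin wrapper around the standard logging module that colour-codes the logger name, level and message with ANSI escapes. A short usage sketch (the logger name here is just an example):

    logger = ColoredLogger("ComfyUI-Upscaler-Tensorrt")
    logger.info("Engine loaded")            # prints [ComfyUI-Upscaler-Tensorrt|INFO] - Engine loaded, in green
    logger.warning("Falling back to FP32")  # yellow
    logger.error("Failed to build engine")  # red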
+ def download_file(url, save_path):
+     """
+     Download a file from URL with progress bar
+
+     Args:
+         url (str): URL of the file to download
+         save_path (str): Path to save the file as
+     """
+     GREEN = '\033[92m'
+     RESET = '\033[0m'
+     response = requests.get(url, stream=True)
+     total_size = int(response.headers.get('content-length', 0))
+
+     with open(save_path, 'wb') as file, tqdm(
+         desc=save_path,
+         total=total_size,
+         unit='iB',
+         unit_scale=True,
+         unit_divisor=1024,
+         colour='green',
+         bar_format=f'{GREEN}{{l_bar}}{{bar}}{RESET}{GREEN}{{r_bar}}{RESET}'
+     ) as progress_bar:
+         for data in response.iter_content(chunk_size=1024):
+             size = file.write(data)
+             progress_bar.update(size)
+
104
+ final_width = None
105
+ final_height = None
106
+ aspect_ratio = float(width/height)
107
+
108
+ match resize_to:
109
+ case "HD":
110
+ final_width = 1280
111
+ final_height = 720
112
+ case "FHD":
113
+ final_width = 1920
114
+ final_height = 1080
115
+ case "2k":
116
+ final_width = 2560
117
+ final_height = 1440
118
+ case "4k":
119
+ final_width = 3840
120
+ final_height = 2160
121
+ case "none":
122
+ final_width = width*4
123
+ final_height = height*4
124
+ case "2x":
125
+ final_width = width*2
126
+ final_height = height*2
127
+ case "3x":
128
+ final_width = width*3
129
+ final_height = height*3
130
+
131
+ if aspect_ratio == 1.0:
132
+ final_width = final_height
133
+
134
+ if aspect_ratio < 1.0 and resize_to not in ("none", "2x", "3x"):
135
+ temp = final_width
136
+ final_width = final_height
137
+ final_height = temp
138
+
139
+ return (final_width, final_height)
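A few worked examples of get_final_resolutions, to make the aspect-ratio handling concrete (values follow directly from the code above):

    get_final_resolutions(640, 360, "FHD")    # -> (1920, 1080): landscape input keeps the preset as-is
    get_final_resolutions(360, 640, "FHD")    # -> (1080, 1920): portrait input swaps the preset's width and height
    get_final_resolutions(512, 512, "4k")     # -> (2160, 2160): square input uses the preset height for both sides
    get_final_resolutions(480, 360, "none")   # -> (1920, 1440): "none" simply multiplies the input size by 4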