Compare commits
824 Commits
version3.1
...
version3.5
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
2f83b60fb3 | ||
|
|
12c8cd75ee | ||
|
|
0e21e3e2e7 | ||
|
|
fda1e87278 | ||
|
|
1092031d77 | ||
|
|
f0482d3bae | ||
|
|
b6ac3d0d6c | ||
|
|
3344ffcb8b | ||
|
|
82936f71b6 | ||
|
|
51e809c09e | ||
|
|
713df396dc | ||
|
|
23a42d93df | ||
|
|
0ef06683dc | ||
|
|
843113ba0f | ||
|
|
79080290c6 | ||
|
|
9bd2023a8e | ||
|
|
0d6e32d31a | ||
|
|
0418257218 | ||
|
|
a3e6fc0141 | ||
|
|
1dd165a3cd | ||
|
|
e666b5269e | ||
|
|
0b70e9df7b | ||
|
|
1639796041 | ||
|
|
d0af074225 | ||
|
|
6d7f3feab3 | ||
|
|
045b7f6312 | ||
|
|
116b7ce12f | ||
|
|
8b0905c076 | ||
|
|
b69140307b | ||
|
|
b31abbcad3 | ||
|
|
2d5a1fbc12 | ||
|
|
89de49f31e | ||
|
|
a208782049 | ||
|
|
eb802ee975 | ||
|
|
f40d48b014 | ||
|
|
ef4203f5ca | ||
|
|
adf93195e8 | ||
|
|
3e5cdbaf68 | ||
|
|
27cab3b38a | ||
|
|
09d38e4abf | ||
|
|
7efb5cb6f5 | ||
|
|
31ff6e1e7a | ||
|
|
2fa3d47887 | ||
|
|
2cca46375c | ||
|
|
06410b593c | ||
|
|
545c9f47de | ||
|
|
973ad41bde | ||
|
|
3fa7416eb2 | ||
|
|
ec76d3dcc4 | ||
|
|
3f27bec94b | ||
|
|
ed11269aef | ||
|
|
6c653734ec | ||
|
|
19bd0c35ed | ||
|
|
3f4c4ebc29 | ||
|
|
6cc7d4ed69 | ||
|
|
67fff17917 | ||
|
|
8fce49fa02 | ||
|
|
30f28b37c3 | ||
|
|
6a5681dd0a | ||
|
|
dacc282763 | ||
|
|
9720bec5e5 | ||
|
|
8b3b883fce | ||
|
|
4dc0f8e57a | ||
|
|
5e48fc98ed | ||
|
|
2ff8dc787e | ||
|
|
cd38d1697c | ||
|
|
00f63cb0bc | ||
|
|
dc7fab3c19 | ||
|
|
d1b5359e2b | ||
|
|
0597ffea2e | ||
|
|
d16329c1af | ||
|
|
d5b4d7ab90 | ||
|
|
8199a9a12e | ||
|
|
cb10a8abec | ||
|
|
0dbcda89b7 | ||
|
|
78a8259b82 | ||
|
|
f22fdb4f94 | ||
|
|
450645a9d0 | ||
|
|
af23730f8f | ||
|
|
0b11260d6f | ||
|
|
31ab97dd09 | ||
|
|
c0c4834cfc | ||
|
|
2dae40f4ba | ||
|
|
587c7400d1 | ||
|
|
8dd2e2a6b7 | ||
|
|
aaf4f37403 | ||
|
|
3e2e81a968 | ||
|
|
cc1be5585b | ||
|
|
5050016b22 | ||
|
|
7662196514 | ||
|
|
8ddaca09e0 | ||
|
|
71c692dcef | ||
|
|
184e417fec | ||
|
|
7a99560183 | ||
|
|
48f4d6aa2a | ||
|
|
c17fc2a9b5 | ||
|
|
4d70b3786f | ||
|
|
9bee676cd2 | ||
|
|
0a37106692 | ||
|
|
57d4541d4e | ||
|
|
d7dd586f09 | ||
|
|
b6b53ce2a4 | ||
|
|
43809c107d | ||
|
|
1721edc990 | ||
|
|
bfb7aab4a0 | ||
|
|
f4a87d6380 | ||
|
|
c0c337988f | ||
|
|
27f65c251a | ||
|
|
87f099f740 | ||
|
|
484f16e365 | ||
|
|
37afcc709b | ||
|
|
9cbe9f240d | ||
|
|
f6567c02f6 | ||
|
|
8c83061a93 | ||
|
|
23f2adfdc3 | ||
|
|
61698444b1 | ||
|
|
109afcf8f6 | ||
|
|
19ef6a530a | ||
|
|
e08bd9669e | ||
|
|
155a7e1174 | ||
|
|
86e33ea99a | ||
|
|
524684f8bd | ||
|
|
2a362cec84 | ||
|
|
2747c23868 | ||
|
|
f446dbb62d | ||
|
|
8d37d94e2c | ||
|
|
e4ba0e6c85 | ||
|
|
4216c5196e | ||
|
|
2df660a718 | ||
|
|
bb496a9c2c | ||
|
|
4e0737c0c2 | ||
|
|
4bb3cba5c8 | ||
|
|
08b9b0d140 | ||
|
|
3577a72a3b | ||
|
|
0328d6f498 | ||
|
|
d437305a4f | ||
|
|
c4899bcb20 | ||
|
|
4295764f8c | ||
|
|
e4e2430255 | ||
|
|
1732127a28 | ||
|
|
56bb8b6498 | ||
|
|
e93b6fa3a6 | ||
|
|
dd4ba0ea22 | ||
|
|
c2701c9ce5 | ||
|
|
2f019ce359 | ||
|
|
c5b147aeb7 | ||
|
|
5813d65e52 | ||
|
|
a393edfaa4 | ||
|
|
dd7a01cda5 | ||
|
|
00a3b91f95 | ||
|
|
61ba544282 | ||
|
|
b5b8c123e4 | ||
|
|
d9ceba959f | ||
|
|
6b5b040701 | ||
|
|
4f4c09a5f3 | ||
|
|
067bc97cce | ||
|
|
7368580cd6 | ||
|
|
df90db210c | ||
|
|
0927ed20a2 | ||
|
|
73b22f85be | ||
|
|
b8d77557b0 | ||
|
|
99b8fce8f3 | ||
|
|
16364f1b2d | ||
|
|
3b88e00cfb | ||
|
|
0c8c539e9b | ||
|
|
fd549fb986 | ||
|
|
babb775cfb | ||
|
|
eef9e470c9 | ||
|
|
3002c6318a | ||
|
|
6d0bceaebd | ||
|
|
aa51d6fde6 | ||
|
|
136479e218 | ||
|
|
19a2742354 | ||
|
|
45aac96dd3 | ||
|
|
6f21ae8939 | ||
|
|
add98f4eeb | ||
|
|
fe231f72b6 | ||
|
|
b308fde480 | ||
|
|
f3e14ff806 | ||
|
|
79ef9bdf1c | ||
|
|
a3e938aee9 | ||
|
|
b19a6155f4 | ||
|
|
801f7342b1 | ||
|
|
4829fa0f35 | ||
|
|
3671f4208e | ||
|
|
e8c51181ee | ||
|
|
3ccbb4d6fb | ||
|
|
93fe457e99 | ||
|
|
afac657aaa | ||
|
|
3e5c32860a | ||
|
|
d577bb38b6 | ||
|
|
418bc32b39 | ||
|
|
7148ea0596 | ||
|
|
87adb17df4 | ||
|
|
3fcee3762d | ||
|
|
1f014779e4 | ||
|
|
97879e73ef | ||
|
|
13d4cd3237 | ||
|
|
73e835885b | ||
|
|
2524c908fc | ||
|
|
0e71d81bb3 | ||
|
|
a47864888f | ||
|
|
9b61ac807c | ||
|
|
bc200dc555 | ||
|
|
2c18b84517 | ||
|
|
fe7b651c56 | ||
|
|
9b8f160788 | ||
|
|
801d5e2fc2 | ||
|
|
cecdd28e04 | ||
|
|
d364df1cd6 | ||
|
|
f51bc03686 | ||
|
|
c010d50716 | ||
|
|
acddb86f3a | ||
|
|
4fde0120ab | ||
|
|
592a354eef | ||
|
|
bd66cf3d8b | ||
|
|
e6e5174734 | ||
|
|
13ade82677 | ||
|
|
ce9eb8d20a | ||
|
|
dd47c0a284 | ||
|
|
f725ab1b31 | ||
|
|
7ce4192c52 | ||
|
|
c06aafb642 | ||
|
|
b298c5416c | ||
|
|
94abf302cb | ||
|
|
fcc5534e66 | ||
|
|
56c0e4d575 | ||
|
|
8a10db618e | ||
|
|
1fe66f0291 | ||
|
|
ced977c443 | ||
|
|
6c2ffbae52 | ||
|
|
be2f54fac9 | ||
|
|
87b5e56378 | ||
|
|
3a5764ed34 | ||
|
|
91aee50ea7 | ||
|
|
e5ccedf491 | ||
|
|
f620666a58 | ||
|
|
594c63e5d6 | ||
|
|
67d9051890 | ||
|
|
be96232127 | ||
|
|
3b5bc7a784 | ||
|
|
5e92f437a1 | ||
|
|
eabd9d312f | ||
|
|
0da6fe78ac | ||
|
|
be990380a0 | ||
|
|
9c0bc48420 | ||
|
|
5c0d34793e | ||
|
|
37fc550652 | ||
|
|
2c1d6ac212 | ||
|
|
8c699c1b26 | ||
|
|
c620fa9011 | ||
|
|
f16fd60211 | ||
|
|
9674e59d26 | ||
|
|
643c5e125a | ||
|
|
e5099e1daa | ||
|
|
3e621bbec1 | ||
|
|
bb1d5a61c0 | ||
|
|
fd3d0be2d8 | ||
|
|
ae623258f3 | ||
|
|
cda281f08b | ||
|
|
9f8e7a6efa | ||
|
|
57643dd2b6 | ||
|
|
6bc8a78cfe | ||
|
|
d2700e97fb | ||
|
|
c4dd81dc9a | ||
|
|
e9b06d7cde | ||
|
|
6e6ea69611 | ||
|
|
b082b5eb1b | ||
|
|
9648d78453 | ||
|
|
16c17eb077 | ||
|
|
2dc8718041 | ||
|
|
a330d6636e | ||
|
|
322c4be145 | ||
|
|
a3596ff60d | ||
|
|
e11d8132f8 | ||
|
|
59877dd728 | ||
|
|
5f7ffef238 | ||
|
|
41c10f5688 | ||
|
|
d7ac99f603 | ||
|
|
1616daae6a | ||
|
|
a1092d8f92 | ||
|
|
34ca9f138f | ||
|
|
df3f1aa3ca | ||
|
|
bf805cf477 | ||
|
|
ecb08e69be | ||
|
|
28c1e3f11b | ||
|
|
403667aec1 | ||
|
|
22f377e2fb | ||
|
|
37172906ef | ||
|
|
3b78e0538b | ||
|
|
d8f9ac71d0 | ||
|
|
aced272d3c | ||
|
|
aff77a086d | ||
|
|
49253c4dc6 | ||
|
|
1a00093015 | ||
|
|
64f76e7401 | ||
|
|
eb4c07997e | ||
|
|
99cf7205c3 | ||
|
|
d684b4cdb3 | ||
|
|
601a95c948 | ||
|
|
e18bef2e9c | ||
|
|
f654c1af31 | ||
|
|
e90048a671 | ||
|
|
ea624b1510 | ||
|
|
057e3dda3c | ||
|
|
4290821a50 | ||
|
|
280e14d7b7 | ||
|
|
9f0cf9fb2b | ||
|
|
b8560b7510 | ||
|
|
d841d13b04 | ||
|
|
efda9e5193 | ||
|
|
33d2e75aac | ||
|
|
74941170aa | ||
|
|
cd38949903 | ||
|
|
d87f1eb171 | ||
|
|
cd1e4e1ba7 | ||
|
|
cf5f348d70 | ||
|
|
0ee25f475e | ||
|
|
1fede6df7f | ||
|
|
22a65cd163 | ||
|
|
538b041ea3 | ||
|
|
d7b056576d | ||
|
|
cb0bb6ab4a | ||
|
|
bf955aaf12 | ||
|
|
61eb0da861 | ||
|
|
5da633d94d | ||
|
|
f3e4e26e2f | ||
|
|
af7734dd35 | ||
|
|
d5bab093f9 | ||
|
|
f94b167dc2 | ||
|
|
951d5ec758 | ||
|
|
016d8ee156 | ||
|
|
dca9ec4bae | ||
|
|
a06e43c96b | ||
|
|
29c6bfb6cb | ||
|
|
8d7ee975a0 | ||
|
|
4bafbb3562 | ||
|
|
7fdf0a8e51 | ||
|
|
2bb13b4677 | ||
|
|
9a5a509dd9 | ||
|
|
cbcb98ef6a | ||
|
|
bb864c6313 | ||
|
|
6d849eeb12 | ||
|
|
ef752838b0 | ||
|
|
73d4a1ff4b | ||
|
|
8c62f21aa6 | ||
|
|
c40ebfc21f | ||
|
|
c365ea9f57 | ||
|
|
12d66777cc | ||
|
|
9ac3d0d65d | ||
|
|
9fd212652e | ||
|
|
790a1cf12a | ||
|
|
3ecf2977a8 | ||
|
|
aeddf6b461 | ||
|
|
ce0d8b9dab | ||
|
|
3c00e7a143 | ||
|
|
ef1bfdd60f | ||
|
|
e48d92e82e | ||
|
|
110510997f | ||
|
|
b52695845e | ||
|
|
f30c9c6d3b | ||
|
|
ff5403eac6 | ||
|
|
f9226d92be | ||
|
|
a0ea5d0e9e | ||
|
|
ce6f11d200 | ||
|
|
10b3001dba | ||
|
|
e2de1d76ea | ||
|
|
77cc141a82 | ||
|
|
526b4d8ecd | ||
|
|
149db621ec | ||
|
|
2e1bb7311c | ||
|
|
dae65fd2c2 | ||
|
|
9aafb2ee47 | ||
|
|
6bc91bd02e | ||
|
|
8ef7344101 | ||
|
|
40da1b0afe | ||
|
|
c65def90f3 | ||
|
|
ddeaf76422 | ||
|
|
f23b66dec2 | ||
|
|
a26b294817 | ||
|
|
66018840da | ||
|
|
cea2144f34 | ||
|
|
7f5be93c1d | ||
|
|
85b838b302 | ||
|
|
27f97ba92a | ||
|
|
14269eba98 | ||
|
|
d5c9bc9f0a | ||
|
|
b0fed3edfc | ||
|
|
7296d054a2 | ||
|
|
d57c7d352d | ||
|
|
3fd2927ea3 | ||
|
|
b745074160 | ||
|
|
70ee810133 | ||
|
|
68fea9e79b | ||
|
|
f82bf91aa8 | ||
|
|
dde9edcc0c | ||
|
|
66c78e459e | ||
|
|
de54102303 | ||
|
|
7c7d2d8a84 | ||
|
|
834f989ed4 | ||
|
|
b658ee6e04 | ||
|
|
1a60280ea0 | ||
|
|
991cb7d272 | ||
|
|
463991cfb2 | ||
|
|
06f10b5fdc | ||
|
|
d275d012c6 | ||
|
|
c5d1ea3e21 | ||
|
|
0022b92404 | ||
|
|
ef61221241 | ||
|
|
5a1831db98 | ||
|
|
a643f8b0db | ||
|
|
601712fd0a | ||
|
|
e769f831c7 | ||
|
|
dcd952671f | ||
|
|
06564df038 | ||
|
|
2f037f30d5 | ||
|
|
efedab186d | ||
|
|
f49cae5116 | ||
|
|
2b620ccf2e | ||
|
|
a1b7a4da56 | ||
|
|
61b0e49fed | ||
|
|
f60dc371db | ||
|
|
0a3433b8ac | ||
|
|
31bce54abb | ||
|
|
5db1530717 | ||
|
|
c32929fd11 | ||
|
|
3e4c2b056c | ||
|
|
e79e9d7d23 | ||
|
|
d175b93072 | ||
|
|
ed254687d2 | ||
|
|
c0392f7074 | ||
|
|
f437712af7 | ||
|
|
6d1ea643e9 | ||
|
|
9e84cfcd46 | ||
|
|
897695d29f | ||
|
|
1dcc2873d2 | ||
|
|
42cf738a31 | ||
|
|
e4646789af | ||
|
|
e6c3aabd45 | ||
|
|
6789d1fab4 | ||
|
|
7a733f00a2 | ||
|
|
dd55888f0e | ||
|
|
0327df22eb | ||
|
|
e544f5e9d0 | ||
|
|
0fad4f44a4 | ||
|
|
1240dd6f26 | ||
|
|
d6be947177 | ||
|
|
3cfbdce9f2 | ||
|
|
1ee471ff57 | ||
|
|
25ccecf8e3 | ||
|
|
9e991bfa3e | ||
|
|
221efd0193 | ||
|
|
976b9bf65f | ||
|
|
ae5783e383 | ||
|
|
30224af042 | ||
|
|
8ff7c15cd8 | ||
|
|
f3205994ea | ||
|
|
ec8cc48a4d | ||
|
|
5d75c578b9 | ||
|
|
cd411c2eea | ||
|
|
bb2f276ba5 | ||
|
|
348e50c0c9 | ||
|
|
9d7fc31706 | ||
|
|
3108b4a426 | ||
|
|
3da12b5bf7 | ||
|
|
12710ff1fa | ||
|
|
e7df3a551d | ||
|
|
7947c968ad | ||
|
|
3dd15dee61 | ||
|
|
b4f0be329b | ||
|
|
e3f903d132 | ||
|
|
e18ab0afc0 | ||
|
|
2b61556acc | ||
|
|
51c075ec3c | ||
|
|
e22f1917b2 | ||
|
|
ed53442942 | ||
|
|
fad502a938 | ||
|
|
4c0c1034db | ||
|
|
1c029e1276 | ||
|
|
bcfc0f0f74 | ||
|
|
bc8dc7f102 | ||
|
|
a099f98f0e | ||
|
|
2887720999 | ||
|
|
cc0e0a90a6 | ||
|
|
9256bcf68e | ||
|
|
e6cc28b0f6 | ||
|
|
e8bed9ce85 | ||
|
|
582010e6a1 | ||
|
|
dd05f29d66 | ||
|
|
746a607652 | ||
|
|
b87592f43d | ||
|
|
b9ec396d08 | ||
|
|
293ad9052d | ||
|
|
e6f292c14b | ||
|
|
0bda5c54ed | ||
|
|
bc613c74af | ||
|
|
35c3c0f2c6 | ||
|
|
cd3f2860f8 | ||
|
|
2fa9aa233c | ||
|
|
1275f77986 | ||
|
|
f0f88f5f48 | ||
|
|
42eef1bea7 | ||
|
|
728eba04ec | ||
|
|
694f12c97d | ||
|
|
a075e9631d | ||
|
|
ee84c144dd | ||
|
|
fffb78e7af | ||
|
|
db16e85d8c | ||
|
|
72b412267d | ||
|
|
e2137b896e | ||
|
|
6d557b3c34 | ||
|
|
76e0452619 | ||
|
|
e62c0b30ae | ||
|
|
d29f524cec | ||
|
|
b7e08229fa | ||
|
|
e38e6e22f5 | ||
|
|
f05862c854 | ||
|
|
fc762cbf7f | ||
|
|
c376e46f4d | ||
|
|
8d528190a9 | ||
|
|
d2fa4c80eb | ||
|
|
212ca0c0b9 | ||
|
|
c32c585384 | ||
|
|
62a596ef30 | ||
|
|
7d8338ce70 | ||
|
|
c46a8d27e6 | ||
|
|
d8540d42a6 | ||
|
|
f30bee2409 | ||
|
|
c7841fd998 | ||
|
|
254fac0045 | ||
|
|
5159a1e7a1 | ||
|
|
e2d75f1b62 | ||
|
|
4f77c27d6d | ||
|
|
e7080e671d | ||
|
|
b0c2e2d92b | ||
|
|
77a2d62ef6 | ||
|
|
c43e22bc41 | ||
|
|
be6b42324d | ||
|
|
3951159d55 | ||
|
|
6c448b9a60 | ||
|
|
43e64782dc | ||
|
|
5f79fed566 | ||
|
|
f2a55dc769 | ||
|
|
3f31fb9990 | ||
|
|
d795dc1a81 | ||
|
|
f90ec93dfc | ||
|
|
6d267947bb | ||
|
|
595e5cceae | ||
|
|
2291a67cf8 | ||
|
|
c0e57e0e39 | ||
|
|
dcd5f7996e | ||
|
|
303e4dd617 | ||
|
|
d52c0c4783 | ||
|
|
e4de1549a3 | ||
|
|
986653b43e | ||
|
|
08e184ea55 | ||
|
|
fdb9650cca | ||
|
|
dadbb71147 | ||
|
|
18a59598ea | ||
|
|
57297605e2 | ||
|
|
1134ec2df5 | ||
|
|
f54872007f | ||
|
|
24a832608c | ||
|
|
2fa52f71e7 | ||
|
|
00e7fbd7fa | ||
|
|
397dc2d0dc | ||
|
|
98269e8708 | ||
|
|
1bb45d4998 | ||
|
|
8f9c5c5039 | ||
|
|
88ac4cf0a7 | ||
|
|
624d203bbc | ||
|
|
84fc8647f7 | ||
|
|
a554b7f0e4 | ||
|
|
777850200d | ||
|
|
3f251e4571 | ||
|
|
2dd65af9f0 | ||
|
|
f8209e51f5 | ||
|
|
111a65e9e8 | ||
|
|
c0ed2131f0 | ||
|
|
10882b677d | ||
|
|
aed1b20ada | ||
|
|
68bdec12c0 | ||
|
|
1404811845 | ||
|
|
e92ae1eb2c | ||
|
|
0d0890cb92 | ||
|
|
a76f275691 | ||
|
|
cfcd45b8b9 | ||
|
|
9c72a6f6e9 | ||
|
|
da4e483d80 | ||
|
|
41f801129a | ||
|
|
caf7bf2b9a | ||
|
|
986e6461ed | ||
|
|
29d027087b | ||
|
|
7a687347e1 | ||
|
|
5b9a1e9531 | ||
|
|
b1154b368c | ||
|
|
4f0cd42117 | ||
|
|
f5ccc8bdc6 | ||
|
|
62d5775b79 | ||
|
|
00eb17b2e7 | ||
|
|
3c5df9c02e | ||
|
|
1626fbd9d6 | ||
|
|
36ff2092d7 | ||
|
|
3cf9c88891 | ||
|
|
78045001f2 | ||
|
|
5c57816230 | ||
|
|
fa395aac6e | ||
|
|
8dded0c435 | ||
|
|
933a865b10 | ||
|
|
6b8b14b11e | ||
|
|
5102ec8263 | ||
|
|
c1e4db243d | ||
|
|
4b9078a9dc | ||
|
|
62d14cfa3f | ||
|
|
bd6ec158d4 | ||
|
|
d2f04e2dd2 | ||
|
|
b47054c479 | ||
|
|
15c40bdaff | ||
|
|
44a71fdbf1 | ||
|
|
996a0486af | ||
|
|
a15eb56ee8 | ||
|
|
daef87da41 | ||
|
|
0b4d68fbee | ||
|
|
9f3d67e7bd | ||
|
|
47866ebe0e | ||
|
|
48a352bfd1 | ||
|
|
01ce265d77 | ||
|
|
478f3a737c | ||
|
|
b49ea55e24 | ||
|
|
7608c6c7ab | ||
|
|
ba6d91c5cc | ||
|
|
5de85153ba | ||
|
|
59a4bca053 | ||
|
|
1034769c78 | ||
|
|
947f50b516 | ||
|
|
1434a28fa8 | ||
|
|
78757411ca | ||
|
|
9b8e7e933b | ||
|
|
6da3289830 | ||
|
|
f6da72c9eb | ||
|
|
c17882af8a | ||
|
|
9f7cf7c4d8 | ||
|
|
97de15dfbe | ||
|
|
93801ff772 | ||
|
|
13f99fcab0 | ||
|
|
30d16989b7 | ||
|
|
1a796a5ade | ||
|
|
b7d3ed7135 | ||
|
|
30de8f1358 | ||
|
|
5a1bbb3874 | ||
|
|
3d3e54f0d1 | ||
|
|
bf75b29314 | ||
|
|
79cd98fc24 | ||
|
|
4b4836099d | ||
|
|
b25d3e274a | ||
|
|
a96bf9af2f | ||
|
|
a69ef7f8c5 | ||
|
|
896077009a | ||
|
|
988c5c24da | ||
|
|
8865b232ca | ||
|
|
815d949e12 | ||
|
|
33cd7068fb | ||
|
|
96aceedd25 | ||
|
|
c2d8bfd8c7 | ||
|
|
d85f9ee41b | ||
|
|
e5e3e0aa43 | ||
|
|
f187a23dc1 | ||
|
|
601c36e607 | ||
|
|
15b7cd6193 | ||
|
|
9d3b01af75 | ||
|
|
61ad51cf15 | ||
|
|
920dccd076 | ||
|
|
8fd21feb75 | ||
|
|
c960b34fac | ||
|
|
9ad00c78ba | ||
|
|
4c3eeee00d | ||
|
|
a6393d4d05 | ||
|
|
92f3c078b5 | ||
|
|
c53320182a | ||
|
|
1788cb4a89 | ||
|
|
6a268e17cd | ||
|
|
dbd8a80970 | ||
|
|
6c17f3e9c8 | ||
|
|
730940b60d | ||
|
|
71ba23b24a | ||
|
|
c12ac066b6 | ||
|
|
b6119ed827 | ||
|
|
a219512045 | ||
|
|
dfa31a8c16 | ||
|
|
984c7e9e12 | ||
|
|
86b654d6be | ||
|
|
8c16cda3e8 | ||
|
|
c295bb4f04 | ||
|
|
8720f79310 | ||
|
|
24bb174b63 | ||
|
|
bb788b9259 | ||
|
|
69540d07c5 | ||
|
|
34b767d1fd | ||
|
|
abd81cc215 | ||
|
|
1eb0174dff | ||
|
|
c23db4b4f9 | ||
|
|
6538c58b8e | ||
|
|
e35eb9048e | ||
|
|
a0fa64de47 | ||
|
|
e04946c816 | ||
|
|
231c9c2e57 | ||
|
|
48555f570c | ||
|
|
7c9195ddd2 | ||
|
|
5500fbe682 | ||
|
|
5a83b3b096 | ||
|
|
4783fd6f37 | ||
|
|
9a4b56277c | ||
|
|
5eea959103 | ||
|
|
856df8fb62 | ||
|
|
8e59412c47 | ||
|
|
8f571ff68f | ||
|
|
b6d2766e59 | ||
|
|
73ce471a0e | ||
|
|
4e113139c8 | ||
|
|
e4c4b28ddf | ||
|
|
081acc6404 | ||
|
|
1a999497d7 | ||
|
|
6137963355 | ||
|
|
22bffdb737 | ||
|
|
75adcbffeb | ||
|
|
4451770061 | ||
|
|
09c413a272 | ||
|
|
ddb6c90a8f | ||
|
|
71590426f9 | ||
|
|
b3e5cdb3a5 | ||
|
|
6595ab813e | ||
|
|
d1efbd26da | ||
|
|
f04683732e | ||
|
|
cb0241db78 | ||
|
|
a097b6cd03 | ||
|
|
487ffe7888 | ||
|
|
51424a7d08 | ||
|
|
06e8e8f9a6 | ||
|
|
0512b311f8 | ||
|
|
81d53d0726 | ||
|
|
a141c5ccdc | ||
|
|
e361d741c3 | ||
|
|
f5bc58dbde | ||
|
|
e7b73f3041 | ||
|
|
ed8db8c8ae | ||
|
|
df97213d3b | ||
|
|
97443d1f83 | ||
|
|
59bed52faf | ||
|
|
3814c3a915 | ||
|
|
d98d0a291e | ||
|
|
ee94fa6dc4 | ||
|
|
d2e46f6684 | ||
|
|
5948dcacd5 | ||
|
|
3041858e7f | ||
|
|
9c2a6bc413 | ||
|
|
1cf8b6c6c8 | ||
|
|
781ef4487c | ||
|
|
4a494354b1 | ||
|
|
385c775aa5 | ||
|
|
518385dea2 | ||
|
|
4d1eea7bd5 | ||
|
|
9cb51ccc70 | ||
|
|
94dc398163 | ||
|
|
65317e33af | ||
|
|
06fbdf43af | ||
|
|
ab61418410 | ||
|
|
0785ff2aed | ||
|
|
676fe40d39 | ||
|
|
0b89673ee9 | ||
|
|
2f4e050612 | ||
|
|
87d963bda5 | ||
|
|
07807e4653 | ||
|
|
2b96217f2b | ||
|
|
13342c2988 | ||
|
|
95f8b2824a | ||
|
|
4065d6e234 | ||
|
|
d3dcd432e8 | ||
|
|
7d14de79bf | ||
|
|
15c6b52b5f | ||
|
|
c0f1b5bc8e | ||
|
|
bd62c6be68 | ||
|
|
70bd21f09a | ||
|
|
a0f15f1512 | ||
|
|
4575046ce1 | ||
|
|
33ea7391b5 | ||
|
|
e90eee2d8e | ||
|
|
7d44210a48 | ||
|
|
206f4138b6 | ||
|
|
6d2807f499 | ||
|
|
f1234937c6 | ||
|
|
7beea951c6 | ||
|
|
6f7e8076c7 | ||
|
|
ae24fab441 | ||
|
|
880be21bf7 | ||
|
|
559b3cd6bb | ||
|
|
9d9df8aa57 | ||
|
|
64548d33a9 | ||
|
|
c3cafd8d6f | ||
|
|
e9a6efef7f | ||
|
|
89a75e26c3 | ||
|
|
1139d395f2 | ||
|
|
e20070939c | ||
|
|
3236fcca21 | ||
|
|
5353eba376 | ||
|
|
7339b06acb | ||
|
|
ce1fc3a999 | ||
|
|
a9a489231a | ||
|
|
e889590a91 | ||
|
|
9481405f6f | ||
|
|
7317d79a3c | ||
|
|
de0ed4a6f5 | ||
|
|
0ff838443e | ||
|
|
cfbfb68618 | ||
|
|
9945d5048a | ||
|
|
f0ff1f2c64 | ||
|
|
7dd73e1330 | ||
|
|
4cfbacdb26 | ||
|
|
26af2b1bb4 | ||
|
|
20bec70160 | ||
|
|
9b5f088793 | ||
|
|
3a561a70db | ||
|
|
11e33ec657 | ||
|
|
d1926725d3 | ||
|
|
2f9a4e1618 |
25
.github/ISSUE_TEMPLATE/bug_report.md
vendored
25
.github/ISSUE_TEMPLATE/bug_report.md
vendored
@@ -1,25 +0,0 @@
|
||||
---
|
||||
name: Bug report
|
||||
about: Create a report to help us improve
|
||||
title: ''
|
||||
labels: ''
|
||||
assignees: ''
|
||||
|
||||
---
|
||||
|
||||
- **(1) Describe the bug 简述**
|
||||
|
||||
|
||||
- **(2) Screen Shot 截图**
|
||||
|
||||
|
||||
- **(3) Terminal Traceback 终端traceback(如有)**
|
||||
|
||||
|
||||
- **(4) Material to Help Reproduce Bugs 帮助我们复现的测试材料样本(如有)**
|
||||
|
||||
|
||||
|
||||
Before submitting an issue 提交issue之前:
|
||||
- Please try to upgrade your code. 如果您的代码不是最新的,建议您先尝试更新代码
|
||||
- Please check project wiki for common problem solutions.项目[wiki](https://github.com/binary-husky/chatgpt_academic/wiki)有一些常见问题的解决方法
|
||||
77
.github/ISSUE_TEMPLATE/bug_report.yml
vendored
Normal file
77
.github/ISSUE_TEMPLATE/bug_report.yml
vendored
Normal file
@@ -0,0 +1,77 @@
|
||||
name: Report Bug | 报告BUG
|
||||
description: "Report bug"
|
||||
title: "[Bug]: "
|
||||
labels: []
|
||||
body:
|
||||
- type: dropdown
|
||||
id: download
|
||||
attributes:
|
||||
label: Installation Method | 安装方法与平台
|
||||
options:
|
||||
- Please choose | 请选择
|
||||
- Pip Install (I ignored requirements.txt)
|
||||
- Pip Install (I used latest requirements.txt)
|
||||
- OneKeyInstall (一键安装脚本-windows)
|
||||
- OneKeyInstall (一键安装脚本-mac)
|
||||
- Anaconda (I ignored requirements.txt)
|
||||
- Anaconda (I used latest requirements.txt)
|
||||
- Docker(Windows/Mac)
|
||||
- Docker(Linux)
|
||||
- Docker-Compose(Windows/Mac)
|
||||
- Docker-Compose(Linux)
|
||||
- Huggingface
|
||||
- Others (Please Describe)
|
||||
validations:
|
||||
required: true
|
||||
|
||||
- type: dropdown
|
||||
id: version
|
||||
attributes:
|
||||
label: Version | 版本
|
||||
options:
|
||||
- Please choose | 请选择
|
||||
- Latest | 最新版
|
||||
- Others | 非最新版
|
||||
validations:
|
||||
required: true
|
||||
|
||||
- type: dropdown
|
||||
id: os
|
||||
attributes:
|
||||
label: OS | 操作系统
|
||||
options:
|
||||
- Please choose | 请选择
|
||||
- Windows
|
||||
- Mac
|
||||
- Linux
|
||||
- Docker
|
||||
validations:
|
||||
required: true
|
||||
|
||||
- type: textarea
|
||||
id: describe
|
||||
attributes:
|
||||
label: Describe the bug | 简述
|
||||
description: Describe the bug | 简述
|
||||
validations:
|
||||
required: true
|
||||
|
||||
- type: textarea
|
||||
id: screenshot
|
||||
attributes:
|
||||
label: Screen Shot | 有帮助的截图
|
||||
description: Screen Shot | 有帮助的截图
|
||||
validations:
|
||||
required: true
|
||||
|
||||
- type: textarea
|
||||
id: traceback
|
||||
attributes:
|
||||
label: Terminal Traceback & Material to Help Reproduce Bugs | 终端traceback(如有) + 帮助我们复现的测试材料样本(如有)
|
||||
description: Terminal Traceback & Material to Help Reproduce Bugs | 终端traceback(如有) + 帮助我们复现的测试材料样本(如有)
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
10
.github/ISSUE_TEMPLATE/feature_request.md
vendored
10
.github/ISSUE_TEMPLATE/feature_request.md
vendored
@@ -1,10 +0,0 @@
|
||||
---
|
||||
name: Feature request
|
||||
about: Suggest an idea for this project
|
||||
title: ''
|
||||
labels: ''
|
||||
assignees: ''
|
||||
|
||||
---
|
||||
|
||||
|
||||
28
.github/ISSUE_TEMPLATE/feature_request.yml
vendored
Normal file
28
.github/ISSUE_TEMPLATE/feature_request.yml
vendored
Normal file
@@ -0,0 +1,28 @@
|
||||
name: Feature Request | 功能请求
|
||||
description: "Feature Request"
|
||||
title: "[Feature]: "
|
||||
labels: []
|
||||
body:
|
||||
- type: dropdown
|
||||
id: download
|
||||
attributes:
|
||||
label: Class | 类型
|
||||
options:
|
||||
- Please choose | 请选择
|
||||
- 其他
|
||||
- 函数插件
|
||||
- 大语言模型
|
||||
- 程序主体
|
||||
validations:
|
||||
required: false
|
||||
|
||||
- type: textarea
|
||||
id: traceback
|
||||
attributes:
|
||||
label: Feature Request | 功能请求
|
||||
description: Feature Request | 功能请求
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
44
.github/workflows/build-with-audio-assistant.yml
vendored
Normal file
44
.github/workflows/build-with-audio-assistant.yml
vendored
Normal file
@@ -0,0 +1,44 @@
|
||||
# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
|
||||
name: build-with-audio-assistant
|
||||
|
||||
on:
|
||||
push:
|
||||
branches:
|
||||
- 'master'
|
||||
|
||||
env:
|
||||
REGISTRY: ghcr.io
|
||||
IMAGE_NAME: ${{ github.repository }}_audio_assistant
|
||||
|
||||
jobs:
|
||||
build-and-push-image:
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
contents: read
|
||||
packages: write
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v3
|
||||
|
||||
- name: Log in to the Container registry
|
||||
uses: docker/login-action@v2
|
||||
with:
|
||||
registry: ${{ env.REGISTRY }}
|
||||
username: ${{ github.actor }}
|
||||
password: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: Extract metadata (tags, labels) for Docker
|
||||
id: meta
|
||||
uses: docker/metadata-action@v4
|
||||
with:
|
||||
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
|
||||
|
||||
- name: Build and push Docker image
|
||||
uses: docker/build-push-action@v4
|
||||
with:
|
||||
context: .
|
||||
push: true
|
||||
file: docs/GithubAction+NoLocal+AudioAssistant
|
||||
tags: ${{ steps.meta.outputs.tags }}
|
||||
labels: ${{ steps.meta.outputs.labels }}
|
||||
44
.github/workflows/build-with-chatglm.yml
vendored
Normal file
44
.github/workflows/build-with-chatglm.yml
vendored
Normal file
@@ -0,0 +1,44 @@
|
||||
# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
|
||||
name: build-with-chatglm
|
||||
|
||||
on:
|
||||
push:
|
||||
branches:
|
||||
- 'master'
|
||||
|
||||
env:
|
||||
REGISTRY: ghcr.io
|
||||
IMAGE_NAME: ${{ github.repository }}_chatglm_moss
|
||||
|
||||
jobs:
|
||||
build-and-push-image:
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
contents: read
|
||||
packages: write
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v3
|
||||
|
||||
- name: Log in to the Container registry
|
||||
uses: docker/login-action@v2
|
||||
with:
|
||||
registry: ${{ env.REGISTRY }}
|
||||
username: ${{ github.actor }}
|
||||
password: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: Extract metadata (tags, labels) for Docker
|
||||
id: meta
|
||||
uses: docker/metadata-action@v4
|
||||
with:
|
||||
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
|
||||
|
||||
- name: Build and push Docker image
|
||||
uses: docker/build-push-action@v4
|
||||
with:
|
||||
context: .
|
||||
push: true
|
||||
file: docs/GithubAction+ChatGLM+Moss
|
||||
tags: ${{ steps.meta.outputs.tags }}
|
||||
labels: ${{ steps.meta.outputs.labels }}
|
||||
44
.github/workflows/build-with-jittorllms.yml
vendored
Normal file
44
.github/workflows/build-with-jittorllms.yml
vendored
Normal file
@@ -0,0 +1,44 @@
|
||||
# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
|
||||
name: build-with-jittorllms
|
||||
|
||||
on:
|
||||
push:
|
||||
branches:
|
||||
- 'master'
|
||||
|
||||
env:
|
||||
REGISTRY: ghcr.io
|
||||
IMAGE_NAME: ${{ github.repository }}_jittorllms
|
||||
|
||||
jobs:
|
||||
build-and-push-image:
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
contents: read
|
||||
packages: write
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v3
|
||||
|
||||
- name: Log in to the Container registry
|
||||
uses: docker/login-action@v2
|
||||
with:
|
||||
registry: ${{ env.REGISTRY }}
|
||||
username: ${{ github.actor }}
|
||||
password: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: Extract metadata (tags, labels) for Docker
|
||||
id: meta
|
||||
uses: docker/metadata-action@v4
|
||||
with:
|
||||
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
|
||||
|
||||
- name: Build and push Docker image
|
||||
uses: docker/build-push-action@v4
|
||||
with:
|
||||
context: .
|
||||
push: true
|
||||
file: docs/GithubAction+JittorLLMs
|
||||
tags: ${{ steps.meta.outputs.tags }}
|
||||
labels: ${{ steps.meta.outputs.labels }}
|
||||
44
.github/workflows/build-with-latex.yml
vendored
Normal file
44
.github/workflows/build-with-latex.yml
vendored
Normal file
@@ -0,0 +1,44 @@
|
||||
# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
|
||||
name: build-with-latex
|
||||
|
||||
on:
|
||||
push:
|
||||
branches:
|
||||
- 'master'
|
||||
|
||||
env:
|
||||
REGISTRY: ghcr.io
|
||||
IMAGE_NAME: ${{ github.repository }}_with_latex
|
||||
|
||||
jobs:
|
||||
build-and-push-image:
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
contents: read
|
||||
packages: write
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v3
|
||||
|
||||
- name: Log in to the Container registry
|
||||
uses: docker/login-action@v2
|
||||
with:
|
||||
registry: ${{ env.REGISTRY }}
|
||||
username: ${{ github.actor }}
|
||||
password: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: Extract metadata (tags, labels) for Docker
|
||||
id: meta
|
||||
uses: docker/metadata-action@v4
|
||||
with:
|
||||
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
|
||||
|
||||
- name: Build and push Docker image
|
||||
uses: docker/build-push-action@v4
|
||||
with:
|
||||
context: .
|
||||
push: true
|
||||
file: docs/GithubAction+NoLocal+Latex
|
||||
tags: ${{ steps.meta.outputs.tags }}
|
||||
labels: ${{ steps.meta.outputs.labels }}
|
||||
44
.github/workflows/build-without-local-llms.yml
vendored
Normal file
44
.github/workflows/build-without-local-llms.yml
vendored
Normal file
@@ -0,0 +1,44 @@
|
||||
# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
|
||||
name: build-without-local-llms
|
||||
|
||||
on:
|
||||
push:
|
||||
branches:
|
||||
- 'master'
|
||||
|
||||
env:
|
||||
REGISTRY: ghcr.io
|
||||
IMAGE_NAME: ${{ github.repository }}_nolocal
|
||||
|
||||
jobs:
|
||||
build-and-push-image:
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
contents: read
|
||||
packages: write
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v3
|
||||
|
||||
- name: Log in to the Container registry
|
||||
uses: docker/login-action@v2
|
||||
with:
|
||||
registry: ${{ env.REGISTRY }}
|
||||
username: ${{ github.actor }}
|
||||
password: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: Extract metadata (tags, labels) for Docker
|
||||
id: meta
|
||||
uses: docker/metadata-action@v4
|
||||
with:
|
||||
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
|
||||
|
||||
- name: Build and push Docker image
|
||||
uses: docker/build-push-action@v4
|
||||
with:
|
||||
context: .
|
||||
push: true
|
||||
file: docs/GithubAction+NoLocal
|
||||
tags: ${{ steps.meta.outputs.tags }}
|
||||
labels: ${{ steps.meta.outputs.labels }}
|
||||
25
.github/workflows/stale.yml
vendored
Normal file
25
.github/workflows/stale.yml
vendored
Normal file
@@ -0,0 +1,25 @@
|
||||
# This workflow warns and then closes issues and PRs that have had no activity for a specified amount of time.
|
||||
#
|
||||
# You can adjust the behavior by modifying this file.
|
||||
# For more information, see:
|
||||
# https://github.com/actions/stale
|
||||
|
||||
name: 'Close stale issues and PRs'
|
||||
on:
|
||||
schedule:
|
||||
- cron: '*/5 * * * *'
|
||||
|
||||
jobs:
|
||||
stale:
|
||||
runs-on: ubuntu-latest
|
||||
permissions:
|
||||
issues: write
|
||||
pull-requests: read
|
||||
|
||||
steps:
|
||||
- uses: actions/stale@v8
|
||||
with:
|
||||
stale-issue-message: 'This issue is stale because it has been open 100 days with no activity. Remove stale label or comment or this will be closed in 1 days.'
|
||||
days-before-stale: 100
|
||||
days-before-close: 1
|
||||
debug-only: true
|
||||
7
.gitignore
vendored
7
.gitignore
vendored
@@ -145,3 +145,10 @@ cradle*
|
||||
debug*
|
||||
private*
|
||||
crazy_functions/test_project/pdf_and_word
|
||||
crazy_functions/test_samples
|
||||
request_llm/jittorllms
|
||||
multi-language
|
||||
request_llm/moss
|
||||
media
|
||||
flagged
|
||||
request_llm/ChatGLM-6b-onnx-u8s8
|
||||
|
||||
26
Dockerfile
26
Dockerfile
@@ -1,20 +1,34 @@
|
||||
# 此Dockerfile适用于“无本地模型”的环境构建,如果需要使用chatglm等本地模型,请参考 docs/Dockerfile+ChatGLM
|
||||
# 如何构建: 先修改 `config.py`, 然后 docker build -t gpt-academic .
|
||||
# 如何运行: docker run --rm -it --net=host gpt-academic
|
||||
# 此Dockerfile适用于“无本地模型”的环境构建,如果需要使用chatglm等本地模型或者latex运行依赖,请参考 docker-compose.yml
|
||||
# 如何构建: 先修改 `config.py`, 然后 `docker build -t gpt-academic . `
|
||||
# 如何运行(Linux下): `docker run --rm -it --net=host gpt-academic `
|
||||
# 如何运行(其他操作系统,选择任意一个固定端口50923): `docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic `
|
||||
FROM python:3.11
|
||||
|
||||
|
||||
# 非必要步骤,更换pip源
|
||||
RUN echo '[global]' > /etc/pip.conf && \
|
||||
echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \
|
||||
echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf
|
||||
|
||||
|
||||
# 进入工作路径
|
||||
WORKDIR /gpt
|
||||
COPY requirements.txt .
|
||||
|
||||
|
||||
# 安装大部分依赖,利用Docker缓存加速以后的构建
|
||||
COPY requirements.txt ./
|
||||
COPY ./docs/gradio-3.32.2-py3-none-any.whl ./docs/gradio-3.32.2-py3-none-any.whl
|
||||
RUN pip3 install -r requirements.txt
|
||||
|
||||
COPY . .
|
||||
|
||||
# 可选步骤,用于预热模块
|
||||
# 装载项目文件,安装剩余依赖
|
||||
COPY . .
|
||||
RUN pip3 install -r requirements.txt
|
||||
|
||||
|
||||
# 非必要步骤,用于预热模块
|
||||
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
|
||||
|
||||
|
||||
# 启动
|
||||
CMD ["python3", "-u", "main.py"]
|
||||
|
||||
356
README.md
356
README.md
@@ -1,46 +1,61 @@
|
||||
> **Note**
|
||||
>
|
||||
> 2023.7.8: Gradio, Pydantic依赖调整,已修改 `requirements.txt`。请及时**更新代码**,安装依赖时,请严格选择`requirements.txt`中**指定的版本**
|
||||
>
|
||||
> `pip install -r requirements.txt`
|
||||
|
||||
|
||||
# <img src="docs/logo.png" width="40" > ChatGPT 学术优化
|
||||
# <div align=center><img src="docs/logo.png" width="40"> GPT 学术优化 (GPT Academic)</div>
|
||||
|
||||
**如果喜欢这个项目,请给它一个Star;如果你发明了更好用的快捷键或函数插件,欢迎发issue或者pull requests**
|
||||
**如果喜欢这个项目,请给它一个Star;如果您发明了好用的快捷键或函数插件,欢迎发pull requests!**
|
||||
|
||||
If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a README in [English|](docs/README_EN.md)[日本語|](docs/README_JP.md)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md) translated by this project itself.
|
||||
If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a README in [English|](docs/README_EN.md)[日本語|](docs/README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md) translated by this project itself.
|
||||
To translate this project to arbitary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
|
||||
|
||||
> **Note**
|
||||
>
|
||||
> 1.请注意只有**红颜色**标识的函数插件(按钮)才支持读取文件,部分插件位于插件区的**下拉菜单**中。另外我们以**最高优先级**欢迎和处理任何新插件的PR!
|
||||
> 1.请注意只有 **高亮** 标识的函数插件(按钮)才支持读取文件,部分插件位于插件区的**下拉菜单**中。另外我们以**最高优先级**欢迎和处理任何新插件的PR。
|
||||
>
|
||||
> 2.本项目中每个文件的功能都在自译解[`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)详细说明。随着版本的迭代,您也可以随时自行点击相关函数插件,调用GPT重新生成项目的自我解析报告。常见问题汇总在[`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)当中。
|
||||
> 2.本项目中每个文件的功能都在[自译解报告`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPT‐Academic项目自译解报告)详细说明。随着版本的迭代,您也可以随时自行点击相关函数插件,调用GPT重新生成项目的自我解析报告。常见问题[`wiki`](https://github.com/binary-husky/gpt_academic/wiki)。[安装方法](#installation) | [配置说明](https://github.com/binary-husky/gpt_academic/wiki/%E9%A1%B9%E7%9B%AE%E9%85%8D%E7%BD%AE%E8%AF%B4%E6%98%8E)。
|
||||
>
|
||||
> 3.已支持OpenAI和API2D的api-key共存,可在配置文件中填写如`API_KEY="openai-key1,openai-key2,api2d-key3"`。需要临时更换`API_KEY`时,在输入区输入临时的`API_KEY`然后回车键提交后即可生效。
|
||||
> 3.本项目兼容并鼓励尝试国产大语言模型ChatGLM和Moss等等。支持多个api-key共存,可在配置文件中填写如`API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`。需要临时更换`API_KEY`时,在输入区输入临时的`API_KEY`然后回车键提交后即可生效。
|
||||
|
||||
|
||||
|
||||
|
||||
<div align="center">
|
||||
|
||||
功能 | 描述
|
||||
功能(⭐= 近期新增功能) | 描述
|
||||
--- | ---
|
||||
⭐[接入新模型](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B)! | 百度[千帆](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu)与文心一言, [通义千问](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary),上海AI-Lab[书生](https://github.com/InternLM/InternLM),讯飞[星火](https://xinghuo.xfyun.cn/),[LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
|
||||
一键润色 | 支持一键润色、一键查找论文语法错误
|
||||
一键中英互译 | 一键中英互译
|
||||
一键代码解释 | 可以正确显示代码、解释代码
|
||||
一键代码解释 | 显示代码、解释代码、生成代码、给代码加注释
|
||||
[自定义快捷键](https://www.bilibili.com/video/BV14s4y1E7jN) | 支持自定义快捷键
|
||||
[配置代理服务器](https://www.bilibili.com/video/BV1rc411W7Dr) | 支持配置代理服务器
|
||||
模块化设计 | 支持自定义高阶的函数插件与[函数插件],插件支持[热更新](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
|
||||
[自我程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [函数插件] [一键读懂](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)本项目的源代码
|
||||
模块化设计 | 支持自定义强大的[函数插件](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions),插件支持[热更新](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
|
||||
[自我程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [函数插件] [一键读懂](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)本项目的源代码
|
||||
[程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [函数插件] 一键可以剖析其他Python/C/C++/Java/Lua/...项目树
|
||||
读论文 | [函数插件] 一键解读latex论文全文并生成摘要
|
||||
Latex全文翻译、润色 | [函数插件] 一键翻译或润色latex论文
|
||||
读论文、[翻译](https://www.bilibili.com/video/BV1KT411x7Wn)论文 | [函数插件] 一键解读latex/pdf论文全文并生成摘要
|
||||
Latex全文[翻译](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[润色](https://www.bilibili.com/video/BV1FT411H7c5/) | [函数插件] 一键翻译或润色latex论文
|
||||
批量注释生成 | [函数插件] 一键批量生成函数注释
|
||||
Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [函数插件] 看到上面5种语言的[README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)了吗?
|
||||
chat分析报告生成 | [函数插件] 运行后自动生成总结汇报
|
||||
Markdown中英互译 | [函数插件] 看到上面5种语言的[README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md)了吗?
|
||||
[arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [函数插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
|
||||
[PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [函数插件] PDF论文提取题目&摘要+翻译全文(多线程)
|
||||
[谷歌学术统合小助手](https://www.bilibili.com/video/BV19L411U7ia) | [函数插件] 给定任意谷歌学术搜索页面URL,让gpt帮你选择有趣的文章
|
||||
公式/图片/表格显示 | 可以同时显示公式的tex形式和渲染形式,支持公式、代码高亮
|
||||
多线程函数插件支持 | 支持多线调用chatgpt,一键处理海量文本或程序
|
||||
启动暗色gradio[主题](https://github.com/binary-husky/chatgpt_academic/issues/173) | 在浏览器url后面添加```/?__dark-theme=true```可以切换dark主题
|
||||
[多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持,[API2D](https://api2d.com/)接口支持 | 同时被GPT3.5、GPT4和[清华ChatGLM](https://github.com/THUDM/ChatGLM-6B)伺候的感觉一定会很不错吧?
|
||||
huggingface免科学上网[在线体验](https://huggingface.co/spaces/qingxu98/gpt-academic) | 登陆huggingface后复制[此空间](https://huggingface.co/spaces/qingxu98/gpt-academic)
|
||||
…… | ……
|
||||
|
||||
[Arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [函数插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
|
||||
Latex论文一键校对 | [函数插件] 仿Grammarly对Latex文章进行语法、拼写纠错+输出对照PDF
|
||||
[谷歌学术统合小助手](https://www.bilibili.com/video/BV19L411U7ia) | [函数插件] 给定任意谷歌学术搜索页面URL,让gpt帮你[写relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/)
|
||||
互联网信息聚合+GPT | [函数插件] 一键[让GPT从互联网获取信息](https://www.bilibili.com/video/BV1om4y127ck)回答问题,让信息永不过时
|
||||
⭐Arxiv论文精细翻译 ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [函数插件] 一键[以超高质量翻译arxiv论文](https://www.bilibili.com/video/BV1dz4y1v77A/),目前最好的论文翻译工具
|
||||
⭐[实时语音对话输入](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [函数插件] 异步[监听音频](https://www.bilibili.com/video/BV1AV4y187Uy/),自动断句,自动寻找回答时机
|
||||
公式/图片/表格显示 | 可以同时显示公式的[tex形式和渲染形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png),支持公式、代码高亮
|
||||
多线程函数插件支持 | 支持多线调用chatgpt,一键处理[海量文本](https://www.bilibili.com/video/BV1FT411H7c5/)或程序
|
||||
启动暗色[主题](https://github.com/binary-husky/gpt_academic/issues/173) | 在浏览器url后面添加```/?__theme=dark```可以切换dark主题
|
||||
[多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持 | 同时被GPT3.5、GPT4、[清华ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)、[复旦MOSS](https://github.com/OpenLMLab/MOSS)同时伺候的感觉一定会很不错吧?
|
||||
⭐ChatGLM2微调模型 | 支持加载ChatGLM2微调模型,提供ChatGLM2微调辅助插件
|
||||
更多LLM模型接入,支持[huggingface部署](https://huggingface.co/spaces/qingxu98/gpt-academic) | 加入Newbing接口(新必应),引入清华[Jittorllms](https://github.com/Jittor/JittorLLMs)支持[LLaMA](https://github.com/facebookresearch/llama)和[盘古α](https://openi.org.cn/pangu/)
|
||||
⭐[void-terminal](https://github.com/binary-husky/void-terminal) pip包 | 脱离GUI,在Python中直接调用本项目的所有函数插件(开发中)
|
||||
⭐虚空终端插件 | 用自然语言,直接调度本项目其他插件
|
||||
更多新功能展示 (图像生成等) …… | 见本文档结尾处 ……
|
||||
</div>
|
||||
|
||||
|
||||
@@ -75,118 +90,128 @@ huggingface免科学上网[在线体验](https://huggingface.co/spaces/qingxu98/
|
||||
<img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
|
||||
</div>
|
||||
|
||||
多种大语言模型混合调用[huggingface测试版](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta)(huggingface版不支持chatglm)
|
||||
|
||||
|
||||
---
|
||||
|
||||
## 安装-方法1:直接运行 (Windows, Linux or MacOS)
|
||||
# Installation
|
||||
### 安装方法I:直接运行 (Windows, Linux or MacOS)
|
||||
|
||||
1. 下载项目
|
||||
```sh
|
||||
git clone https://github.com/binary-husky/chatgpt_academic.git
|
||||
cd chatgpt_academic
|
||||
git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
|
||||
cd gpt_academic
|
||||
```
|
||||
|
||||
2. 配置API_KEY和代理设置
|
||||
2. 配置API_KEY
|
||||
|
||||
在`config.py`中,配置 海外Proxy 和 OpenAI API KEY,说明如下
|
||||
```
|
||||
1. 如果你在国内,需要设置海外代理才能够顺利使用OpenAI API,设置方法请仔细阅读config.py(1.修改其中的USE_PROXY为True; 2.按照说明修改其中的proxies)。
|
||||
2. 配置 OpenAI API KEY。支持任意数量的OpenAI的密钥和API2D的密钥共存/负载均衡,多个KEY用英文逗号分隔即可,例如输入 API_KEY="OpenAI密钥1,API2D密钥2,OpenAI密钥3,OpenAI密钥4"
|
||||
3. 与代理网络有关的issue(网络超时、代理不起作用)汇总到 https://github.com/binary-husky/chatgpt_academic/issues/1
|
||||
```
|
||||
(P.S. 程序运行时会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。因此,如果您能理解我们的配置读取逻辑,我们强烈建议您在`config.py`旁边创建一个名为`config_private.py`的新配置文件,并把`config.py`中的配置转移(复制)到`config_private.py`中。`config_private.py`不受git管控,可以让您的隐私信息更加安全。)
|
||||
在`config.py`中,配置API KEY等设置,[点击查看特殊网络环境设置方法](https://github.com/binary-husky/gpt_academic/issues/1) 。
|
||||
|
||||
(P.S. 程序运行时会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。因此,如果您能理解我们的配置读取逻辑,我们强烈建议您在`config.py`旁边创建一个名为`config_private.py`的新配置文件,并把`config.py`中的配置转移(复制)到`config_private.py`中(仅复制您修改过的配置条目即可)。`config_private.py`不受git管控,可以让您的隐私信息更加安全。P.S.项目同样支持通过`环境变量`配置大多数选项,环境变量的书写格式参考`docker-compose`文件。读取优先级: `环境变量` > `config_private.py` > `config.py`)
|
||||
|
||||
|
||||
3. 安装依赖
|
||||
```sh
|
||||
# (选择I: 如熟悉python)推荐
|
||||
# (选择I: 如熟悉python)(python版本3.9以上,越新越好),备注:使用官方pip源或者阿里pip源,临时换源方法:python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
|
||||
python -m pip install -r requirements.txt
|
||||
# 备注:使用官方pip源或者阿里pip源,其他pip源(如一些大学的pip)有可能出问题,临时换源方法:python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
|
||||
|
||||
# (选择II: 如不熟悉python)使用anaconda,步骤也是类似的:
|
||||
# (II-1)conda create -n gptac_venv python=3.11
|
||||
# (II-2)conda activate gptac_venv
|
||||
# (II-3)python -m pip install -r requirements.txt
|
||||
# (选择II: 如不熟悉python)使用anaconda,步骤也是类似的 (https://www.bilibili.com/video/BV1rc411W7Dr):
|
||||
conda create -n gptac_venv python=3.11 # 创建anaconda环境
|
||||
conda activate gptac_venv # 激活anaconda环境
|
||||
python -m pip install -r requirements.txt # 这个步骤和pip安装一样的步骤
|
||||
```
|
||||
|
||||
如果需要支持清华ChatGLM后端,需要额外安装更多依赖(前提条件:熟悉python + 电脑配置够强):
|
||||
|
||||
<details><summary>如果需要支持清华ChatGLM2/复旦MOSS/RWKV作为后端,请点击展开此处</summary>
|
||||
<p>
|
||||
|
||||
【可选步骤】如果需要支持清华ChatGLM2/复旦MOSS作为后端,需要额外安装更多依赖(前提条件:熟悉Python + 用过Pytorch + 电脑配置够强):
|
||||
```sh
|
||||
# 【可选步骤I】支持清华ChatGLM2。清华ChatGLM备注:如果遇到"Call ChatGLM fail 不能正常加载ChatGLM的参数" 错误,参考如下: 1:以上默认安装的为torch+cpu版,使用cuda需要卸载torch重新安装torch+cuda; 2:如因本机配置不够无法加载模型,可以修改request_llm/bridge_chatglm.py中的模型精度, 将 AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) 都修改为 AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
|
||||
python -m pip install -r request_llm/requirements_chatglm.txt
|
||||
|
||||
# 【可选步骤II】支持复旦MOSS
|
||||
python -m pip install -r request_llm/requirements_moss.txt
|
||||
git clone --depth=1 https://github.com/OpenLMLab/MOSS.git request_llm/moss # 注意执行此行代码时,必须处于项目根路径
|
||||
|
||||
# 【可选步骤III】支持RWKV Runner
|
||||
参考wiki:https://github.com/binary-husky/gpt_academic/wiki/%E9%80%82%E9%85%8DRWKV-Runner
|
||||
|
||||
# 【可选步骤IV】确保config.py配置文件的AVAIL_LLM_MODELS包含了期望的模型,目前支持的全部模型如下(jittorllms系列目前仅支持docker方案):
|
||||
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
|
||||
```
|
||||
|
||||
</p>
|
||||
</details>
|
||||
|
||||
|
||||
|
||||
4. 运行
|
||||
```sh
|
||||
python main.py
|
||||
```
|
||||
|
||||
5. 测试函数插件
|
||||
```
|
||||
- 测试Python项目分析
|
||||
(选择1)input区域 输入 `./crazy_functions/test_project/python/dqn` , 然后点击 "解析整个Python项目"
|
||||
(选择2)展开文件上传区,将python文件/包含python文件的压缩包拖拽进去,在出现反馈提示后, 然后点击 "解析整个Python项目"
|
||||
- 测试自我代码解读(本项目自译解)
|
||||
点击 "[多线程Demo] 解析此项目本身(源码自译解)"
|
||||
- 测试函数插件模板函数(要求gpt回答历史上的今天发生了什么),您可以根据此函数为模板,实现更复杂的功能
|
||||
点击 "[函数插件模板Demo] 历史上的今天"
|
||||
- 函数插件区下拉菜单中有更多功能可供选择
|
||||
```
|
||||
### 安装方法II:使用Docker
|
||||
|
||||
## 安装-方法2:使用Docker
|
||||
|
||||
1. 仅ChatGPT(推荐大多数人选择)
|
||||
1. 仅ChatGPT(推荐大多数人选择,等价于docker-compose方案1)
|
||||
[](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml)
|
||||
[](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml)
|
||||
[](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml)
|
||||
|
||||
``` sh
|
||||
# 下载项目
|
||||
git clone https://github.com/binary-husky/chatgpt_academic.git
|
||||
cd chatgpt_academic
|
||||
# 配置 “海外Proxy”, “API_KEY” 以及 “WEB_PORT” (例如50923) 等
|
||||
用任意文本编辑器编辑 config.py
|
||||
# 安装
|
||||
docker build -t gpt-academic .
|
||||
#(最后一步-选择1)在Linux环境下,用`--net=host`更方便快捷
|
||||
git clone --depth=1 https://github.com/binary-husky/gpt_academic.git # 下载项目
|
||||
cd gpt_academic # 进入路径
|
||||
nano config.py # 用任意文本编辑器编辑config.py, 配置 “Proxy”, “API_KEY” 以及 “WEB_PORT” (例如50923) 等
|
||||
docker build -t gpt-academic . # 安装
|
||||
|
||||
#(最后一步-Linux操作系统)用`--net=host`更方便快捷
|
||||
docker run --rm -it --net=host gpt-academic
|
||||
#(最后一步-选择2)在macOS/windows环境下,只能用-p选项将容器上的端口(例如50923)暴露给主机上的端口
|
||||
docker run --rm -it -p 50923:50923 gpt-academic
|
||||
#(最后一步-MacOS/Windows操作系统)只能用-p选项将容器上的端口(例如50923)暴露给主机上的端口
|
||||
docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
|
||||
```
|
||||
P.S. 如果需要依赖Latex的插件功能,请见Wiki。另外,您也可以直接使用docker-compose获取Latex功能(修改docker-compose.yml,保留方案4并删除其他方案)。
|
||||
|
||||
2. ChatGPT+ChatGLM(需要对Docker熟悉 + 读懂Dockerfile + 电脑配置够强)
|
||||
2. ChatGPT + ChatGLM2 + MOSS + LLAMA2 + 通义千问(需要熟悉[Nvidia Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian)运行时)
|
||||
[](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-chatglm.yml)
|
||||
|
||||
``` sh
|
||||
# 修改Dockerfile
|
||||
cd docs && nano Dockerfile+ChatGLM
|
||||
# 构建 (Dockerfile+ChatGLM在docs路径下,请先cd docs)
|
||||
docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM .
|
||||
# 运行 (1) 直接运行:
|
||||
docker run --rm -it --net=host --gpus=all gpt-academic
|
||||
# 运行 (2) 我想运行之前进容器做一些调整:
|
||||
docker run --rm -it --net=host --gpus=all gpt-academic bash
|
||||
# 修改docker-compose.yml,保留方案2并删除其他方案。修改docker-compose.yml中方案2的配置,参考其中注释即可
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
3. ChatGPT + LLAMA + 盘古 + RWKV(需要熟悉[Nvidia Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian)运行时)
|
||||
[](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-jittorllms.yml)
|
||||
|
||||
``` sh
|
||||
# 修改docker-compose.yml,保留方案3并删除其他方案。修改docker-compose.yml中方案3的配置,参考其中注释即可
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
|
||||
## 安装-方法3:其他部署方式(需要云服务器知识与经验)
|
||||
### 安装方法III:其他部署姿势
|
||||
1. 一键运行脚本。
|
||||
完全不熟悉python环境的Windows用户可以下载[Release](https://github.com/binary-husky/gpt_academic/releases)中发布的一键运行脚本安装无本地模型的版本。
|
||||
脚本的贡献来源是[oobabooga](https://github.com/oobabooga/one-click-installers)。
|
||||
|
||||
1. 远程云服务器部署
|
||||
请访问[部署wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
|
||||
2. 使用docker-compose运行。
|
||||
请阅读docker-compose.yml后,按照其中的提示操作即可
|
||||
|
||||
2. 使用WSL2(Windows Subsystem for Linux 子系统)
|
||||
请访问[部署wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
|
||||
3. 如何使用反代URL
|
||||
按照`config.py`中的说明配置API_URL_REDIRECT即可。
|
||||
|
||||
4. 微软云AzureAPI
|
||||
按照`config.py`中的说明配置即可(AZURE_ENDPOINT等四个配置)
|
||||
|
||||
5. 远程云服务器部署(需要云服务器知识与经验)。
|
||||
请访问[部署wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
|
||||
|
||||
6. 使用Sealos[一键部署](https://github.com/binary-husky/gpt_academic/issues/993)。
|
||||
|
||||
7. 使用WSL2(Windows Subsystem for Linux 子系统)。
|
||||
请访问[部署wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
|
||||
|
||||
8. 如何在二级网址(如`http://localhost/subpath`)下运行。
|
||||
请访问[FastAPI运行说明](docs/WithFastapi.md)
|
||||
|
||||
|
||||
## 安装-代理配置
|
||||
1. 常规方法
|
||||
[配置代理](https://github.com/binary-husky/chatgpt_academic/issues/1)
|
||||
|
||||
2. 纯新手教程
|
||||
[纯新手教程](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)
|
||||
|
||||
|
||||
---
|
||||
|
||||
## 自定义新的便捷按钮 / 自定义函数插件
|
||||
|
||||
1. 自定义新的便捷按钮(学术快捷键)
|
||||
# Advanced Usage
|
||||
### I:自定义新的便捷按钮(学术快捷键)
|
||||
任意文本编辑器打开`core_functional.py`,添加条目如下,然后重启程序即可。(如果按钮已经添加成功并可见,那么前缀、后缀都支持热修改,无需重启程序即可生效。)
|
||||
例如
|
||||
```
|
||||
@@ -202,78 +227,94 @@ docker run --rm -it --net=host --gpus=all gpt-academic bash
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
|
||||
</div>
|
||||
|
||||
2. 自定义函数插件
|
||||
### II:自定义函数插件
|
||||
|
||||
编写强大的函数插件来执行任何你想得到的和想不到的任务。
|
||||
本项目的插件编写、调试难度很低,只要您具备一定的python基础知识,就可以仿照我们提供的模板实现自己的插件功能。
|
||||
详情请参考[函数插件指南](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)。
|
||||
详情请参考[函数插件指南](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)。
|
||||
|
||||
|
||||
---
|
||||
|
||||
|
||||
## 部分功能展示
|
||||
|
||||
1. 图片显示:
|
||||
# Latest Update
|
||||
### I:新功能动态
|
||||
|
||||
1. 对话保存功能。在函数插件区调用 `保存当前的对话` 即可将当前对话保存为可读+可复原的html文件,
|
||||
另外在函数插件区(下拉菜单)调用 `载入对话历史存档` ,即可还原之前的会话。
|
||||
Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史html存档缓存。
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/228737599-bf0a9d9c-1808-4f43-ae15-dfcc7af0f295.png" width="800" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/235222390-24a9acc0-680f-49f5-bc81-2f3161f1e049.png" width="500" >
|
||||
</div>
|
||||
|
||||
2. 本项目的代码自译解(如果一个程序能够读懂并剖析自己):
|
||||
|
||||
2. ⭐Latex/Arxiv论文翻译功能⭐
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="800" >
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/002a1a75-ace0-4e6a-94e2-ec1406a746f1" height="250" > ===>
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/9fdcc391-f823-464f-9322-f8719677043b" height="250" >
|
||||
</div>
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226936618-9b487e4b-ab5b-4b6e-84c6-16942102e917.png" width="800" >
|
||||
</div>
|
||||
3. 虚空终端(从自然语言输入中,理解用户意图+自动调用其他插件)
|
||||
|
||||
3. 其他任意Python/Cpp/Java/Go/Rect/...项目剖析:
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="800" >
|
||||
</div>
|
||||
- 步骤一:输入 “ 请调用插件翻译PDF论文,地址为https://www.nature.com/articles/s41586-019-1724-z.pdf ”
|
||||
- 步骤二:点击“虚空终端”
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="800" >
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/66f1b044-e9ff-4eed-9126-5d4f3668f1ed" width="500" >
|
||||
</div>
|
||||
|
||||
4. Latex论文一键阅读理解与摘要生成
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504406-86ab97cd-f208-41c3-8e4a-7000e51cf980.png" width="800" >
|
||||
</div>
|
||||
|
||||
5. 自动报告生成
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
|
||||
</div>
|
||||
|
||||
6. 模块化功能设计
|
||||
4. 模块化功能设计,简单的接口却能支持强大的功能
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
|
||||
</div>
|
||||
|
||||
|
||||
7. 源代码转译英文
|
||||
|
||||
5. 译解其他开源项目
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/229720562-fe6c3508-6142-4635-a83d-21eb3669baee.png" height="400" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" height="250" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" height="250" >
|
||||
</div>
|
||||
|
||||
8. 互联网在线信息综合
|
||||
|
||||
6. 装饰[live2d](https://github.com/fghrsh/live2d_demo)的小功能(默认关闭,需要修改`config.py`)
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/233575247-fb00819e-6d1b-4bb7-bd54-1d7528f03dd9.png" width="800" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/236432361-67739153-73e8-43fe-8111-b61296edabd9.png" width="500" >
|
||||
</div>
|
||||
|
||||
7. 新增MOSS大语言模型支持
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/236639178-92836f37-13af-4fdd-984d-b4450fe30336.png" width="500" >
|
||||
</div>
|
||||
|
||||
8. OpenAI图像生成
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/bc7ab234-ad90-48a0-8d62-f703d9e74665" width="500" >
|
||||
</div>
|
||||
|
||||
9. OpenAI音频解析与总结
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/709ccf95-3aee-498a-934a-e1c22d3d5d5b" width="500" >
|
||||
</div>
|
||||
|
||||
10. Latex全文校对纠错
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/651ccd98-02c9-4464-91e1-77a6b7d1b033" height="200" > ===>
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/476f66d9-7716-4537-b5c1-735372c25adb" height="200">
|
||||
</div>
|
||||
|
||||
11. 语言、主题切换
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/b6799499-b6fb-4f0c-9c8e-1b441872f4e8" width="500" >
|
||||
</div>
|
||||
|
||||
|
||||
|
||||
## Todo 与 版本规划:
|
||||
- version 3.2+ (todo): 函数插件支持更多参数接口
|
||||
### II:版本:
|
||||
- version 3.60(todo): 优化虚空终端,引入code interpreter和更多插件
|
||||
- version 3.50: 使用自然语言调用本项目的所有函数插件(虚空终端),支持插件分类,改进UI,设计新主题
|
||||
- version 3.49: 支持百度千帆平台和文心一言
|
||||
- version 3.48: 支持阿里达摩院通义千问,上海AI-Lab书生,讯飞星火
|
||||
- version 3.46: 支持完全脱手操作的实时语音对话
|
||||
- version 3.45: 支持自定义ChatGLM2微调模型
|
||||
- version 3.44: 正式支持Azure,优化界面易用性
|
||||
- version 3.4: +arxiv论文翻译、latex论文批改功能
|
||||
- version 3.3: +互联网信息综合功能
|
||||
- version 3.2: 函数插件支持更多参数接口 (保存对话功能, 解读任意语言代码+同时询问任意的LLM组合)
|
||||
- version 3.1: 支持同时问询多个gpt模型!支持api2d,支持多个apikey负载均衡
|
||||
- version 3.0: 对chatglm和其他小型llm的支持
|
||||
- version 2.6: 重构了插件结构,提高了交互性,加入更多插件
|
||||
@@ -285,16 +326,41 @@ docker run --rm -it --net=host --gpus=all gpt-academic bash
|
||||
- version 2.0: 引入模块化函数插件
|
||||
- version 1.0: 基础功能
|
||||
|
||||
chatgpt_academic开发者QQ群:734063350
|
||||
gpt_academic开发者QQ群-2:610599535
|
||||
|
||||
## 参考与学习
|
||||
- 已知问题
|
||||
- 某些浏览器翻译插件干扰此软件前端的运行
|
||||
- 官方Gradio目前有很多兼容性Bug,请务必使用`requirement.txt`安装Gradio
|
||||
|
||||
### III:主题
|
||||
可以通过修改`THEME`选项(config.py)变更主题
|
||||
1. `Chuanhu-Small-and-Beautiful` [网址](https://github.com/GaiZhenbiao/ChuanhuChatGPT/)
|
||||
|
||||
|
||||
### IV:参考与学习
|
||||
|
||||
```
|
||||
代码中参考了很多其他优秀项目中的设计,主要包括:
|
||||
代码中参考了很多其他优秀项目中的设计,顺序不分先后:
|
||||
|
||||
# 借鉴项目1:借鉴了ChuanhuChatGPT中诸多技巧
|
||||
# 清华ChatGLM2-6B:
|
||||
https://github.com/THUDM/ChatGLM2-6B
|
||||
|
||||
# 清华JittorLLMs:
|
||||
https://github.com/Jittor/JittorLLMs
|
||||
|
||||
# ChatPaper:
|
||||
https://github.com/kaixindelele/ChatPaper
|
||||
|
||||
# Edge-GPT:
|
||||
https://github.com/acheong08/EdgeGPT
|
||||
|
||||
# ChuanhuChatGPT:
|
||||
https://github.com/GaiZhenbiao/ChuanhuChatGPT
|
||||
|
||||
# 借鉴项目2:清华ChatGLM-6B:
|
||||
https://github.com/THUDM/ChatGLM-6B
|
||||
# Oobabooga one-click installer:
|
||||
https://github.com/oobabooga/one-click-installers
|
||||
|
||||
# More:
|
||||
https://github.com/gradio-app/gradio
|
||||
https://github.com/fghrsh/live2d_demo
|
||||
```
|
||||
|
||||
@@ -3,15 +3,20 @@ def check_proxy(proxies):
|
||||
import requests
|
||||
proxies_https = proxies['https'] if proxies is not None else '无'
|
||||
try:
|
||||
response = requests.get("https://ipapi.co/json/",
|
||||
proxies=proxies, timeout=4)
|
||||
response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4)
|
||||
data = response.json()
|
||||
print(f'查询代理的地理位置,返回的结果是{data}')
|
||||
if 'country_name' in data:
|
||||
country = data['country_name']
|
||||
result = f"代理配置 {proxies_https}, 代理所在地:{country}"
|
||||
elif 'error' in data:
|
||||
result = f"代理配置 {proxies_https}, 代理所在地:未知,IP查询频率受限"
|
||||
alternative = _check_with_backup_source(proxies)
|
||||
if alternative is None:
|
||||
result = f"代理配置 {proxies_https}, 代理所在地:未知,IP查询频率受限"
|
||||
else:
|
||||
result = f"代理配置 {proxies_https}, 代理所在地:{alternative}"
|
||||
else:
|
||||
result = f"代理配置 {proxies_https}, 代理数据解析失败:{data}"
|
||||
print(result)
|
||||
return result
|
||||
except:
|
||||
@@ -19,6 +24,11 @@ def check_proxy(proxies):
|
||||
print(result)
|
||||
return result
|
||||
|
||||
def _check_with_backup_source(proxies):
|
||||
import random, string, requests
|
||||
random_string = ''.join(random.choices(string.ascii_letters + string.digits, k=32))
|
||||
try: return requests.get(f"http://{random_string}.edns.ip-api.com/json", proxies=proxies, timeout=4).json()['dns']['geo']
|
||||
except: return None
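A minimal usage sketch of `check_proxy` above, assuming the function lives in `check_proxy.py` at the project root as in this repository (the proxies dict mirrors the format used in `config.py`):

```
from check_proxy import check_proxy

proxies = {
    "http":  "socks5h://localhost:11284",
    "https": "socks5h://localhost:11284",
}
# Prints and returns a summary string such as "代理配置 socks5h://localhost:11284, 代理所在地:..."
check_proxy(proxies)
```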
|
||||
|
||||
def backup_and_download(current_version, remote_version):
|
||||
"""
|
||||
@@ -56,22 +66,24 @@ def patch_and_restart(path):
|
||||
"""
|
||||
一键更新协议:覆盖和重启
|
||||
"""
|
||||
import distutils
|
||||
from distutils import dir_util
|
||||
import shutil
|
||||
import os
|
||||
import sys
|
||||
import time
|
||||
import glob
|
||||
from colorful import print亮黄, print亮绿, print亮红
|
||||
# if not using config_private, move origin config.py as config_private.py
|
||||
if not os.path.exists('config_private.py'):
|
||||
print亮黄('由于您没有设置config_private.py私密配置,现将您的现有配置移动至config_private.py以防止配置丢失,',
|
||||
'另外您可以随时在history子文件夹下找回旧版的程序。')
|
||||
shutil.copyfile('config.py', 'config_private.py')
|
||||
distutils.dir_util.copy_tree(path+'/chatgpt_academic-master', './')
|
||||
import subprocess
|
||||
path_new_version = glob.glob(path + '/*-master')[0]
|
||||
dir_util.copy_tree(path_new_version, './')
|
||||
print亮绿('代码已经更新,即将更新pip包依赖……')
|
||||
for i in reversed(range(5)): time.sleep(1); print(i)
|
||||
try:
|
||||
import subprocess
|
||||
subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-r', 'requirements.txt'])
|
||||
except:
|
||||
print亮红('pip包依赖安装出现问题,需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。')
|
||||
@@ -92,7 +104,7 @@ def get_current_version():
|
||||
return current_version
|
||||
|
||||
|
||||
def auto_update():
|
||||
def auto_update(raise_error=False):
|
||||
"""
|
||||
一键更新协议:查询版本和用户意见
|
||||
"""
|
||||
@@ -113,7 +125,7 @@ def auto_update():
|
||||
with open('./version', 'r', encoding='utf8') as f:
|
||||
current_version = f.read()
|
||||
current_version = json.loads(current_version)['version']
|
||||
if (remote_version - current_version) >= 0.01:
|
||||
if (remote_version - current_version) >= 0.01-1e-5:
|
||||
from colorful import print亮黄
|
||||
print亮黄(
|
||||
f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}。{new_feature}')
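A quick illustration of why the `- 1e-5` slack was added to the comparison above: the version numbers are handled as floats, and the difference of two nearby floats may land marginally below 0.01 (the version pair here is hypothetical):

```
remote_version, current_version = 3.56, 3.55   # hypothetical version pair
diff = remote_version - current_version
print(diff)                     # typically not exactly 0.01 because of binary rounding
print(diff >= 0.01)             # may be False even though a newer version exists
print(diff >= 0.01 - 1e-5)      # the added tolerance makes the check robust
```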
|
||||
@@ -124,14 +136,22 @@ def auto_update():
|
||||
try:
|
||||
patch_and_restart(path)
|
||||
except:
|
||||
print('更新失败。')
|
||||
msg = '更新失败。'
|
||||
if raise_error:
|
||||
from toolbox import trimmed_format_exc
|
||||
msg += trimmed_format_exc()
|
||||
print(msg)
|
||||
else:
|
||||
print('自动更新程序:已禁用')
|
||||
return
|
||||
else:
|
||||
return
|
||||
except:
|
||||
print('自动更新程序:已禁用')
|
||||
msg = '自动更新程序:已禁用。建议排查:代理网络配置。'
|
||||
if raise_error:
|
||||
from toolbox import trimmed_format_exc
|
||||
msg += trimmed_format_exc()
|
||||
print(msg)
|
||||
|
||||
def warm_up_modules():
|
||||
print('正在执行一些模块的预热...')
|
||||
|
||||
80
colorful.py
@@ -34,58 +34,28 @@ def print亮紫(*kw,**kargs):
|
||||
def print亮靛(*kw,**kargs):
|
||||
print("\033[1;36m",*kw,"\033[0m",**kargs)
|
||||
|
||||
|
||||
|
||||
def print亮红(*kw,**kargs):
|
||||
print("\033[1;31m",*kw,"\033[0m",**kargs)
|
||||
def print亮绿(*kw,**kargs):
|
||||
print("\033[1;32m",*kw,"\033[0m",**kargs)
|
||||
def print亮黄(*kw,**kargs):
|
||||
print("\033[1;33m",*kw,"\033[0m",**kargs)
|
||||
def print亮蓝(*kw,**kargs):
|
||||
print("\033[1;34m",*kw,"\033[0m",**kargs)
|
||||
def print亮紫(*kw,**kargs):
|
||||
print("\033[1;35m",*kw,"\033[0m",**kargs)
|
||||
def print亮靛(*kw,**kargs):
|
||||
print("\033[1;36m",*kw,"\033[0m",**kargs)
|
||||
|
||||
print_red = print红
|
||||
print_green = print绿
|
||||
print_yellow = print黄
|
||||
print_blue = print蓝
|
||||
print_purple = print紫
|
||||
print_indigo = print靛
|
||||
|
||||
print_bold_red = print亮红
|
||||
print_bold_green = print亮绿
|
||||
print_bold_yellow = print亮黄
|
||||
print_bold_blue = print亮蓝
|
||||
print_bold_purple = print亮紫
|
||||
print_bold_indigo = print亮靛
|
||||
|
||||
if not stdout.isatty():
|
||||
# redirection, avoid a fucked up log file
|
||||
print红 = print
|
||||
print绿 = print
|
||||
print黄 = print
|
||||
print蓝 = print
|
||||
print紫 = print
|
||||
print靛 = print
|
||||
print亮红 = print
|
||||
print亮绿 = print
|
||||
print亮黄 = print
|
||||
print亮蓝 = print
|
||||
print亮紫 = print
|
||||
print亮靛 = print
|
||||
print_red = print
|
||||
print_green = print
|
||||
print_yellow = print
|
||||
print_blue = print
|
||||
print_purple = print
|
||||
print_indigo = print
|
||||
print_bold_red = print
|
||||
print_bold_green = print
|
||||
print_bold_yellow = print
|
||||
print_bold_blue = print
|
||||
print_bold_purple = print
|
||||
print_bold_indigo = print
|
||||
# Do you like the elegance of Chinese characters?
|
||||
def sprint红(*kw):
|
||||
return "\033[0;31m"+' '.join(kw)+"\033[0m"
|
||||
def sprint绿(*kw):
|
||||
return "\033[0;32m"+' '.join(kw)+"\033[0m"
|
||||
def sprint黄(*kw):
|
||||
return "\033[0;33m"+' '.join(kw)+"\033[0m"
|
||||
def sprint蓝(*kw):
|
||||
return "\033[0;34m"+' '.join(kw)+"\033[0m"
|
||||
def sprint紫(*kw):
|
||||
return "\033[0;35m"+' '.join(kw)+"\033[0m"
|
||||
def sprint靛(*kw):
|
||||
return "\033[0;36m"+' '.join(kw)+"\033[0m"
|
||||
def sprint亮红(*kw):
|
||||
return "\033[1;31m"+' '.join(kw)+"\033[0m"
|
||||
def sprint亮绿(*kw):
|
||||
return "\033[1;32m"+' '.join(kw)+"\033[0m"
|
||||
def sprint亮黄(*kw):
|
||||
return "\033[1;33m"+' '.join(kw)+"\033[0m"
|
||||
def sprint亮蓝(*kw):
|
||||
return "\033[1;34m"+' '.join(kw)+"\033[0m"
|
||||
def sprint亮紫(*kw):
|
||||
return "\033[1;35m"+' '.join(kw)+"\033[0m"
|
||||
def sprint亮靛(*kw):
|
||||
return "\033[1;36m"+' '.join(kw)+"\033[0m"
|
||||
|
||||
228
config.py
@@ -1,62 +1,246 @@
|
||||
# [step 1]>> 例如: API_KEY = "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" (此key无效)
|
||||
API_KEY = "sk-此处填API密钥" # 可同时填写多个API-KEY,用英文逗号分割,例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey1,fkxxxx-api2dkey2"
|
||||
"""
|
||||
以下所有配置也都支持利用环境变量覆写,环境变量配置格式见docker-compose.yml。
|
||||
读取优先级:环境变量 > config_private.py > config.py
|
||||
--- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
|
||||
All the following configurations also support using environment variables to override,
|
||||
and the environment variable configuration format can be seen in docker-compose.yml.
|
||||
Configuration reading priority: environment variable > config_private.py > config.py
|
||||
"""
|
||||
|
||||
# [step 2]>> 改为True应用代理,如果直接在海外服务器部署,此处不修改
|
||||
# [step 1]>> API_KEY = "sk-123456789xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx123456789"。极少数情况下,还需要填写组织(格式如org-123456789abcdefghijklmno的),请向下翻,找 API_ORG 设置项
|
||||
API_KEY = "此处填API密钥" # 可同时填写多个API-KEY,用英文逗号分割,例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey3,azure-apikey4"
|
||||
|
||||
|
||||
# [step 2]>> 改为True应用代理,如果直接在海外服务器部署,此处不修改;如果使用本地或无地域限制的大模型时,此处也不需要修改
|
||||
USE_PROXY = False
|
||||
if USE_PROXY:
|
||||
# 填写格式是 [协议]:// [地址] :[端口],填写之前不要忘记把USE_PROXY改成True,如果直接在海外服务器部署,此处不修改
|
||||
# 例如 "socks5h://localhost:11284"
|
||||
# [协议] 常见协议无非socks5h/http; 例如 v2**y 和 ss* 的默认本地协议是socks5h; 而cl**h 的默认本地协议是http
|
||||
# [地址] 懂的都懂,不懂就填localhost或者127.0.0.1肯定错不了(localhost意思是代理软件安装在本机上)
|
||||
# [端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都应该在最显眼的位置上
|
||||
|
||||
# 代理网络的地址,打开你的科学上网软件查看代理的协议(socks5/http)、地址(localhost)和端口(11284)
|
||||
"""
|
||||
填写格式是 [协议]:// [地址] :[端口],填写之前不要忘记把USE_PROXY改成True,如果直接在海外服务器部署,此处不修改
|
||||
<配置教程&视频教程> https://github.com/binary-husky/gpt_academic/issues/1>
|
||||
[协议] 常见协议无非socks5h/http; 例如 v2**y 和 ss* 的默认本地协议是socks5h; 而cl**h 的默认本地协议是http
|
||||
[地址] 懂的都懂,不懂就填localhost或者127.0.0.1肯定错不了(localhost意思是代理软件安装在本机上)
|
||||
[端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都应该在最显眼的位置上
|
||||
"""
|
||||
# 代理网络的地址,打开你的*学*网软件查看代理的协议(socks5h / http)、地址(localhost)和端口(11284)
|
||||
proxies = {
|
||||
# [协议]:// [地址] :[端口]
|
||||
"http": "socks5h://localhost:11284",
|
||||
"https": "socks5h://localhost:11284",
|
||||
"http": "socks5h://localhost:11284", # 再例如 "http": "http://127.0.0.1:7890",
|
||||
"https": "socks5h://localhost:11284", # 再例如 "https": "http://127.0.0.1:7890",
|
||||
}
|
||||
else:
|
||||
proxies = None
|
||||
|
||||
# [step 3]>> 多线程函数插件中,默认允许多少路线程同时访问OpenAI。Free trial users的限制是每分钟3次,Pay-as-you-go users的限制是每分钟3500次
|
||||
# 一言以蔽之:免费用户填3,OpenAI绑了信用卡的用户可以填 16 或者更高。提高限制请查询:https://platform.openai.com/docs/guides/rate-limits/overview
|
||||
# ------------------------------------ 以下配置可以优化体验, 但大部分场合下并不需要修改 ------------------------------------
|
||||
|
||||
# 重新URL重新定向,实现更换API_URL的作用(高危设置! 常规情况下不要修改! 通过修改此设置,您将把您的API-KEY和对话隐私完全暴露给您设定的中间人!)
|
||||
# 格式: API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "在这里填写重定向的api.openai.com的URL"}
|
||||
# 举例: API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "https://reverse-proxy-url/v1/chat/completions"}
|
||||
API_URL_REDIRECT = {}
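An illustrative sketch of how a redirect table in this format could be applied before a request is sent; `resolve_endpoint` is a hypothetical helper for demonstration only, not part of this repository:

```
API_URL_REDIRECT = {
    "https://api.openai.com/v1/chat/completions":
    "https://reverse-proxy-url/v1/chat/completions",
}

def resolve_endpoint(url, redirect=API_URL_REDIRECT):
    # Fall back to the original URL when no redirect rule matches.
    return redirect.get(url, url)

resolve_endpoint("https://api.openai.com/v1/chat/completions")
```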
|
||||
|
||||
|
||||
# 多线程函数插件中,默认允许多少路线程同时访问OpenAI。Free trial users的限制是每分钟3次,Pay-as-you-go users的限制是每分钟3500次
|
||||
# 一言以蔽之:免费(5刀)用户填3,OpenAI绑了信用卡的用户可以填 16 或者更高。提高限制请查询:https://platform.openai.com/docs/guides/rate-limits/overview
|
||||
DEFAULT_WORKER_NUM = 3
|
||||
|
||||
|
||||
# [step 4]>> 以下配置可以优化体验,但大部分场合下并不需要修改
|
||||
# 对话窗的高度
|
||||
# 色彩主题,可选 ["Default", "Chuanhu-Small-and-Beautiful", "High-Contrast"]
|
||||
THEME = "Default"
|
||||
|
||||
|
||||
# 对话窗的高度 (仅在LAYOUT="TOP-DOWN"时生效)
|
||||
CHATBOT_HEIGHT = 1115
|
||||
|
||||
|
||||
# 代码高亮
|
||||
CODE_HIGHLIGHT = True
|
||||
|
||||
|
||||
# 窗口布局
|
||||
LAYOUT = "LEFT-RIGHT" # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局)
|
||||
LAYOUT = "LEFT-RIGHT" # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局)
|
||||
DARK_MODE = True # 暗色模式 / 亮色模式
|
||||
|
||||
|
||||
# 发送请求到OpenAI后,等待多久判定为超时
|
||||
TIMEOUT_SECONDS = 30
|
||||
|
||||
|
||||
# 网页的端口, -1代表随机端口
|
||||
WEB_PORT = -1
|
||||
|
||||
|
||||
# 如果OpenAI不响应(网络卡顿、代理失败、KEY失效),重试的次数限制
|
||||
MAX_RETRY = 2
|
||||
|
||||
# OpenAI模型选择是(gpt4现在只对申请成功的人开放,体验gpt-4可以试试api2d)
|
||||
|
||||
# 插件分类默认选项
|
||||
DEFAULT_FN_GROUPS = ['对话', '编程', '学术']
|
||||
|
||||
|
||||
# 模型选择是 (注意: LLM_MODEL是默认选中的模型, 它*必须*被包含在AVAIL_LLM_MODELS列表中 )
|
||||
LLM_MODEL = "gpt-3.5-turbo" # 可选 ↓↓↓
|
||||
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm"]
|
||||
AVAIL_LLM_MODELS = ["gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5", "api2d-gpt-3.5-turbo",
|
||||
"gpt-4", "api2d-gpt-4", "chatglm", "moss", "newbing", "stack-claude"]
|
||||
# P.S. 其他可用的模型还包括 ["qianfan", "llama2", "qwen", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613",
|
||||
# "spark", "sparkv2", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"]
|
||||
|
||||
|
||||
# 百度千帆(LLM_MODEL="qianfan")
|
||||
BAIDU_CLOUD_API_KEY = ''
|
||||
BAIDU_CLOUD_SECRET_KEY = ''
|
||||
BAIDU_CLOUD_QIANFAN_MODEL = 'ERNIE-Bot' # 可选 "ERNIE-Bot"(文心一言), "ERNIE-Bot-turbo", "BLOOMZ-7B", "Llama-2-70B-Chat", "Llama-2-13B-Chat", "Llama-2-7B-Chat"
|
||||
|
||||
|
||||
# 如果使用ChatGLM2微调模型,请把 LLM_MODEL="chatglmft",并在此处指定模型路径
|
||||
CHATGLM_PTUNING_CHECKPOINT = "" # 例如"/home/hmp/ChatGLM2-6B/ptuning/output/6b-pt-128-1e-2/checkpoint-100"
|
||||
|
||||
|
||||
# 本地LLM模型如ChatGLM的执行方式 CPU/GPU
|
||||
LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda"
|
||||
LOCAL_MODEL_QUANT = "FP16" # 默认 "FP16" "INT4" 启用量化INT4版本 "INT8" 启用量化INT8版本
|
||||
|
||||
|
||||
# 设置gradio的并行线程数(不需要修改)
|
||||
CONCURRENT_COUNT = 100
|
||||
|
||||
|
||||
# 是否在提交时自动清空输入框
|
||||
AUTO_CLEAR_TXT = False
|
||||
|
||||
|
||||
# 加一个live2d装饰
|
||||
ADD_WAIFU = False
|
||||
|
||||
|
||||
# 设置用户名和密码(不需要修改)(相关功能不稳定,与gradio版本和网络都相关,如果本地使用不建议加这个)
|
||||
# [("username", "password"), ("username2", "password2"), ...]
|
||||
AUTHENTICATION = []
|
||||
|
||||
# 重新URL重新定向,实现更换API_URL的作用(常规情况下,不要修改!!)
|
||||
# 格式 {"https://api.openai.com/v1/chat/completions": "重定向的URL"}
|
||||
API_URL_REDIRECT = {}
|
||||
|
||||
# 如果需要在二级路径下运行(常规情况下,不要修改!!)(需要配合修改main.py才能生效!)
|
||||
CUSTOM_PATH = "/"
|
||||
|
||||
|
||||
# 极少数情况下,openai的官方KEY需要伴随组织编码(格式如org-xxxxxxxxxxxxxxxxxxxxxxxx)使用
|
||||
API_ORG = ""
|
||||
|
||||
|
||||
# 如果需要使用Slack Claude,使用教程详情见 request_llm/README.md
|
||||
SLACK_CLAUDE_BOT_ID = ''
|
||||
SLACK_CLAUDE_USER_TOKEN = ''
|
||||
|
||||
|
||||
# 如果需要使用AZURE 详情请见额外文档 docs\use_azure.md
|
||||
AZURE_ENDPOINT = "https://你亲手写的api名称.openai.azure.com/"
|
||||
AZURE_API_KEY = "填入azure openai api的密钥" # 建议直接在API_KEY处填写,该选项即将被弃用
|
||||
AZURE_ENGINE = "填入你亲手写的部署名" # 读 docs\use_azure.md
|
||||
|
||||
|
||||
# 使用Newbing
|
||||
NEWBING_STYLE = "creative" # ["creative", "balanced", "precise"]
|
||||
NEWBING_COOKIES = """
|
||||
put your new bing cookies here
|
||||
"""
|
||||
|
||||
|
||||
# 阿里云实时语音识别 配置难度较高 仅建议高手用户使用 参考 https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md
|
||||
ENABLE_AUDIO = False
|
||||
ALIYUN_TOKEN="" # 例如 f37f30e0f9934c34a992f6f64f7eba4f
|
||||
ALIYUN_APPKEY="" # 例如 RoPlZrM88DnAFkZK
|
||||
ALIYUN_ACCESSKEY="" # (无需填写)
|
||||
ALIYUN_SECRET="" # (无需填写)
|
||||
|
||||
|
||||
# 接入讯飞星火大模型 https://console.xfyun.cn/services/iat
|
||||
XFYUN_APPID = "00000000"
|
||||
XFYUN_API_SECRET = "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
|
||||
XFYUN_API_KEY = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
|
||||
|
||||
|
||||
# Claude API KEY
|
||||
ANTHROPIC_API_KEY = ""
|
||||
|
||||
|
||||
# 自定义API KEY格式
|
||||
CUSTOM_API_KEY_PATTERN = ""
|
||||
|
||||
|
||||
# HUGGINGFACE的TOKEN,下载LLAMA时起作用 https://huggingface.co/docs/hub/security-tokens
|
||||
HUGGINGFACE_ACCESS_TOKEN = "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"
|
||||
|
||||
|
||||
# GROBID服务器地址(填写多个可以均衡负载),用于高质量地读取PDF文档
|
||||
# 获取方法:复制以下空间https://huggingface.co/spaces/qingxu98/grobid,设为public,然后GROBID_URL = "https://(你的hf用户名如qingxu98)-(你的填写的空间名如grobid).hf.space"
|
||||
GROBID_URLS = [
|
||||
"https://qingxu98-grobid.hf.space","https://qingxu98-grobid2.hf.space","https://qingxu98-grobid3.hf.space",
|
||||
"https://shaocongma-grobid.hf.space","https://FBR123-grobid.hf.space", "https://yeku-grobid.hf.space",
|
||||
]
|
||||
|
||||
|
||||
# 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性,默认关闭
|
||||
ALLOW_RESET_CONFIG = False
|
||||
|
||||
|
||||
"""
|
||||
在线大模型配置关联关系示意图
|
||||
│
|
||||
├── "gpt-3.5-turbo" 等openai模型
|
||||
│ ├── API_KEY
|
||||
│ ├── CUSTOM_API_KEY_PATTERN(不常用)
|
||||
│ ├── API_ORG(不常用)
|
||||
│ └── API_URL_REDIRECT(不常用)
|
||||
│
|
||||
├── "azure-gpt-3.5" 等azure模型
|
||||
│ ├── API_KEY
|
||||
│ ├── AZURE_ENDPOINT
|
||||
│ ├── AZURE_API_KEY
|
||||
│ ├── AZURE_ENGINE
|
||||
│ └── API_URL_REDIRECT
|
||||
│
|
||||
├── "spark" 星火认知大模型 spark & sparkv2
|
||||
│ ├── XFYUN_APPID
|
||||
│ ├── XFYUN_API_SECRET
|
||||
│ └── XFYUN_API_KEY
|
||||
│
|
||||
├── "claude-1-100k" 等claude模型
|
||||
│ └── ANTHROPIC_API_KEY
|
||||
│
|
||||
├── "stack-claude"
|
||||
│ ├── SLACK_CLAUDE_BOT_ID
|
||||
│ └── SLACK_CLAUDE_USER_TOKEN
|
||||
│
|
||||
├── "qianfan" 百度千帆大模型库
|
||||
│ ├── BAIDU_CLOUD_QIANFAN_MODEL
|
||||
│ ├── BAIDU_CLOUD_API_KEY
|
||||
│ └── BAIDU_CLOUD_SECRET_KEY
|
||||
│
|
||||
├── "newbing" Newbing接口不再稳定,不推荐使用
|
||||
├── NEWBING_STYLE
|
||||
└── NEWBING_COOKIES
|
||||
|
||||
|
||||
用户图形界面布局依赖关系示意图
|
||||
│
|
||||
├── CHATBOT_HEIGHT 对话窗的高度
|
||||
├── CODE_HIGHLIGHT 代码高亮
|
||||
├── LAYOUT 窗口布局
|
||||
├── DARK_MODE 暗色模式 / 亮色模式
|
||||
├── DEFAULT_FN_GROUPS 插件分类默认选项
|
||||
├── THEME 色彩主题
|
||||
├── AUTO_CLEAR_TXT 是否在提交时自动清空输入框
|
||||
├── ADD_WAIFU 加一个live2d装饰
|
||||
├── ALLOW_RESET_CONFIG 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性
|
||||
|
||||
|
||||
插件在线服务配置依赖关系示意图
|
||||
│
|
||||
├── 语音功能
|
||||
│ ├── ENABLE_AUDIO
|
||||
│ ├── ALIYUN_TOKEN
|
||||
│ ├── ALIYUN_APPKEY
|
||||
│ ├── ALIYUN_ACCESSKEY
|
||||
│ └── ALIYUN_SECRET
|
||||
│
|
||||
├── PDF文档精准解析
|
||||
│ └── GROBID_URLS
|
||||
|
||||
"""
|
||||
|
||||
@@ -1,20 +1,25 @@
|
||||
# 'primary' 颜色对应 theme.py 中的 primary_hue
|
||||
# 'secondary' 颜色对应 theme.py 中的 neutral_hue
|
||||
# 'stop' 颜色对应 theme.py 中的 color_er
|
||||
# 默认按钮颜色是 secondary
|
||||
import importlib
|
||||
from toolbox import clear_line_break
|
||||
|
||||
|
||||
def get_core_functions():
|
||||
return {
|
||||
"英语学术润色": {
|
||||
# 前言
|
||||
# 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
|
||||
"Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, " +
|
||||
r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. " +
|
||||
r"Furthermore, list all modification and explain the reasons to do so in markdown table." + "\n\n",
|
||||
# 后语
|
||||
# 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来
|
||||
"Suffix": r"",
|
||||
"Color": r"secondary", # 按钮颜色
|
||||
# 按钮颜色 (默认 secondary)
|
||||
"Color": r"secondary",
|
||||
# 按钮是否可见 (默认 True,即可见)
|
||||
"Visible": True,
|
||||
# 是否在触发时清除历史 (默认 False,即不处理之前的对话历史)
|
||||
"AutoClearHistory": False
|
||||
},
|
||||
"中文学术润色": {
|
||||
"Prefix": r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性," +
|
||||
@@ -58,14 +63,34 @@ def get_core_functions():
|
||||
"英译中": {
|
||||
"Prefix": r"翻译成地道的中文:" + "\n\n",
|
||||
"Suffix": r"",
|
||||
"Visible": False,
|
||||
},
|
||||
"找图片": {
|
||||
"Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL," +
|
||||
r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片:" + "\n\n",
|
||||
"Suffix": r"",
|
||||
"Visible": False,
|
||||
},
|
||||
"解释代码": {
|
||||
"Prefix": r"请解释以下代码:" + "\n```\n",
|
||||
"Suffix": "\n```\n",
|
||||
},
|
||||
"参考文献转Bib": {
|
||||
"Prefix": r"Here are some bibliography items, please transform them into bibtex style." +
|
||||
r"Note that, reference styles maybe more than one kind, you should transform each item correctly." +
|
||||
r"Items need to be transformed:",
|
||||
"Visible": False,
|
||||
"Suffix": r"",
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
def handle_core_functionality(additional_fn, inputs, history, chatbot):
|
||||
import core_functional
|
||||
importlib.reload(core_functional) # 热更新prompt
|
||||
core_functional = core_functional.get_core_functions()
|
||||
if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
|
||||
inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
|
||||
if core_functional[additional_fn].get("AutoClearHistory", False):
|
||||
history = []
|
||||
return inputs, history
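A minimal usage sketch of `handle_core_functionality` defined above, assuming it is imported from this module (`core_functional`); the button label must be one of the keys returned by `get_core_functions()` and the sample input is made up:

```
from core_functional import handle_core_functionality

inputs, history = handle_core_functionality(
    additional_fn="英语学术润色",             # which core-function button was clicked
    inputs="The results is significant.",     # raw user input from the text box
    history=[],
    chatbot=[])
# `inputs` now equals Prefix + user text + Suffix, i.e. what is actually sent to the LLM.
```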
|
||||
|
||||
@@ -2,7 +2,6 @@ from toolbox import HotReload # HotReload 的意思是热更新,修改函数
|
||||
|
||||
|
||||
def get_crazy_functions():
|
||||
###################### 第一组插件 ###########################
|
||||
from crazy_functions.读文章写摘要 import 读文章写摘要
|
||||
from crazy_functions.生成函数注释 import 批量生成函数注释
|
||||
from crazy_functions.解析项目源代码 import 解析项目本身
|
||||
@@ -10,8 +9,9 @@ def get_crazy_functions():
|
||||
from crazy_functions.解析项目源代码 import 解析一个C项目的头文件
|
||||
from crazy_functions.解析项目源代码 import 解析一个C项目
|
||||
from crazy_functions.解析项目源代码 import 解析一个Golang项目
|
||||
from crazy_functions.解析项目源代码 import 解析一个Rust项目
|
||||
from crazy_functions.解析项目源代码 import 解析一个Java项目
|
||||
from crazy_functions.解析项目源代码 import 解析一个Rect项目
|
||||
from crazy_functions.解析项目源代码 import 解析一个前端项目
|
||||
from crazy_functions.高级功能函数模板 import 高阶功能模板函数
|
||||
from crazy_functions.代码重写为全英文_多线程 import 全项目切换英文
|
||||
from crazy_functions.Latex全文润色 import Latex英文润色
|
||||
@@ -19,177 +19,520 @@ def get_crazy_functions():
|
||||
from crazy_functions.解析项目源代码 import 解析一个Lua项目
|
||||
from crazy_functions.解析项目源代码 import 解析一个CSharp项目
|
||||
from crazy_functions.总结word文档 import 总结word文档
|
||||
function_plugins = {
|
||||
|
||||
"解析整个Python项目": {
|
||||
"Color": "stop", # 按钮颜色
|
||||
"Function": HotReload(解析一个Python项目)
|
||||
},
|
||||
"批量总结Word文档": {
|
||||
"Color": "stop",
|
||||
"Function": HotReload(总结word文档)
|
||||
},
|
||||
"解析整个C++项目头文件": {
|
||||
"Color": "stop", # 按钮颜色
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Function": HotReload(解析一个C项目的头文件)
|
||||
},
|
||||
"解析整个C++项目(.cpp/.hpp/.c/.h)": {
|
||||
"Color": "stop", # 按钮颜色
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Function": HotReload(解析一个C项目)
|
||||
},
|
||||
"解析整个Go项目": {
|
||||
"Color": "stop", # 按钮颜色
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Function": HotReload(解析一个Golang项目)
|
||||
},
|
||||
"解析整个Java项目": {
|
||||
"Color": "stop", # 按钮颜色
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Function": HotReload(解析一个Java项目)
|
||||
},
|
||||
"解析整个React项目": {
|
||||
"Color": "stop", # 按钮颜色
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Function": HotReload(解析一个Rect项目)
|
||||
},
|
||||
"解析整个Lua项目": {
|
||||
"Color": "stop", # 按钮颜色
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Function": HotReload(解析一个Lua项目)
|
||||
},
|
||||
"解析整个CSharp项目": {
|
||||
"Color": "stop", # 按钮颜色
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Function": HotReload(解析一个CSharp项目)
|
||||
},
|
||||
"读Tex论文写摘要": {
|
||||
"Color": "stop", # 按钮颜色
|
||||
"Function": HotReload(读文章写摘要)
|
||||
},
|
||||
"批量生成函数注释": {
|
||||
"Color": "stop", # 按钮颜色
|
||||
"Function": HotReload(批量生成函数注释)
|
||||
},
|
||||
"[多线程Demo] 解析此项目本身(源码自译解)": {
|
||||
"Function": HotReload(解析项目本身)
|
||||
},
|
||||
"[多线程demo] 把本项目源代码切换成全英文": {
|
||||
# HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Function": HotReload(全项目切换英文)
|
||||
},
|
||||
"[函数插件模板Demo] 历史上的今天": {
|
||||
# HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
|
||||
"Function": HotReload(高阶功能模板函数)
|
||||
},
|
||||
|
||||
}
|
||||
###################### 第二组插件 ###########################
|
||||
# [第二组插件]: 经过充分测试
|
||||
from crazy_functions.解析JupyterNotebook import 解析ipynb文件
|
||||
from crazy_functions.对话历史存档 import 对话历史存档
|
||||
from crazy_functions.对话历史存档 import 载入对话历史存档
|
||||
from crazy_functions.对话历史存档 import 删除所有本地对话历史记录
|
||||
from crazy_functions.辅助功能 import 清除缓存
|
||||
from crazy_functions.批量Markdown翻译 import Markdown英译中
|
||||
from crazy_functions.批量总结PDF文档 import 批量总结PDF文档
|
||||
from crazy_functions.批量总结PDF文档pdfminer import 批量总结PDF文档pdfminer
|
||||
from crazy_functions.批量翻译PDF文档_多线程 import 批量翻译PDF文档
|
||||
from crazy_functions.谷歌检索小助手 import 谷歌检索小助手
|
||||
from crazy_functions.理解PDF文档内容 import 理解PDF文档内容标准文件输入
|
||||
from crazy_functions.Latex全文润色 import Latex中文润色
|
||||
from crazy_functions.Latex全文润色 import Latex英文纠错
|
||||
from crazy_functions.Latex全文翻译 import Latex中译英
|
||||
from crazy_functions.Latex全文翻译 import Latex英译中
|
||||
from crazy_functions.批量Markdown翻译 import Markdown中译英
|
||||
from crazy_functions.批量Markdown翻译 import Markdown英译中
|
||||
from crazy_functions.虚空终端 import 虚空终端
|
||||
|
||||
function_plugins.update({
|
||||
"批量翻译PDF文档(多线程)": {
|
||||
|
||||
function_plugins = {
|
||||
"虚空终端": {
|
||||
"Group": "对话|编程|学术",
|
||||
"Color": "stop",
|
||||
"AsButton": True, # 加入下拉菜单中
|
||||
"AsButton": True,
|
||||
"Function": HotReload(虚空终端)
|
||||
},
|
||||
"解析整个Python项目": {
|
||||
"Group": "编程",
|
||||
"Color": "stop",
|
||||
"AsButton": True,
|
||||
"Info": "解析一个Python项目的所有源文件(.py) | 输入参数为路径",
|
||||
"Function": HotReload(解析一个Python项目)
|
||||
},
|
||||
"载入对话历史存档(先上传存档或输入路径)": {
|
||||
"Group": "对话",
|
||||
"Color": "stop",
|
||||
"AsButton": False,
|
||||
"Info": "载入对话历史存档 | 输入参数为路径",
|
||||
"Function": HotReload(载入对话历史存档)
|
||||
},
|
||||
"删除所有本地对话历史记录(谨慎操作)": {
|
||||
"Group": "对话",
|
||||
"AsButton": False,
|
||||
"Info": "删除所有本地对话历史记录,谨慎操作 | 不需要输入参数",
|
||||
"Function": HotReload(删除所有本地对话历史记录)
|
||||
},
|
||||
"清除所有缓存文件(谨慎操作)": {
|
||||
"Group": "对话",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "清除所有缓存文件,谨慎操作 | 不需要输入参数",
|
||||
"Function": HotReload(清除缓存)
|
||||
},
|
||||
"批量总结Word文档": {
|
||||
"Group": "学术",
|
||||
"Color": "stop",
|
||||
"AsButton": True,
|
||||
"Info": "批量总结word文档 | 输入参数为路径",
|
||||
"Function": HotReload(总结word文档)
|
||||
},
|
||||
"解析整个C++项目头文件": {
|
||||
"Group": "编程",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "解析一个C++项目的所有头文件(.h/.hpp) | 输入参数为路径",
|
||||
"Function": HotReload(解析一个C项目的头文件)
|
||||
},
|
||||
"解析整个C++项目(.cpp/.hpp/.c/.h)": {
|
||||
"Group": "编程",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "解析一个C++项目的所有源文件(.cpp/.hpp/.c/.h)| 输入参数为路径",
|
||||
"Function": HotReload(解析一个C项目)
|
||||
},
|
||||
"解析整个Go项目": {
|
||||
"Group": "编程",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "解析一个Go项目的所有源文件 | 输入参数为路径",
|
||||
"Function": HotReload(解析一个Golang项目)
|
||||
},
|
||||
"解析整个Rust项目": {
|
||||
"Group": "编程",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "解析一个Rust项目的所有源文件 | 输入参数为路径",
|
||||
"Function": HotReload(解析一个Rust项目)
|
||||
},
|
||||
"解析整个Java项目": {
|
||||
"Group": "编程",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "解析一个Java项目的所有源文件 | 输入参数为路径",
|
||||
"Function": HotReload(解析一个Java项目)
|
||||
},
|
||||
"解析整个前端项目(js,ts,css等)": {
|
||||
"Group": "编程",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "解析一个前端项目的所有源文件(js,ts,css等) | 输入参数为路径",
|
||||
"Function": HotReload(解析一个前端项目)
|
||||
},
|
||||
"解析整个Lua项目": {
|
||||
"Group": "编程",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "解析一个Lua项目的所有源文件 | 输入参数为路径",
|
||||
"Function": HotReload(解析一个Lua项目)
|
||||
},
|
||||
"解析整个CSharp项目": {
|
||||
"Group": "编程",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "解析一个CSharp项目的所有源文件 | 输入参数为路径",
|
||||
"Function": HotReload(解析一个CSharp项目)
|
||||
},
|
||||
"解析Jupyter Notebook文件": {
|
||||
"Group": "编程",
|
||||
"Color": "stop",
|
||||
"AsButton": False,
|
||||
"Info": "解析Jupyter Notebook文件 | 输入参数为路径",
|
||||
"Function": HotReload(解析ipynb文件),
|
||||
"AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
|
||||
"ArgsReminder": "若输入0,则不解析notebook中的Markdown块", # 高级参数输入区的显示提示
|
||||
},
|
||||
"读Tex论文写摘要": {
|
||||
"Group": "学术",
|
||||
"Color": "stop",
|
||||
"AsButton": False,
|
||||
"Info": "读取Tex论文并写摘要 | 输入参数为路径",
|
||||
"Function": HotReload(读文章写摘要)
|
||||
},
|
||||
"翻译README或MD": {
|
||||
"Group": "编程",
|
||||
"Color": "stop",
|
||||
"AsButton": True,
|
||||
"Info": "将Markdown翻译为中文 | 输入参数为路径或URL",
|
||||
"Function": HotReload(Markdown英译中)
|
||||
},
|
||||
"翻译Markdown或README(支持Github链接)": {
|
||||
"Group": "编程",
|
||||
"Color": "stop",
|
||||
"AsButton": False,
|
||||
"Info": "将Markdown或README翻译为中文 | 输入参数为路径或URL",
|
||||
"Function": HotReload(Markdown英译中)
|
||||
},
|
||||
"批量生成函数注释": {
|
||||
"Group": "编程",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "批量生成函数的注释 | 输入参数为路径",
|
||||
"Function": HotReload(批量生成函数注释)
|
||||
},
|
||||
"保存当前的对话": {
|
||||
"Group": "对话",
|
||||
"AsButton": True,
|
||||
"Info": "保存当前的对话 | 不需要输入参数",
|
||||
"Function": HotReload(对话历史存档)
|
||||
},
|
||||
"[多线程Demo]解析此项目本身(源码自译解)": {
|
||||
"Group": "对话|编程",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "多线程解析并翻译此项目的源码 | 不需要输入参数",
|
||||
"Function": HotReload(解析项目本身)
|
||||
},
|
||||
"[插件demo]历史上的今天": {
|
||||
"Group": "对话",
|
||||
"AsButton": True,
|
||||
"Info": "查看历史上的今天事件 | 不需要输入参数",
|
||||
"Function": HotReload(高阶功能模板函数)
|
||||
},
|
||||
"精准翻译PDF论文": {
|
||||
"Group": "学术",
|
||||
"Color": "stop",
|
||||
"AsButton": True,
|
||||
"Info": "精准翻译PDF论文为中文 | 输入参数为路径",
|
||||
"Function": HotReload(批量翻译PDF文档)
|
||||
},
|
||||
"询问多个GPT模型": {
|
||||
"Color": "stop", # 按钮颜色
|
||||
"Group": "对话",
|
||||
"Color": "stop",
|
||||
"AsButton": True,
|
||||
"Function": HotReload(同时问询)
|
||||
},
|
||||
"[测试功能] 批量总结PDF文档": {
|
||||
"批量总结PDF文档": {
|
||||
"Group": "学术",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
# HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
|
||||
"Info": "批量总结PDF文档的内容 | 输入参数为路径",
|
||||
"Function": HotReload(批量总结PDF文档)
|
||||
},
|
||||
"[测试功能] 批量总结PDF文档pdfminer": {
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Function": HotReload(批量总结PDF文档pdfminer)
|
||||
},
|
||||
"谷歌学术检索助手(输入谷歌学术搜索页url)": {
|
||||
"Group": "学术",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "使用谷歌学术检索助手搜索指定URL的结果 | 输入参数为谷歌学术搜索页的URL",
|
||||
"Function": HotReload(谷歌检索小助手)
|
||||
},
|
||||
|
||||
"理解PDF文档内容 (模仿ChatPDF)": {
|
||||
# HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
|
||||
"Group": "学术",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "理解PDF文档的内容并进行回答 | 输入参数为路径",
|
||||
"Function": HotReload(理解PDF文档内容标准文件输入)
|
||||
},
|
||||
"[测试功能] 英文Latex项目全文润色(输入路径或上传压缩包)": {
|
||||
# HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
|
||||
"英文Latex项目全文润色(输入路径或上传压缩包)": {
|
||||
"Group": "学术",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "对英文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包",
|
||||
"Function": HotReload(Latex英文润色)
|
||||
},
|
||||
"[测试功能] 中文Latex项目全文润色(输入路径或上传压缩包)": {
|
||||
# HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
|
||||
"英文Latex项目全文纠错(输入路径或上传压缩包)": {
|
||||
"Group": "学术",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "对英文Latex项目全文进行纠错处理 | 输入参数为路径或上传压缩包",
|
||||
"Function": HotReload(Latex英文纠错)
|
||||
},
|
||||
"中文Latex项目全文润色(输入路径或上传压缩包)": {
|
||||
"Group": "学术",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "对中文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包",
|
||||
"Function": HotReload(Latex中文润色)
|
||||
},
|
||||
"[测试功能] Latex项目全文中译英(输入路径或上传压缩包)": {
|
||||
# HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
|
||||
"Latex项目全文中译英(输入路径或上传压缩包)": {
|
||||
"Group": "学术",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "对Latex项目全文进行中译英处理 | 输入参数为路径或上传压缩包",
|
||||
"Function": HotReload(Latex中译英)
|
||||
},
|
||||
"[测试功能] Latex项目全文英译中(输入路径或上传压缩包)": {
|
||||
# HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
|
||||
"Latex项目全文英译中(输入路径或上传压缩包)": {
|
||||
"Group": "学术",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "对Latex项目全文进行英译中处理 | 输入参数为路径或上传压缩包",
|
||||
"Function": HotReload(Latex英译中)
|
||||
},
|
||||
"[测试功能] 批量Markdown中译英(输入路径或上传压缩包)": {
|
||||
# HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
|
||||
"批量Markdown中译英(输入路径或上传压缩包)": {
|
||||
"Group": "编程",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "批量将Markdown文件中文翻译为英文 | 输入参数为路径或上传压缩包",
|
||||
"Function": HotReload(Markdown中译英)
|
||||
},
|
||||
"[测试功能] 批量Markdown英译中(输入路径或上传压缩包)": {
|
||||
# HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Function": HotReload(Markdown英译中)
|
||||
},
|
||||
}
|
||||
|
||||
})
|
||||
# -=--=- 尚未充分测试的实验性插件 & 需要额外依赖的插件 -=--=-
|
||||
try:
|
||||
from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要
|
||||
function_plugins.update({
|
||||
"一键下载arxiv论文并翻译摘要(先在input输入编号,如1812.10695)": {
|
||||
"Group": "学术",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
# "Info": "下载arxiv论文并翻译摘要 | 输入参数为arxiv编号如1812.10695",
|
||||
"Function": HotReload(下载arxiv论文并翻译摘要)
|
||||
}
|
||||
})
|
||||
except:
|
||||
print('Load function plugin failed')
|
||||
|
||||
###################### 第三组插件 ###########################
|
||||
# [第三组插件]: 尚未充分测试的函数插件,放在这里
|
||||
from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要
|
||||
function_plugins.update({
|
||||
"一键下载arxiv论文并翻译摘要(先在input输入编号,如1812.10695)": {
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Function": HotReload(下载arxiv论文并翻译摘要)
|
||||
}
|
||||
})
|
||||
try:
|
||||
from crazy_functions.联网的ChatGPT import 连接网络回答问题
|
||||
function_plugins.update({
|
||||
"连接网络回答问题(输入问题后点击该插件,需要访问谷歌)": {
|
||||
"Group": "对话",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
# "Info": "连接网络回答问题(需要访问谷歌)| 输入参数是一个问题",
|
||||
"Function": HotReload(连接网络回答问题)
|
||||
}
|
||||
})
|
||||
from crazy_functions.联网的ChatGPT_bing版 import 连接bing搜索回答问题
|
||||
function_plugins.update({
|
||||
"连接网络回答问题(中文Bing版,输入问题后点击该插件)": {
|
||||
"Group": "对话",
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Info": "连接网络回答问题(需要访问中文Bing)| 输入参数是一个问题",
|
||||
"Function": HotReload(连接bing搜索回答问题)
|
||||
}
|
||||
})
|
||||
except:
|
||||
print('Load function plugin failed')
|
||||
|
||||
from crazy_functions.联网的ChatGPT import 连接网络回答问题
|
||||
function_plugins.update({
|
||||
"连接网络回答问题(先输入问题,再点击按钮,需要访问谷歌)": {
|
||||
"Color": "stop",
|
||||
"AsButton": False, # 加入下拉菜单中
|
||||
"Function": HotReload(连接网络回答问题)
|
||||
}
|
||||
})
|
||||
try:
|
||||
from crazy_functions.解析项目源代码 import 解析任意code项目
|
||||
function_plugins.update({
|
||||
"解析项目源代码(手动指定和筛选源代码文件类型)": {
|
||||
"Group": "编程",
|
||||
"Color": "stop",
|
||||
"AsButton": False,
|
||||
"AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
|
||||
"ArgsReminder": "输入时用逗号隔开, *代表通配符, 加了^代表不匹配; 不输入代表全部匹配。例如: \"*.c, ^*.cpp, config.toml, ^*.toml\"", # 高级参数输入区的显示提示
|
||||
"Function": HotReload(解析任意code项目)
|
||||
},
|
||||
})
|
||||
except:
|
||||
print('Load function plugin failed')
|
||||
|
||||
try:
|
||||
from crazy_functions.询问多个大语言模型 import 同时问询_指定模型
|
||||
function_plugins.update({
|
||||
"询问多个GPT模型(手动指定询问哪些模型)": {
|
||||
"Group": "对话",
|
||||
"Color": "stop",
|
||||
"AsButton": False,
|
||||
"AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
|
||||
"ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&api2d-gpt-4", # 高级参数输入区的显示提示
|
||||
"Function": HotReload(同时问询_指定模型)
|
||||
},
|
||||
})
|
||||
except:
|
||||
print('Load function plugin failed')
|
||||
|
||||
try:
|
||||
from crazy_functions.图片生成 import 图片生成
|
||||
function_plugins.update({
|
||||
"图片生成(先切换模型到openai或api2d)": {
|
||||
"Group": "对话",
|
||||
"Color": "stop",
|
||||
"AsButton": False,
|
||||
"AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
|
||||
"ArgsReminder": "在这里输入分辨率, 如256x256(默认)", # 高级参数输入区的显示提示
|
||||
"Info": "图片生成 | 输入参数字符串,提供图像的内容",
|
||||
"Function": HotReload(图片生成)
|
||||
},
|
||||
})
|
||||
except:
|
||||
print('Load function plugin failed')
|
||||
|
||||
try:
|
||||
from crazy_functions.总结音视频 import 总结音视频
|
||||
function_plugins.update({
|
||||
"批量总结音视频(输入路径或上传压缩包)": {
|
||||
"Group": "对话",
|
||||
"Color": "stop",
|
||||
"AsButton": False,
|
||||
"AdvancedArgs": True,
|
||||
"ArgsReminder": "调用openai api 使用whisper-1模型, 目前支持的格式:mp4, m4a, wav, mpga, mpeg, mp3。此处可以输入解析提示,例如:解析为简体中文(默认)。",
|
||||
"Info": "批量总结音频或视频 | 输入参数为路径",
|
||||
"Function": HotReload(总结音视频)
|
||||
}
|
||||
})
|
||||
except:
|
||||
print('Load function plugin failed')
|
||||
|
||||
try:
|
||||
from crazy_functions.数学动画生成manim import 动画生成
|
||||
function_plugins.update({
|
||||
"数学动画生成(Manim)": {
|
||||
"Group": "对话",
|
||||
"Color": "stop",
|
||||
"AsButton": False,
|
||||
"Info": "按照自然语言描述生成一个动画 | 输入参数是一段话",
|
||||
"Function": HotReload(动画生成)
|
||||
}
|
||||
})
|
||||
except:
|
||||
print('Load function plugin failed')
|
||||
|
||||
try:
|
||||
from crazy_functions.批量Markdown翻译 import Markdown翻译指定语言
|
||||
function_plugins.update({
|
||||
"Markdown翻译(手动指定语言)": {
|
||||
"Group": "编程",
|
||||
"Color": "stop",
|
||||
"AsButton": False,
|
||||
"AdvancedArgs": True,
|
||||
"ArgsReminder": "请输入要翻译成哪种语言,默认为Chinese。",
|
||||
"Function": HotReload(Markdown翻译指定语言)
|
||||
}
|
||||
})
|
||||
except:
|
||||
print('Load function plugin failed')
|
||||
|
||||
try:
|
||||
from crazy_functions.Langchain知识库 import 知识库问答
|
||||
function_plugins.update({
|
||||
"构建知识库(请先上传文件素材)": {
|
||||
"Group": "对话",
|
||||
"Color": "stop",
|
||||
"AsButton": False,
|
||||
"AdvancedArgs": True,
|
||||
"ArgsReminder": "待注入的知识库名称id, 默认为default",
|
||||
"Function": HotReload(知识库问答)
|
||||
}
|
||||
})
|
||||
except:
|
||||
print('Load function plugin failed')
|
||||
|
||||
try:
|
||||
from crazy_functions.Langchain知识库 import 读取知识库作答
|
||||
function_plugins.update({
|
||||
"知识库问答": {
|
||||
"Group": "对话",
|
||||
"Color": "stop",
|
||||
"AsButton": False,
|
||||
"AdvancedArgs": True,
|
||||
"ArgsReminder": "待提取的知识库名称id, 默认为default, 您需要首先调用构建知识库",
|
||||
"Function": HotReload(读取知识库作答)
|
||||
}
|
||||
})
|
||||
except:
|
||||
print('Load function plugin failed')
|
||||
|
||||
try:
|
||||
from crazy_functions.交互功能函数模板 import 交互功能模板函数
|
||||
function_plugins.update({
|
||||
"交互功能模板函数": {
|
||||
"Group": "对话",
|
||||
"Color": "stop",
|
||||
"AsButton": False,
|
||||
"Function": HotReload(交互功能模板函数)
|
||||
}
|
||||
})
|
||||
except:
|
||||
print('Load function plugin failed')
|
||||
|
||||
try:
|
||||
from crazy_functions.Latex输出PDF结果 import Latex英文纠错加PDF对比
|
||||
function_plugins.update({
|
||||
"Latex英文纠错+高亮修正位置 [需Latex]": {
|
||||
"Group": "学术",
|
||||
"Color": "stop",
|
||||
"AsButton": False,
|
||||
"AdvancedArgs": True,
|
||||
"ArgsReminder": "如果有必要, 请在此处追加更细致的矫错指令(使用英文)。",
|
||||
"Function": HotReload(Latex英文纠错加PDF对比)
|
||||
}
|
||||
})
|
||||
from crazy_functions.Latex输出PDF结果 import Latex翻译中文并重新编译PDF
|
||||
function_plugins.update({
|
||||
"Arixv论文精细翻译(输入arxivID)[需Latex]": {
|
||||
"Group": "学术",
|
||||
"Color": "stop",
|
||||
"AsButton": False,
|
||||
"AdvancedArgs": True,
|
||||
"ArgsReminder":
|
||||
"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 " +
|
||||
"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: " +
|
||||
'If the term "agent" is used in this section, it should be translated to "智能体". ',
|
||||
"Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID,比如1812.10695",
|
||||
"Function": HotReload(Latex翻译中文并重新编译PDF)
|
||||
}
|
||||
})
|
||||
function_plugins.update({
|
||||
"本地Latex论文精细翻译(上传Latex项目)[需Latex]": {
|
||||
"Group": "学术",
|
||||
"Color": "stop",
|
||||
"AsButton": False,
|
||||
"AdvancedArgs": True,
|
||||
"ArgsReminder":
|
||||
"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 " +
|
||||
"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: " +
|
||||
'If the term "agent" is used in this section, it should be translated to "智能体". ',
|
||||
"Info": "本地Latex论文精细翻译 | 输入参数是路径",
|
||||
"Function": HotReload(Latex翻译中文并重新编译PDF)
|
||||
}
|
||||
})
|
||||
except:
|
||||
print('Load function plugin failed')
|
||||
|
||||
try:
|
||||
from toolbox import get_conf
|
||||
ENABLE_AUDIO, = get_conf('ENABLE_AUDIO')
|
||||
if ENABLE_AUDIO:
|
||||
from crazy_functions.语音助手 import 语音助手
|
||||
function_plugins.update({
|
||||
"实时音频采集": {
|
||||
"Group": "对话",
|
||||
"Color": "stop",
|
||||
"AsButton": True,
|
||||
"Info": "开始语言对话 | 没有输入参数",
|
||||
"Function": HotReload(语音助手)
|
||||
}
|
||||
})
|
||||
except:
|
||||
print('Load function plugin failed')
|
||||
|
||||
|
||||
# try:
|
||||
# from crazy_functions.chatglm微调工具 import 微调数据集生成
|
||||
# function_plugins.update({
|
||||
# "黑盒模型学习: 微调数据集生成 (先上传数据集)": {
|
||||
# "Color": "stop",
|
||||
# "AsButton": False,
|
||||
# "AdvancedArgs": True,
|
||||
# "ArgsReminder": "针对数据集输入(如 绿帽子*深蓝色衬衫*黑色运动裤)给出指令,例如您可以将以下命令复制到下方: --llm_to_learn=azure-gpt-3.5 --prompt_prefix='根据下面的服装类型提示,想象一个穿着者,对这个人外貌、身处的环境、内心世界、过去经历进行描写。要求:100字以内,用第二人称。' --system_prompt=''",
|
||||
# "Function": HotReload(微调数据集生成)
|
||||
# }
|
||||
# })
|
||||
# except:
|
||||
# print('Load function plugin failed')
|
||||
|
||||
|
||||
|
||||
"""
|
||||
设置默认值:
|
||||
- 默认 Group = 对话
|
||||
- 默认 AsButton = True
|
||||
- 默认 AdvancedArgs = False
|
||||
- 默认 Color = secondary
|
||||
"""
|
||||
for name, function_meta in function_plugins.items():
|
||||
if "Group" not in function_meta:
|
||||
function_plugins[name]["Group"] = '对话'
|
||||
if "AsButton" not in function_meta:
|
||||
function_plugins[name]["AsButton"] = True
|
||||
if "AdvancedArgs" not in function_meta:
|
||||
function_plugins[name]["AdvancedArgs"] = False
|
||||
if "Color" not in function_meta:
|
||||
function_plugins[name]["Color"] = 'secondary'
|
||||
|
||||
###################### 第n组插件 ###########################
|
||||
return function_plugins
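A hedged sketch of what a user-added entry inside `get_crazy_functions()` could look like (the plugin name and `Info` text are made up); any key omitted here is filled in by the defaults loop above:

```
function_plugins.update({
    "我的自定义插件(示例)": {
        "Group": "编程",                           # otherwise defaults to '对话'
        "Info": "示例插件 | 输入参数为路径",         # description shown in the UI
        "Function": HotReload(解析一个Python项目),  # reusing an existing plugin function for illustration
        # AsButton / AdvancedArgs / Color fall back to True / False / 'secondary'
    }
})
```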
|
||||
|
||||
107
crazy_functions/Langchain知识库.py
Normal file
@@ -0,0 +1,107 @@
|
||||
from toolbox import CatchException, update_ui, ProxyNetworkActivate
|
||||
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, get_files_from_everything
|
||||
|
||||
|
||||
|
||||
@CatchException
|
||||
def 知识库问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
"""
|
||||
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
|
||||
llm_kwargs gpt模型参数, 如温度和top_p等, 一般原样传递下去就行
|
||||
plugin_kwargs 插件模型的参数,暂时没有用武之地
|
||||
chatbot 聊天显示框的句柄,用于显示给用户
|
||||
history 聊天历史,前情提要
|
||||
system_prompt 给gpt的静默提醒
|
||||
web_port 当前软件运行的端口号
|
||||
"""
|
||||
history = [] # 清空历史,以免输入溢出
|
||||
chatbot.append(("这是什么功能?", "[Local Message] 从一批文件(txt, md, tex)中读取数据构建知识库, 然后进行问答。"))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
# resolve deps
|
||||
try:
|
||||
from zh_langchain import construct_vector_store
|
||||
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
|
||||
from .crazy_utils import knowledge_archive_interface
|
||||
except Exception as e:
|
||||
chatbot.append(
|
||||
["依赖不足",
|
||||
"导入依赖失败。正在尝试自动安装,请查看终端的输出或耐心等待..."]
|
||||
)
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
from .crazy_utils import try_install_deps
|
||||
try_install_deps(['zh_langchain==0.2.1', 'pypinyin'])
|
||||
|
||||
# < --------------------读取参数--------------- >
|
||||
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
|
||||
kai_id = plugin_kwargs.get("advanced_arg", 'default')
|
||||
|
||||
# < --------------------读取文件--------------- >
|
||||
file_manifest = []
|
||||
spl = ["txt", "doc", "docx", "email", "epub", "html", "json", "md", "msg", "pdf", "ppt", "pptx", "rtf"]
|
||||
for sp in spl:
|
||||
_, file_manifest_tmp, _ = get_files_from_everything(txt, type=f'.{sp}')
|
||||
file_manifest += file_manifest_tmp
|
||||
|
||||
if len(file_manifest) == 0:
|
||||
chatbot.append(["没有找到任何可读取文件", "当前支持的格式包括: txt, md, docx, pptx, pdf, json等"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
# < -------------------预热文本向量化模组--------------- >
|
||||
chatbot.append(['<br/>'.join(file_manifest), "正在预热文本向量化模组, 如果是第一次运行, 将消耗较长时间下载中文向量化模型..."])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
print('Checking Text2vec ...')
|
||||
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
|
||||
with ProxyNetworkActivate(): # 临时地激活代理网络
|
||||
HuggingFaceEmbeddings(model_name="GanymedeNil/text2vec-large-chinese")
|
||||
|
||||
# < -------------------构建知识库--------------- >
|
||||
chatbot.append(['<br/>'.join(file_manifest), "正在构建知识库..."])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
print('Establishing knowledge archive ...')
|
||||
with ProxyNetworkActivate(): # 临时地激活代理网络
|
||||
kai = knowledge_archive_interface()
|
||||
kai.feed_archive(file_manifest=file_manifest, id=kai_id)
|
||||
kai_files = kai.get_loaded_file()
|
||||
kai_files = '<br/>'.join(kai_files)
|
||||
# chatbot.append(['知识库构建成功', "正在将知识库存储至cookie中"])
|
||||
# yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
# chatbot._cookies['langchain_plugin_embedding'] = kai.get_current_archive_id()
|
||||
# chatbot._cookies['lock_plugin'] = 'crazy_functions.Langchain知识库->读取知识库作答'
|
||||
# chatbot.append(['完成', "“根据知识库作答”函数插件已经接管问答系统, 提问吧! 但注意, 您接下来不能再使用其他插件了,刷新页面即可以退出知识库问答模式。"])
|
||||
chatbot.append(['构建完成', f"当前知识库内的有效文件:\n\n---\n\n{kai_files}\n\n---\n\n请切换至“知识库问答”插件进行知识库访问, 或者使用此插件继续上传更多文件。"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
|
||||
|
||||
@CatchException
|
||||
def 读取知识库作答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port=-1):
|
||||
# resolve deps
|
||||
try:
|
||||
from zh_langchain import construct_vector_store
|
||||
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
|
||||
from .crazy_utils import knowledge_archive_interface
|
||||
except Exception as e:
|
||||
chatbot.append(["依赖不足", "导入依赖失败。正在尝试自动安装,请查看终端的输出或耐心等待..."])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
from .crazy_utils import try_install_deps
|
||||
try_install_deps(['zh_langchain==0.2.1'])
|
||||
|
||||
# < ------------------- --------------- >
|
||||
kai = knowledge_archive_interface()
|
||||
|
||||
if 'langchain_plugin_embedding' in chatbot._cookies:
|
||||
resp, prompt = kai.answer_with_archive_by_id(txt, chatbot._cookies['langchain_plugin_embedding'])
|
||||
else:
|
||||
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
|
||||
kai_id = plugin_kwargs.get("advanced_arg", 'default')
|
||||
resp, prompt = kai.answer_with_archive_by_id(txt, kai_id)
|
||||
|
||||
chatbot.append((txt, '[Local Message] ' + prompt))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
|
||||
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
||||
inputs=prompt, inputs_show_user=txt,
|
||||
llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
|
||||
sys_prompt=system_prompt
|
||||
)
|
||||
history.extend((prompt, gpt_say))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
|
||||
@@ -1,6 +1,6 @@
|
||||
from toolbox import update_ui
|
||||
from toolbox import CatchException, report_execption, write_results_to_file
|
||||
fast_debug = False
|
||||
from toolbox import update_ui, trimmed_format_exc
|
||||
from toolbox import CatchException, report_execption, write_results_to_file, zip_folder
|
||||
|
||||
|
||||
class PaperFileGroup():
|
||||
def __init__(self):
|
||||
@@ -34,8 +34,27 @@ class PaperFileGroup():
|
||||
self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex")
|
||||
|
||||
print('Segmentation: done')
|
||||
def merge_result(self):
|
||||
self.file_result = ["" for _ in range(len(self.file_paths))]
|
||||
for r, k in zip(self.sp_file_result, self.sp_file_index):
|
||||
self.file_result[k] += r
|
||||
|
||||
def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'):
|
||||
def write_result(self):
|
||||
manifest = []
|
||||
for path, res in zip(self.file_paths, self.file_result):
|
||||
with open(path + '.polish.tex', 'w', encoding='utf8') as f:
|
||||
manifest.append(path + '.polish.tex')
|
||||
f.write(res)
|
||||
return manifest
|
||||
|
||||
def zip_result(self):
|
||||
import os, time
|
||||
folder = os.path.dirname(self.file_paths[0])
|
||||
t = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())
|
||||
zip_folder(folder, './gpt_log/', f'{t}-polished.zip')
|
||||
|
||||
|
||||
def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en', mode='polish'):
|
||||
import time, os, re
|
||||
from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
|
||||
|
||||
@@ -47,7 +66,7 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
|
||||
with open(fp, 'r', encoding='utf-8', errors='replace') as f:
|
||||
file_content = f.read()
|
||||
# 定义注释的正则表达式
|
||||
comment_pattern = r'%.*'
|
||||
comment_pattern = r'(?<!\\)%.*'
|
||||
# 使用正则表达式查找注释,并替换为空字符串
|
||||
clean_tex_content = re.sub(comment_pattern, '', file_content)
|
||||
# 记录删除注释后的文本
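A quick check of the regex change above: the negative lookbehind keeps escaped percent signs (`\%`) while still stripping real LaTeX comments (the sample string is made up):

```
import re

tex = r"accuracy of 95\% achieved  % TODO: re-run with more seeds"
print(re.sub(r'%.*', '', tex))          # old pattern also removes the escaped '\%' and everything after it
print(re.sub(r'(?<!\\)%.*', '', tex))   # new pattern keeps '95\%' and drops only the trailing comment
```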
|
||||
@@ -58,28 +77,27 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
|
||||
pfg.run_file_split(max_token_limit=1024)
|
||||
n_split = len(pfg.sp_file_contents)
|
||||
|
||||
# <-------- 抽取摘要 ---------->
|
||||
# if language == 'en':
|
||||
# abs_extract_inputs = f"Please write an abstract for this paper"
|
||||
|
||||
# # 单线,获取文章meta信息
|
||||
# paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
||||
# inputs=abs_extract_inputs,
|
||||
# inputs_show_user=f"正在抽取摘要信息。",
|
||||
# llm_kwargs=llm_kwargs,
|
||||
# chatbot=chatbot, history=[],
|
||||
# sys_prompt="Your job is to collect information from materials。",
|
||||
# )
|
||||
|
||||
# <-------- 多线程润色开始 ---------->
|
||||
if language == 'en':
|
||||
inputs_array = ["Below is a section from an academic paper, polish this section to meet the academic standard, improve the grammar, clarity and overall readability, do not modify any latex command such as \section, \cite and equations:" +
|
||||
if mode == 'polish':
|
||||
inputs_array = ["Below is a section from an academic paper, polish this section to meet the academic standard, " +
|
||||
"improve the grammar, clarity and overall readability, do not modify any latex command such as \section, \cite and equations:" +
|
||||
f"\n\n{frag}" for frag in pfg.sp_file_contents]
|
||||
else:
|
||||
inputs_array = [r"Below is a section from an academic paper, proofread this section." +
|
||||
r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " +
|
||||
r"Answer me only with the revised text:" +
|
||||
f"\n\n{frag}" for frag in pfg.sp_file_contents]
|
||||
inputs_show_user_array = [f"Polish {f}" for f in pfg.sp_file_tag]
|
||||
sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)]
|
||||
elif language == 'zh':
|
||||
inputs_array = [f"以下是一篇学术论文中的一段内容,请将此部分润色以满足学术标准,提高语法、清晰度和整体可读性,不要修改任何LaTeX命令,例如\section,\cite和方程式:" +
|
||||
f"\n\n{frag}" for frag in pfg.sp_file_contents]
|
||||
if mode == 'polish':
|
||||
inputs_array = [f"以下是一篇学术论文中的一段内容,请将此部分润色以满足学术标准,提高语法、清晰度和整体可读性,不要修改任何LaTeX命令,例如\section,\cite和方程式:" +
|
||||
f"\n\n{frag}" for frag in pfg.sp_file_contents]
|
||||
else:
|
||||
inputs_array = [f"以下是一篇学术论文中的一段内容,请对这部分内容进行语法矫正。不要修改任何LaTeX命令,例如\section,\cite和方程式:" +
|
||||
f"\n\n{frag}" for frag in pfg.sp_file_contents]
|
||||
inputs_show_user_array = [f"润色 {f}" for f in pfg.sp_file_tag]
|
||||
sys_prompt_array=["你是一位专业的中文学术论文作家。" for _ in range(n_split)]
|
||||
|
||||
@@ -95,6 +113,17 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
|
||||
scroller_max_len = 80
|
||||
)
|
||||
|
||||
# <-------- 文本碎片重组为完整的tex文件,整理结果为压缩包 ---------->
|
||||
try:
|
||||
pfg.sp_file_result = []
|
||||
for i_say, gpt_say in zip(gpt_response_collection[0::2], gpt_response_collection[1::2]):
|
||||
pfg.sp_file_result.append(gpt_say)
|
||||
pfg.merge_result()
|
||||
pfg.write_result()
|
||||
pfg.zip_result()
|
||||
except:
|
||||
print(trimmed_format_exc())
|
||||
|
||||
# <-------- 整理结果,退出 ---------->
|
||||
create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md"
|
||||
res = write_results_to_file(gpt_response_collection, file_name=create_report_file_name)
|
||||
@@ -173,3 +202,42 @@ def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='zh')
|
||||
|
||||
|
||||
|
||||
|
||||
@CatchException
|
||||
def Latex英文纠错(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
# 基本信息:功能、贡献者
|
||||
chatbot.append([
|
||||
"函数插件功能?",
|
||||
"对整个Latex项目进行纠错。函数插件贡献者: Binary-Husky"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
# 尝试导入依赖,如果缺少依赖,则给出安装建议
|
||||
try:
|
||||
import tiktoken
|
||||
except:
|
||||
report_execption(chatbot, history,
|
||||
a=f"解析项目: {txt}",
|
||||
b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
history = [] # 清空历史,以免输入溢出
|
||||
import glob, os
|
||||
if os.path.exists(txt):
|
||||
project_folder = txt
|
||||
else:
|
||||
if txt == "": txt = '空空如也的输入栏'
|
||||
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
|
||||
if len(file_manifest) == 0:
|
||||
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en', mode='proofread')
|
||||
|
||||
|
||||
|
||||
|
||||
@@ -46,7 +46,7 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
|
||||
with open(fp, 'r', encoding='utf-8', errors='replace') as f:
|
||||
file_content = f.read()
|
||||
# 定义注释的正则表达式
|
||||
comment_pattern = r'%.*'
|
||||
comment_pattern = r'(?<!\\)%.*'
|
||||
# 使用正则表达式查找注释,并替换为空字符串
|
||||
clean_tex_content = re.sub(comment_pattern, '', file_content)
|
||||
# 记录删除注释后的文本
|
||||
|
||||
300
crazy_functions/Latex输出PDF结果.py
Normal file
@@ -0,0 +1,300 @@
|
||||
from toolbox import update_ui, trimmed_format_exc, get_conf, objdump, objload, promote_file_to_downloadzone
|
||||
from toolbox import CatchException, report_execption, update_ui_lastest_msg, zip_result, gen_time_str
|
||||
from functools import partial
|
||||
import glob, os, requests, time
|
||||
pj = os.path.join
|
||||
ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/")
|
||||
|
||||
# =================================== 工具函数 ===============================================
|
||||
# 专业词汇声明 = 'If the term "agent" is used in this section, it should be translated to "智能体". '
|
||||
def switch_prompt(pfg, mode, more_requirement):
|
||||
"""
|
||||
Generate prompts and system prompts based on the mode for proofreading or translating.
|
||||
Args:
|
||||
- pfg: Proofreader or Translator instance.
|
||||
- mode: A string specifying the mode, either 'proofread' or 'translate_zh'.
|
||||
|
||||
Returns:
|
||||
- inputs_array: A list of strings containing prompts for users to respond to.
|
||||
- sys_prompt_array: A list of strings containing prompts for system prompts.
|
||||
"""
|
||||
n_split = len(pfg.sp_file_contents)
|
||||
if mode == 'proofread_en':
|
||||
inputs_array = [r"Below is a section from an academic paper, proofread this section." +
|
||||
r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " + more_requirement +
|
||||
r"Answer me only with the revised text:" +
|
||||
f"\n\n{frag}" for frag in pfg.sp_file_contents]
|
||||
sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)]
|
||||
elif mode == 'translate_zh':
|
||||
inputs_array = [r"Below is a section from an English academic paper, translate it into Chinese. " + more_requirement +
|
||||
r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " +
|
||||
r"Answer me only with the translated text:" +
|
||||
f"\n\n{frag}" for frag in pfg.sp_file_contents]
|
||||
sys_prompt_array = ["You are a professional translator." for _ in range(n_split)]
|
||||
else:
|
||||
assert False, "未知指令"
|
||||
return inputs_array, sys_prompt_array
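A hedged usage sketch of `switch_prompt`, assuming it runs in the same module (`crazy_functions/Latex输出PDF结果.py`); `FragStub` is a stand-in for the real paper-fragment object, which only needs a `sp_file_contents` list here:

```
from functools import partial

class FragStub:                          # hypothetical stand-in, not part of the project
    sp_file_contents = ["First section text ...", "Second section text ..."]

_switch_prompt_ = partial(
    switch_prompt,
    more_requirement='If the term "agent" is used in this section, it should be translated to "智能体". ')
inputs_array, sys_prompt_array = _switch_prompt_(FragStub(), mode='translate_zh')
# One prompt per fragment, each carrying the extra requirement inserted above.
```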
|
||||
|
||||
def desend_to_extracted_folder_if_exist(project_folder):
|
||||
"""
|
||||
Descend into the extracted folder if it exists, otherwise return the original folder.
|
||||
|
||||
Args:
|
||||
- project_folder: A string specifying the folder path.
|
||||
|
||||
Returns:
|
||||
- A string specifying the path to the extracted folder, or the original folder if there is no extracted folder.
|
||||
"""
|
||||
maybe_dir = [f for f in glob.glob(f'{project_folder}/*') if os.path.isdir(f)]
|
||||
if len(maybe_dir) == 0: return project_folder
|
||||
if maybe_dir[0].endswith('.extract'): return maybe_dir[0]
|
||||
return project_folder
|
||||
|
||||
def move_project(project_folder, arxiv_id=None):
|
||||
"""
|
||||
Create a new work folder and copy the project folder to it.
|
||||
|
||||
Args:
|
||||
- project_folder: A string specifying the folder path of the project.
|
||||
|
||||
Returns:
|
||||
- A string specifying the path to the new work folder.
|
||||
"""
|
||||
import shutil, time
|
||||
time.sleep(2) # avoid time string conflict
|
||||
if arxiv_id is not None:
|
||||
new_workfolder = pj(ARXIV_CACHE_DIR, arxiv_id, 'workfolder')
|
||||
else:
|
||||
new_workfolder = f'gpt_log/{gen_time_str()}'
|
||||
try:
|
||||
shutil.rmtree(new_workfolder)
|
||||
except:
|
||||
pass
|
||||
|
||||
# align subfolder if there is a folder wrapper
|
||||
items = glob.glob(pj(project_folder,'*'))
|
||||
if len(glob.glob(pj(project_folder,'*.tex'))) == 0 and len(items) == 1:
|
||||
if os.path.isdir(items[0]): project_folder = items[0]
|
||||
|
||||
shutil.copytree(src=project_folder, dst=new_workfolder)
|
||||
return new_workfolder
|
||||
|
||||
def arxiv_download(chatbot, history, txt):
|
||||
def check_cached_translation_pdf(arxiv_id):
|
||||
translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'translation')
|
||||
if not os.path.exists(translation_dir):
|
||||
os.makedirs(translation_dir)
|
||||
target_file = pj(translation_dir, 'translate_zh.pdf')
|
||||
if os.path.exists(target_file):
|
||||
promote_file_to_downloadzone(target_file, rename_file=None, chatbot=chatbot)
|
||||
return target_file
|
||||
return False
|
||||
def is_float(s):
|
||||
try:
|
||||
float(s)
|
||||
return True
|
||||
except ValueError:
|
||||
return False
|
||||
if ('.' in txt) and ('/' not in txt) and is_float(txt): # is arxiv ID
|
||||
txt = 'https://arxiv.org/abs/' + txt.strip()
|
||||
if ('.' in txt) and ('/' not in txt) and is_float(txt[:10]): # is arxiv ID
|
||||
txt = 'https://arxiv.org/abs/' + txt[:10]
|
||||
if not txt.startswith('https://arxiv.org'):
|
||||
return txt, None
|
||||
|
||||
# <-------------- inspect format ------------->
|
||||
chatbot.append([f"检测到arxiv文档连接", '尝试下载 ...'])
|
||||
yield from update_ui(chatbot=chatbot, history=history)
|
||||
time.sleep(1) # 刷新界面
|
||||
|
||||
url_ = txt # https://arxiv.org/abs/1707.06690
|
||||
if not txt.startswith('https://arxiv.org/abs/'):
|
||||
msg = f"解析arxiv网址失败, 期望格式例如: https://arxiv.org/abs/1707.06690。实际得到格式: {url_}"
|
||||
yield from update_ui_lastest_msg(msg, chatbot=chatbot, history=history) # 刷新界面
|
||||
return msg, None
|
||||
# <-------------- set format ------------->
|
||||
arxiv_id = url_.split('/abs/')[-1]
|
||||
if 'v' in arxiv_id: arxiv_id = arxiv_id[:10]
|
||||
cached_translation_pdf = check_cached_translation_pdf(arxiv_id)
|
||||
if cached_translation_pdf: return cached_translation_pdf, arxiv_id
|
||||
|
||||
url_tar = url_.replace('/abs/', '/e-print/')
|
||||
translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'e-print')
|
||||
extract_dst = pj(ARXIV_CACHE_DIR, arxiv_id, 'extract')
|
||||
os.makedirs(translation_dir, exist_ok=True)
|
||||
|
||||
# <-------------- download arxiv source file ------------->
|
||||
dst = pj(translation_dir, arxiv_id+'.tar')
|
||||
if os.path.exists(dst):
|
||||
yield from update_ui_lastest_msg("调用缓存", chatbot=chatbot, history=history) # 刷新界面
|
||||
else:
|
||||
yield from update_ui_lastest_msg("开始下载", chatbot=chatbot, history=history) # 刷新界面
|
||||
proxies, = get_conf('proxies')
|
||||
r = requests.get(url_tar, proxies=proxies)
|
||||
with open(dst, 'wb+') as f:
|
||||
f.write(r.content)
|
||||
# <-------------- extract file ------------->
|
||||
yield from update_ui_lastest_msg("下载完成", chatbot=chatbot, history=history) # 刷新界面
|
||||
from toolbox import extract_archive
|
||||
extract_archive(file_path=dst, dest_dir=extract_dst)
|
||||
return extract_dst, arxiv_id
|
||||
# ========================================= 插件主程序1 =====================================================
|
||||
|
||||
|
||||
@CatchException
|
||||
def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
# <-------------- information about this plugin ------------->
|
||||
chatbot.append([ "函数插件功能?",
|
||||
"对整个Latex项目进行纠错, 用latex编译为PDF对修正处做高亮。函数插件贡献者: Binary-Husky。注意事项: 目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。仅在Windows系统进行了测试,其他操作系统表现未知。"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
# <-------------- more requirements ------------->
|
||||
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
|
||||
more_req = plugin_kwargs.get("advanced_arg", "")
|
||||
_switch_prompt_ = partial(switch_prompt, more_requirement=more_req)
|
||||
|
||||
# <-------------- check deps ------------->
|
||||
try:
|
||||
import glob, os, time, subprocess
|
||||
subprocess.Popen(['pdflatex', '-version'])
|
||||
from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
|
||||
except Exception as e:
|
||||
chatbot.append([ f"解析项目: {txt}",
|
||||
f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
|
||||
# <-------------- clear history and read input ------------->
|
||||
history = []
|
||||
if os.path.exists(txt):
|
||||
project_folder = txt
|
||||
else:
|
||||
if txt == "": txt = '空空如也的输入栏'
|
||||
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
|
||||
if len(file_manifest) == 0:
|
||||
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
|
||||
# <-------------- if is a zip/tar file ------------->
|
||||
project_folder = desend_to_extracted_folder_if_exist(project_folder)
|
||||
|
||||
|
||||
# <-------------- move latex project away from temp folder ------------->
|
||||
project_folder = move_project(project_folder, arxiv_id=None)
|
||||
|
||||
|
||||
# <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
|
||||
if not os.path.exists(project_folder + '/merge_proofread_en.tex'):
|
||||
yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
|
||||
chatbot, history, system_prompt, mode='proofread_en', switch_prompt=_switch_prompt_)
|
||||
|
||||
|
||||
# <-------------- compile PDF ------------->
|
||||
success = yield from 编译Latex(chatbot, history, main_file_original='merge', main_file_modified='merge_proofread_en',
|
||||
work_folder_original=project_folder, work_folder_modified=project_folder, work_folder=project_folder)
|
||||
|
||||
|
||||
# <-------------- zip PDF ------------->
|
||||
zip_res = zip_result(project_folder)
|
||||
if success:
|
||||
chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
|
||||
yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
|
||||
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
|
||||
else:
|
||||
chatbot.append((f"失败了", '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 也是可读的, 您可以到Github Issue区, 用该压缩包+对话历史存档进行反馈 ...'))
|
||||
yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
|
||||
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
|
||||
|
||||
# <-------------- we are done ------------->
|
||||
return success
|
||||
|
||||
|
||||
# ========================================= 插件主程序2 =====================================================
|
||||
|
||||
@CatchException
|
||||
def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
# <-------------- information about this plugin ------------->
|
||||
chatbot.append([
|
||||
"函数插件功能?",
|
||||
"对整个Latex项目进行翻译, 生成中文PDF。函数插件贡献者: Binary-Husky。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
# <-------------- more requirements ------------->
|
||||
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
|
||||
more_req = plugin_kwargs.get("advanced_arg", "")
|
||||
_switch_prompt_ = partial(switch_prompt, more_requirement=more_req)
|
||||
|
||||
# <-------------- check deps ------------->
|
||||
try:
|
||||
import glob, os, time, subprocess
|
||||
subprocess.Popen(['pdflatex', '-version'])
|
||||
from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
|
||||
except Exception as e:
|
||||
chatbot.append([ f"解析项目: {txt}",
|
||||
f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
|
||||
# <-------------- clear history and read input ------------->
|
||||
history = []
|
||||
txt, arxiv_id = yield from arxiv_download(chatbot, history, txt)
|
||||
if txt.endswith('.pdf'):
|
||||
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"发现已经存在翻译好的PDF文档")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
|
||||
if os.path.exists(txt):
|
||||
project_folder = txt
|
||||
else:
|
||||
if txt == "": txt = '空空如也的输入栏'
|
||||
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
|
||||
if len(file_manifest) == 0:
|
||||
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
|
||||
# <-------------- if is a zip/tar file ------------->
|
||||
project_folder = desend_to_extracted_folder_if_exist(project_folder)
|
||||
|
||||
|
||||
# <-------------- move latex project away from temp folder ------------->
|
||||
project_folder = move_project(project_folder, arxiv_id)
|
||||
|
||||
|
||||
# <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
|
||||
if not os.path.exists(project_folder + '/merge_translate_zh.tex'):
|
||||
yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
|
||||
chatbot, history, system_prompt, mode='translate_zh', switch_prompt=_switch_prompt_)
|
||||
|
||||
|
||||
# <-------------- compile PDF ------------->
|
||||
success = yield from 编译Latex(chatbot, history, main_file_original='merge', main_file_modified='merge_translate_zh', mode='translate_zh',
|
||||
work_folder_original=project_folder, work_folder_modified=project_folder, work_folder=project_folder)
|
||||
|
||||
# <-------------- zip PDF ------------->
|
||||
zip_res = zip_result(project_folder)
|
||||
if success:
|
||||
chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
|
||||
yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
|
||||
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
|
||||
else:
|
||||
chatbot.append((f"失败了", '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体(见Github wiki) ...'))
|
||||
yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
|
||||
promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
|
||||
|
||||
|
||||
# <-------------- we are done ------------->
|
||||
return success
|
||||
141
crazy_functions/chatglm微调工具.py
Normal file
@@ -0,0 +1,141 @@
|
||||
from toolbox import CatchException, update_ui, promote_file_to_downloadzone
|
||||
from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
|
||||
import datetime, json
|
||||
|
||||
def fetch_items(list_of_items, batch_size):
|
||||
for i in range(0, len(list_of_items), batch_size):
|
||||
yield list_of_items[i:i + batch_size]
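A quick illustration of the batching behaviour (the values are arbitrary):

for batch in fetch_items(list(range(7)), batch_size=3):
    print(batch)  # [0, 1, 2], then [3, 4, 5], then [6]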
|
||||
|
||||
def string_to_options(arguments):
|
||||
import argparse
|
||||
import shlex
|
||||
|
||||
# Create an argparse.ArgumentParser instance
|
||||
parser = argparse.ArgumentParser()
|
||||
|
||||
# Add command-line arguments
|
||||
parser.add_argument("--llm_to_learn", type=str, help="LLM model to learn", default="gpt-3.5-turbo")
|
||||
parser.add_argument("--prompt_prefix", type=str, help="Prompt prefix", default='')
|
||||
parser.add_argument("--system_prompt", type=str, help="System prompt", default='')
|
||||
parser.add_argument("--batch", type=int, help="System prompt", default=50)
|
||||
parser.add_argument("--pre_seq_len", type=int, help="pre_seq_len", default=50)
|
||||
parser.add_argument("--learning_rate", type=float, help="learning_rate", default=2e-2)
|
||||
parser.add_argument("--num_gpus", type=int, help="num_gpus", default=1)
|
||||
parser.add_argument("--json_dataset", type=str, help="json_dataset", default="")
|
||||
parser.add_argument("--ptuning_directory", type=str, help="ptuning_directory", default="")
|
||||
|
||||
|
||||
|
||||
# Parse the arguments
|
||||
args = parser.parse_args(shlex.split(arguments))
|
||||
|
||||
return args
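A hedged example of the kind of advanced-arg string this parser expects; the values below are placeholders, not recommendations:

opts = string_to_options("--llm_to_learn gpt-3.5-turbo --batch 32 --json_dataset t_code.json")
print(opts.batch)          # 32
print(opts.prompt_prefix)  # '' (falls back to the default)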
|
||||
|
||||
@CatchException
|
||||
def 微调数据集生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
"""
|
||||
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
|
||||
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
|
||||
plugin_kwargs 插件模型的参数
|
||||
chatbot 聊天显示框的句柄,用于显示给用户
|
||||
history 聊天历史,前情提要
|
||||
system_prompt 给gpt的静默提醒
|
||||
web_port 当前软件运行的端口号
|
||||
"""
|
||||
history = [] # 清空历史,以免输入溢出
|
||||
chatbot.append(("这是什么功能?", "[Local Message] 微调数据集生成"))
|
||||
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
|
||||
args = plugin_kwargs.get("advanced_arg", None)
|
||||
if args is None:
|
||||
chatbot.append(("没给定指令", "退出"))
|
||||
yield from update_ui(chatbot=chatbot, history=history); return
|
||||
else:
|
||||
arguments = string_to_options(arguments=args)
|
||||
|
||||
dat = []
|
||||
with open(txt, 'r', encoding='utf8') as f:
|
||||
for line in f.readlines():
|
||||
json_dat = json.loads(line)
|
||||
dat.append(json_dat["content"])
|
||||
|
||||
llm_kwargs['llm_model'] = arguments.llm_to_learn
|
||||
for batch in fetch_items(dat, arguments.batch):
|
||||
res = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
|
||||
inputs_array=[f"{arguments.prompt_prefix}\n\n{b}" for b in (batch)],
|
||||
inputs_show_user_array=[f"Show Nothing" for _ in (batch)],
|
||||
llm_kwargs=llm_kwargs,
|
||||
chatbot=chatbot,
|
||||
history_array=[[] for _ in (batch)],
|
||||
sys_prompt_array=[arguments.system_prompt for _ in (batch)],
|
||||
max_workers=10 # OpenAI所允许的最大并行过载
|
||||
)
|
||||
|
||||
with open(txt+'.generated.json', 'a+', encoding='utf8') as f:
|
||||
for b, r in zip(batch, res[1::2]):
|
||||
f.write(json.dumps({"content":b, "summary":r}, ensure_ascii=False)+'\n')
|
||||
|
||||
promote_file_to_downloadzone(txt+'.generated.json', rename_file='generated.json', chatbot=chatbot)
|
||||
return
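For reference, a sketch of the JSONL shapes this plugin reads and writes; the example text is invented. Each input line carries a "content" field, and each generated line pairs the original content with the model's "summary".

import json
input_line  = json.dumps({"content": "待学习的原始文本"}, ensure_ascii=False)
output_line = json.dumps({"content": "待学习的原始文本", "summary": "<模型生成的摘要>"}, ensure_ascii=False)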
|
||||
|
||||
|
||||
|
||||
@CatchException
|
||||
def 启动微调(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
"""
|
||||
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
|
||||
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
|
||||
plugin_kwargs 插件模型的参数
|
||||
chatbot 聊天显示框的句柄,用于显示给用户
|
||||
history 聊天历史,前情提要
|
||||
system_prompt 给gpt的静默提醒
|
||||
web_port 当前软件运行的端口号
|
||||
"""
|
||||
import subprocess
|
||||
history = [] # 清空历史,以免输入溢出
|
||||
chatbot.append(("这是什么功能?", "[Local Message] 微调数据集生成"))
|
||||
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
|
||||
args = plugin_kwargs.get("advanced_arg", None)
|
||||
if args is None:
|
||||
chatbot.append(("没给定指令", "退出"))
|
||||
yield from update_ui(chatbot=chatbot, history=history); return
|
||||
else:
|
||||
arguments = string_to_options(arguments=args)
|
||||
|
||||
|
||||
|
||||
pre_seq_len = arguments.pre_seq_len # 128
|
||||
learning_rate = arguments.learning_rate # 2e-2
|
||||
num_gpus = arguments.num_gpus # 1
|
||||
json_dataset = arguments.json_dataset # 't_code.json'
|
||||
ptuning_directory = arguments.ptuning_directory # '/home/hmp/ChatGLM2-6B/ptuning'
|
||||
|
||||
command = f"torchrun --standalone --nnodes=1 --nproc-per-node={num_gpus} main.py \
|
||||
--do_train \
|
||||
--train_file AdvertiseGen/{json_dataset} \
|
||||
--validation_file AdvertiseGen/{json_dataset} \
|
||||
--preprocessing_num_workers 20 \
|
||||
--prompt_column content \
|
||||
--response_column summary \
|
||||
--overwrite_cache \
|
||||
--model_name_or_path THUDM/chatglm2-6b \
|
||||
--output_dir output/clothgen-chatglm2-6b-pt-{pre_seq_len}-{learning_rate} \
|
||||
--overwrite_output_dir \
|
||||
--max_source_length 256 \
|
||||
--max_target_length 256 \
|
||||
--per_device_train_batch_size 1 \
|
||||
--per_device_eval_batch_size 1 \
|
||||
--gradient_accumulation_steps 16 \
|
||||
--predict_with_generate \
|
||||
--max_steps 100 \
|
||||
--logging_steps 10 \
|
||||
--save_steps 20 \
|
||||
--learning_rate {learning_rate} \
|
||||
--pre_seq_len {pre_seq_len} \
|
||||
--quantization_bit 4"
|
||||
|
||||
process = subprocess.Popen(command, shell=True, cwd=ptuning_directory)
|
||||
try:
|
||||
process.communicate(timeout=3600*24)
|
||||
except subprocess.TimeoutExpired:
|
||||
process.kill()
|
||||
return
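A hedged example of the advanced-arg string that would drive this launcher; the dataset name echoes the comments above, and the directory is a placeholder that must point at a real ChatGLM2-6B ptuning checkout:

advanced_arg = ("--pre_seq_len 128 --learning_rate 2e-2 --num_gpus 1 "
                "--json_dataset t_code.json --ptuning_directory /path/to/ChatGLM2-6B/ptuning")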
|
||||
@@ -1,124 +0,0 @@
|
||||
"""
|
||||
这是什么?
|
||||
这个文件用于函数插件的单元测试
|
||||
运行方法 python crazy_functions/crazy_functions_test.py
|
||||
"""
|
||||
|
||||
def validate_path():
|
||||
import os, sys
|
||||
dir_name = os.path.dirname(__file__)
|
||||
root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..')
|
||||
os.chdir(root_dir_assume)
|
||||
sys.path.append(root_dir_assume)
|
||||
|
||||
validate_path() # validate path so you can run from base directory
|
||||
from colorful import *
|
||||
from toolbox import get_conf, ChatBotWithCookies
|
||||
proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \
|
||||
get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY')
|
||||
|
||||
llm_kwargs = {
|
||||
'api_key': API_KEY,
|
||||
'llm_model': LLM_MODEL,
|
||||
'top_p':1.0,
|
||||
'max_length': None,
|
||||
'temperature':1.0,
|
||||
}
|
||||
plugin_kwargs = { }
|
||||
chatbot = ChatBotWithCookies(llm_kwargs)
|
||||
history = []
|
||||
system_prompt = "Serve me as a writing and programming assistant."
|
||||
web_port = 1024
|
||||
|
||||
|
||||
def test_解析一个Python项目():
|
||||
from crazy_functions.解析项目源代码 import 解析一个Python项目
|
||||
txt = "crazy_functions/test_project/python/dqn"
|
||||
for cookies, cb, hist, msg in 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
print(cb)
|
||||
|
||||
def test_解析一个Cpp项目():
|
||||
from crazy_functions.解析项目源代码 import 解析一个C项目
|
||||
txt = "crazy_functions/test_project/cpp/cppipc"
|
||||
for cookies, cb, hist, msg in 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
print(cb)
|
||||
|
||||
def test_Latex英文润色():
|
||||
from crazy_functions.Latex全文润色 import Latex英文润色
|
||||
txt = "crazy_functions/test_project/latex/attention"
|
||||
for cookies, cb, hist, msg in Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
print(cb)
|
||||
|
||||
def test_Markdown中译英():
|
||||
from crazy_functions.批量Markdown翻译 import Markdown中译英
|
||||
txt = "README.md"
|
||||
for cookies, cb, hist, msg in Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
print(cb)
|
||||
|
||||
def test_批量翻译PDF文档():
|
||||
from crazy_functions.批量翻译PDF文档_多线程 import 批量翻译PDF文档
|
||||
txt = "crazy_functions/test_project/pdf_and_word"
|
||||
for cookies, cb, hist, msg in 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
print(cb)
|
||||
|
||||
def test_谷歌检索小助手():
|
||||
from crazy_functions.谷歌检索小助手 import 谷歌检索小助手
|
||||
txt = "https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=auto+reinforcement+learning&btnG="
|
||||
for cookies, cb, hist, msg in 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
print(cb)
|
||||
|
||||
def test_总结word文档():
|
||||
from crazy_functions.总结word文档 import 总结word文档
|
||||
txt = "crazy_functions/test_project/pdf_and_word"
|
||||
for cookies, cb, hist, msg in 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
print(cb)
|
||||
|
||||
def test_下载arxiv论文并翻译摘要():
|
||||
from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要
|
||||
txt = "1812.10695"
|
||||
for cookies, cb, hist, msg in 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
print(cb)
|
||||
|
||||
def test_联网回答问题():
|
||||
from crazy_functions.联网的ChatGPT import 连接网络回答问题
|
||||
# txt = "“我们称之为高效”是什么梗?"
|
||||
# >> 从第0份、第1份、第2份搜索结果可以看出,“我们称之为高效”是指在游戏社区中,用户们用来形容一些游戏策略或行为非常高效且能够带来好的效果的用语。这个用语最初可能是在群星(Stellaris)这个游戏里面流行起来的,后来也传播到了其他游戏中,比如巨像(Titan)等游戏。其中第1份搜索结果中的一篇文章也指出,“我们称之为高效”这 一用语来源于群星(Stellaris)游戏中的一个情节。
|
||||
# txt = "为什么说枪毙P社玩家没有一个冤枉的?"
|
||||
# >> 它们都是关于一个知乎用户所发的帖子,引用了一群游戏玩家对于需要对P社玩家进行枪毙的讨论,这个话题的本质是玩家们对于P 社游戏中的政治与历史元素的不同看法,以及其中不少玩家以极端立场宣扬的想法和言论,因此有人就以枪毙这些玩家来回应此类言论。但是这个话题本身并没有实质内容,只是一个玩笑或者恶搞,并不应该被当做真实的态度或者观点,因此这种说法没有实际意义。
|
||||
# txt = "谁是应急食品?"
|
||||
# >> '根据以上搜索结果可以得知,应急食品是“原神”游戏中的角色派蒙的外号。'
|
||||
# txt = "道路千万条,安全第一条。后面两句是?"
|
||||
# >> '行车不规范,亲人两行泪。'
|
||||
# txt = "What is in the canister?"
|
||||
# >> Rainbow Six Siege 游戏中 Smoke 的 Canister 中装有何种物质相关的官方信息。
|
||||
# txt = "失败的man是什么?"
|
||||
# >> 根据第1份搜索结果,可以得知失败的man是指一位在B站购买了蜘蛛侠COS服后穿上后被网友嘲笑的UP主,而“失败的man”是蜘蛛侠英文名“spiderman”的谐音梗,并且网友们还 给这位UP主起了“苍蝇侠”的外号。因此,失败的man是指这位UP主在穿上蜘蛛侠COS服后被网友嘲笑的情况。
|
||||
# txt = "老六是什么,起源于哪里?"
|
||||
# >> 老六是网络流行语,最初起源于游戏《CSGO》,指游戏中玩家中独来独往、游离于队伍之外的“自由人”或玩得比较菜或者玩得比较阴险的人 ,后来逐渐演变成指玩得比较阴险的玩家。
|
||||
# txt = "罗小黑战记因为什么经常被吐槽?"
|
||||
# >> 3. 更新速度。罗小黑战记的更新时间不定,时而快时而慢,给观众留下了等待的时间过长的印象。
|
||||
# txt = "沙特、伊朗最近的关系如何?"
|
||||
# >> 最近在中国的斡旋下,沙特和伊朗于3月10日达成了恢复两国外交关系的协议,这表明两国关系已经重新回到正常化状态。
|
||||
# txt = "You should have gone for the head. What does that mean?"
|
||||
# >> The phrase "You should have gone for the head" is a quote from the Marvel movies, Avengers: Infinity War and Avengers: Endgame. It was spoken by the character Thanos in Infinity War and by Thor in Endgame.
|
||||
txt = "AutoGPT是什么?"
|
||||
# >> AutoGPT是一个基于GPT-4语言模型的开源应用程序。它可以根据用户需求自主执行任务,包括事件分析、营销方案撰写、代码编程、数学运算等等,并完全不需要用户插手。它可以自己思考,给出实现的步骤和实现细节,甚至可以自问自答执 行任务。最近它在GitHub上爆火,成为了业内最热门的项目之一。
|
||||
# txt = "钟离带什么圣遗物?"
|
||||
for cookies, cb, hist, msg in 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
print("当前问答:", cb[-1][-1].replace("\n"," "))
|
||||
for i, it in enumerate(cb): print亮蓝(it[0]); print亮黄(it[1])
|
||||
|
||||
# test_解析一个Python项目()
|
||||
# test_Latex英文润色()
|
||||
# test_Markdown中译英()
|
||||
# test_批量翻译PDF文档()
|
||||
# test_谷歌检索小助手()
|
||||
# test_总结word文档()
|
||||
# test_下载arxiv论文并翻译摘要()
|
||||
# test_解析一个Cpp项目()
|
||||
|
||||
test_联网回答问题()
|
||||
|
||||
|
||||
input("程序完成,回车退出。")
|
||||
print("退出。")
|
||||
@@ -1,5 +1,5 @@
|
||||
import traceback
|
||||
from toolbox import update_ui, get_conf
|
||||
from toolbox import update_ui, get_conf, trimmed_format_exc
|
||||
import threading
|
||||
|
||||
def input_clipping(inputs, history, max_token_limit):
|
||||
import numpy as np
|
||||
@@ -94,12 +94,12 @@ def request_gpt_model_in_new_thread_with_ui_alive(
|
||||
continue # 返回重试
|
||||
else:
|
||||
# 【选择放弃】
|
||||
tb_str = '```\n' + traceback.format_exc() + '```'
|
||||
tb_str = '```\n' + trimmed_format_exc() + '```'
|
||||
mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
|
||||
return mutable[0] # 放弃
|
||||
except:
|
||||
# 【第三种情况】:其他错误:重试几次
|
||||
tb_str = '```\n' + traceback.format_exc() + '```'
|
||||
tb_str = '```\n' + trimmed_format_exc() + '```'
|
||||
print(tb_str)
|
||||
mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
|
||||
if retry_op > 0:
|
||||
@@ -130,6 +130,11 @@ def request_gpt_model_in_new_thread_with_ui_alive(
|
||||
yield from update_ui(chatbot=chatbot, history=[]) # 如果最后成功了,则删除报错信息
|
||||
return final_result
|
||||
|
||||
def can_multi_process(llm):
|
||||
if llm.startswith('gpt-'): return True
|
||||
if llm.startswith('api2d-'): return True
|
||||
if llm.startswith('azure-'): return True
|
||||
return False
|
||||
|
||||
def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
|
||||
inputs_array, inputs_show_user_array, llm_kwargs,
|
||||
@@ -173,9 +178,9 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
|
||||
if max_workers == -1: # 读取配置文件
|
||||
try: max_workers, = get_conf('DEFAULT_WORKER_NUM')
|
||||
except: max_workers = 8
|
||||
if max_workers <= 0 or max_workers >= 20: max_workers = 8
|
||||
if max_workers <= 0: max_workers = 3
|
||||
# 屏蔽掉 chatglm的多线程,可能会导致严重卡顿
|
||||
if not (llm_kwargs['llm_model'].startswith('gpt-') or llm_kwargs['llm_model'].startswith('api2d-')):
|
||||
if not can_multi_process(llm_kwargs['llm_model']):
|
||||
max_workers = 1
|
||||
|
||||
executor = ThreadPoolExecutor(max_workers=max_workers)
|
||||
@@ -220,14 +225,14 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
|
||||
continue # 返回重试
|
||||
else:
|
||||
# 【选择放弃】
|
||||
tb_str = '```\n' + traceback.format_exc() + '```'
|
||||
tb_str = '```\n' + trimmed_format_exc() + '```'
|
||||
gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
|
||||
if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0]
|
||||
mutable[index][2] = "输入过长已放弃"
|
||||
return gpt_say # 放弃
|
||||
except:
|
||||
# 【第三种情况】:其他错误
|
||||
tb_str = '```\n' + traceback.format_exc() + '```'
|
||||
tb_str = '```\n' + trimmed_format_exc() + '```'
|
||||
print(tb_str)
|
||||
gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
|
||||
if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0]
|
||||
@@ -260,9 +265,6 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
|
||||
time.sleep(refresh_interval)
|
||||
cnt += 1
|
||||
worker_done = [h.done() for h in futures]
|
||||
if all(worker_done):
|
||||
executor.shutdown()
|
||||
break
|
||||
# 更好的UI视觉效果
|
||||
observe_win = []
|
||||
# 每个线程都要“喂狗”(看门狗)
|
||||
@@ -281,6 +283,9 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
|
||||
# 在前端打印些好玩的东西
|
||||
chatbot[-1] = [chatbot[-1][0], f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt % 10+1))]
|
||||
yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面
|
||||
if all(worker_done):
|
||||
executor.shutdown()
|
||||
break
|
||||
|
||||
# 异步任务结束
|
||||
gpt_response_collection = []
|
||||
@@ -564,3 +569,185 @@ def read_and_clean_pdf_text(fp):
|
||||
# print亮绿('***************************')
|
||||
|
||||
return meta_txt, page_one_meta
|
||||
|
||||
|
||||
def get_files_from_everything(txt, type): # type='.md'
|
||||
"""
|
||||
这个函数是用来获取指定目录下所有指定类型(如.md)的文件,并且对于网络上的文件,也可以获取它。
|
||||
下面是对每个参数和返回值的说明:
|
||||
参数
|
||||
- txt: 路径或网址,表示要搜索的文件或者文件夹路径或网络上的文件。
|
||||
- type: 字符串,表示要搜索的文件类型。默认是.md。
|
||||
返回值
|
||||
- success: 布尔值,表示函数是否成功执行。
|
||||
- file_manifest: 文件路径列表,里面包含以指定类型为后缀名的所有文件的绝对路径。
|
||||
- project_folder: 字符串,表示文件所在的文件夹路径。如果是网络上的文件,就是临时文件夹的路径。
|
||||
该函数详细注释已添加,请确认是否满足您的需要。
|
||||
"""
|
||||
import glob, os
|
||||
|
||||
success = True
|
||||
if txt.startswith('http'):
|
||||
# 网络的远程文件
|
||||
import requests
|
||||
from toolbox import get_conf
|
||||
proxies, = get_conf('proxies')
|
||||
r = requests.get(txt, proxies=proxies)
|
||||
with open('./gpt_log/temp'+type, 'wb+') as f: f.write(r.content)
|
||||
project_folder = './gpt_log/'
|
||||
file_manifest = ['./gpt_log/temp'+type]
|
||||
elif txt.endswith(type):
|
||||
# 直接给定文件
|
||||
file_manifest = [txt]
|
||||
project_folder = os.path.dirname(txt)
|
||||
elif os.path.exists(txt):
|
||||
# 本地路径,递归搜索
|
||||
project_folder = txt
|
||||
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*'+type, recursive=True)]
|
||||
if len(file_manifest) == 0:
|
||||
success = False
|
||||
else:
|
||||
project_folder = None
|
||||
file_manifest = []
|
||||
success = False
|
||||
|
||||
return success, file_manifest, project_folder
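A minimal usage sketch (the file name is a placeholder):

success, file_manifest, project_folder = get_files_from_everything("README.md", type=".md")
if success:
    print(file_manifest, project_folder)  # ['README.md'] and its containing directory ('' for a bare filename)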
|
||||
|
||||
|
||||
|
||||
|
||||
def Singleton(cls):
|
||||
_instance = {}
|
||||
|
||||
def _singleton(*args, **kargs):
|
||||
if cls not in _instance:
|
||||
_instance[cls] = cls(*args, **kargs)
|
||||
return _instance[cls]
|
||||
|
||||
return _singleton
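A small sketch of what the decorator guarantees; the Config class is hypothetical:

@Singleton
class Config():
    def __init__(self):
        self.value = 42

assert Config() is Config()  # every call returns the same cached instance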
|
||||
|
||||
|
||||
@Singleton
|
||||
class knowledge_archive_interface():
|
||||
def __init__(self) -> None:
|
||||
self.threadLock = threading.Lock()
|
||||
self.current_id = ""
|
||||
self.kai_path = None
|
||||
self.qa_handle = None
|
||||
self.text2vec_large_chinese = None
|
||||
|
||||
def get_chinese_text2vec(self):
|
||||
if self.text2vec_large_chinese is None:
|
||||
# < -------------------预热文本向量化模组--------------- >
|
||||
from toolbox import ProxyNetworkActivate
|
||||
print('Checking Text2vec ...')
|
||||
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
|
||||
with ProxyNetworkActivate(): # 临时地激活代理网络
|
||||
self.text2vec_large_chinese = HuggingFaceEmbeddings(model_name="GanymedeNil/text2vec-large-chinese")
|
||||
|
||||
return self.text2vec_large_chinese
|
||||
|
||||
|
||||
def feed_archive(self, file_manifest, id="default"):
|
||||
self.threadLock.acquire()
|
||||
# import uuid
|
||||
self.current_id = id
|
||||
from zh_langchain import construct_vector_store
|
||||
self.qa_handle, self.kai_path = construct_vector_store(
|
||||
vs_id=self.current_id,
|
||||
files=file_manifest,
|
||||
sentence_size=100,
|
||||
history=[],
|
||||
one_conent="",
|
||||
one_content_segmentation="",
|
||||
text2vec = self.get_chinese_text2vec(),
|
||||
)
|
||||
self.threadLock.release()
|
||||
|
||||
def get_current_archive_id(self):
|
||||
return self.current_id
|
||||
|
||||
def get_loaded_file(self):
|
||||
return self.qa_handle.get_loaded_file()
|
||||
|
||||
def answer_with_archive_by_id(self, txt, id):
|
||||
self.threadLock.acquire()
|
||||
if not self.current_id == id:
|
||||
self.current_id = id
|
||||
from zh_langchain import construct_vector_store
|
||||
self.qa_handle, self.kai_path = construct_vector_store(
|
||||
vs_id=self.current_id,
|
||||
files=[],
|
||||
sentence_size=100,
|
||||
history=[],
|
||||
one_conent="",
|
||||
one_content_segmentation="",
|
||||
text2vec = self.get_chinese_text2vec(),
|
||||
)
|
||||
VECTOR_SEARCH_SCORE_THRESHOLD = 0
|
||||
VECTOR_SEARCH_TOP_K = 4
|
||||
CHUNK_SIZE = 512
|
||||
resp, prompt = self.qa_handle.get_knowledge_based_conent_test(
|
||||
query = txt,
|
||||
vs_path = self.kai_path,
|
||||
score_threshold=VECTOR_SEARCH_SCORE_THRESHOLD,
|
||||
vector_search_top_k=VECTOR_SEARCH_TOP_K,
|
||||
chunk_conent=True,
|
||||
chunk_size=CHUNK_SIZE,
|
||||
text2vec = self.get_chinese_text2vec(),
|
||||
)
|
||||
self.threadLock.release()
|
||||
return resp, prompt
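A hedged sketch of how this interface is meant to be driven; the file names and the question are placeholders, and the zh_langchain backend plus the text2vec model must be installed for the calls to succeed:

kai = knowledge_archive_interface()  # Singleton: repeated constructions return the same object
kai.feed_archive(file_manifest=["doc1.txt", "doc2.txt"], id="my_kb")
resp, prompt = kai.answer_with_archive_by_id("文档的主要结论是什么?", id="my_kb")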
|
||||
|
||||
def try_install_deps(deps):
|
||||
for dep in deps:
|
||||
import subprocess, sys
|
||||
subprocess.check_call([sys.executable, '-m', 'pip', 'install', '--user', dep])
|
||||
|
||||
|
||||
class construct_html():
|
||||
def __init__(self) -> None:
|
||||
self.css = """
|
||||
.row {
|
||||
display: flex;
|
||||
flex-wrap: wrap;
|
||||
}
|
||||
|
||||
.column {
|
||||
flex: 1;
|
||||
padding: 10px;
|
||||
}
|
||||
|
||||
.table-header {
|
||||
font-weight: bold;
|
||||
border-bottom: 1px solid black;
|
||||
}
|
||||
|
||||
.table-row {
|
||||
border-bottom: 1px solid lightgray;
|
||||
}
|
||||
|
||||
.table-cell {
|
||||
padding: 5px;
|
||||
}
|
||||
"""
|
||||
self.html_string = f'<!DOCTYPE html><head><meta charset="utf-8"><title>翻译结果</title><style>{self.css}</style></head>'
|
||||
|
||||
|
||||
def add_row(self, a, b):
|
||||
tmp = """
|
||||
<div class="row table-row">
|
||||
<div class="column table-cell">REPLACE_A</div>
|
||||
<div class="column table-cell">REPLACE_B</div>
|
||||
</div>
|
||||
"""
|
||||
from toolbox import markdown_convertion
|
||||
tmp = tmp.replace('REPLACE_A', markdown_convertion(a))
|
||||
tmp = tmp.replace('REPLACE_B', markdown_convertion(b))
|
||||
self.html_string += tmp
|
||||
|
||||
|
||||
def save_file(self, file_name):
|
||||
with open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f:
|
||||
f.write(self.html_string.encode('utf-8', 'ignore').decode())
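A minimal usage sketch of the HTML report builder; the strings are placeholders and the file lands under ./gpt_log/:

ch = construct_html()
ch.add_row(a="original paragraph", b="翻译后的段落")
ch.save_file("demo.trans.html")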
|
||||
|
||||
|
||||
111
crazy_functions/json_fns/pydantic_io.py
Normal file
@@ -0,0 +1,111 @@
|
||||
"""
|
||||
https://github.com/langchain-ai/langchain/blob/master/docs/extras/modules/model_io/output_parsers/pydantic.ipynb
|
||||
|
||||
Example 1.
|
||||
|
||||
# Define your desired data structure.
|
||||
class Joke(BaseModel):
|
||||
setup: str = Field(description="question to set up a joke")
|
||||
punchline: str = Field(description="answer to resolve the joke")
|
||||
|
||||
# You can add custom validation logic easily with Pydantic.
|
||||
@validator("setup")
|
||||
def question_ends_with_question_mark(cls, field):
|
||||
if field[-1] != "?":
|
||||
raise ValueError("Badly formed question!")
|
||||
return field
|
||||
|
||||
|
||||
Example 2.
|
||||
|
||||
# Here's another example, but with a compound typed field.
|
||||
class Actor(BaseModel):
|
||||
name: str = Field(description="name of an actor")
|
||||
film_names: List[str] = Field(description="list of names of films they starred in")
|
||||
"""
|
||||
|
||||
import json, re, logging
|
||||
|
||||
|
||||
PYDANTIC_FORMAT_INSTRUCTIONS = """The output should be formatted as a JSON instance that conforms to the JSON schema below.
|
||||
|
||||
As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}
|
||||
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.
|
||||
|
||||
Here is the output schema:
|
||||
```
|
||||
{schema}
|
||||
```"""
|
||||
|
||||
|
||||
PYDANTIC_FORMAT_INSTRUCTIONS_SIMPLE = """The output should be formatted as a JSON instance that conforms to the JSON schema below.
|
||||
```
|
||||
{schema}
|
||||
```"""
|
||||
|
||||
class JsonStringError(Exception): ...
|
||||
|
||||
class GptJsonIO():
|
||||
|
||||
def __init__(self, schema, example_instruction=True):
|
||||
self.pydantic_object = schema
|
||||
self.example_instruction = example_instruction
|
||||
self.format_instructions = self.generate_format_instructions()
|
||||
|
||||
def generate_format_instructions(self):
|
||||
schema = self.pydantic_object.schema()
|
||||
|
||||
# Remove extraneous fields.
|
||||
reduced_schema = schema
|
||||
if "title" in reduced_schema:
|
||||
del reduced_schema["title"]
|
||||
if "type" in reduced_schema:
|
||||
del reduced_schema["type"]
|
||||
# Ensure json in context is well-formed with double quotes.
|
||||
if self.example_instruction:
|
||||
schema_str = json.dumps(reduced_schema)
|
||||
return PYDANTIC_FORMAT_INSTRUCTIONS.format(schema=schema_str)
|
||||
else:
|
||||
return PYDANTIC_FORMAT_INSTRUCTIONS_SIMPLE.format(schema=json.dumps(reduced_schema))  # schema_str is only built in the branch above, so serialize here as well
|
||||
|
||||
def generate_output(self, text):
|
||||
# Greedy search for 1st json candidate.
|
||||
match = re.search(
|
||||
r"\{.*\}", text.strip(), re.MULTILINE | re.IGNORECASE | re.DOTALL
|
||||
)
|
||||
json_str = ""
|
||||
if match: json_str = match.group()
|
||||
json_object = json.loads(json_str, strict=False)
|
||||
final_object = self.pydantic_object.parse_obj(json_object)
|
||||
return final_object
|
||||
|
||||
def generate_repair_prompt(self, broken_json, error):
|
||||
prompt = "Fix a broken json string.\n\n" + \
|
||||
"(1) The broken json string need to fix is: \n\n" + \
|
||||
"```" + "\n" + \
|
||||
broken_json + "\n" + \
|
||||
"```" + "\n\n" + \
|
||||
"(2) The error message is: \n\n" + \
|
||||
error + "\n\n" + \
|
||||
"Now, fix this json string. \n\n"
|
||||
return prompt
|
||||
|
||||
def generate_output_auto_repair(self, response, gpt_gen_fn):
|
||||
"""
|
||||
response: string containing candidate json
|
||||
gpt_gen_fn: gpt_gen_fn(inputs, sys_prompt)
|
||||
"""
|
||||
try:
|
||||
result = self.generate_output(response)
|
||||
except Exception as e:
|
||||
try:
|
||||
logging.info(f'Repairing json:{response}')
|
||||
repair_prompt = self.generate_repair_prompt(broken_json = response, error=repr(e))
|
||||
result = self.generate_output(gpt_gen_fn(repair_prompt, self.format_instructions))
|
||||
logging.info('Repair json success.')
|
||||
except Exception as e:
|
||||
# 没辙了,放弃治疗
|
||||
logging.info('Repair json failed.')
|
||||
raise JsonStringError('Cannot repair json.', str(e))
|
||||
return result
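A hedged end-to-end sketch in the style of the pydantic models from the module docstring; the Actor schema and the response string are illustrative only:

from pydantic import BaseModel, Field
from typing import List

class Actor(BaseModel):
    name: str = Field(description="name of an actor")
    film_names: List[str] = Field(description="films they starred in")

gpt_json_io = GptJsonIO(Actor)
sys_prompt = gpt_json_io.format_instructions  # attach this to the LLM request
actor = gpt_json_io.generate_output('{"name": "Tom Hanks", "film_names": ["Forrest Gump"]}')
print(actor.name)  # Tom Hanks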
|
||||
|
||||
447
crazy_functions/latex_fns/latex_actions.py
Normal file
@@ -0,0 +1,447 @@
|
||||
from toolbox import update_ui, update_ui_lastest_msg # 刷新Gradio前端界面
|
||||
from toolbox import zip_folder, objdump, objload, promote_file_to_downloadzone
|
||||
from .latex_toolbox import PRESERVE, TRANSFORM
|
||||
from .latex_toolbox import set_forbidden_text, set_forbidden_text_begin_end, set_forbidden_text_careful_brace
|
||||
from .latex_toolbox import reverse_forbidden_text_careful_brace, reverse_forbidden_text, convert_to_linklist, post_process
|
||||
from .latex_toolbox import fix_content, find_main_tex_file, merge_tex_files, compile_latex_with_timeout
|
||||
|
||||
import os, shutil
|
||||
import re
|
||||
import numpy as np
|
||||
|
||||
pj = os.path.join
|
||||
|
||||
|
||||
def split_subprocess(txt, project_folder, return_dict, opts):
|
||||
"""
|
||||
break down latex file to a linked list,
|
||||
each node uses a preserve flag to indicate whether it should
|
||||
be processed by GPT.
|
||||
"""
|
||||
text = txt
|
||||
mask = np.zeros(len(txt), dtype=np.uint8) + TRANSFORM
|
||||
|
||||
# 吸收title与作者以上的部分
|
||||
text, mask = set_forbidden_text(text, mask, r"^(.*?)\\maketitle", re.DOTALL)
|
||||
text, mask = set_forbidden_text(text, mask, r"^(.*?)\\begin{document}", re.DOTALL)
|
||||
# 吸收iffalse注释
|
||||
text, mask = set_forbidden_text(text, mask, r"\\iffalse(.*?)\\fi", re.DOTALL)
|
||||
# 吸收在42行以内的begin-end组合
|
||||
text, mask = set_forbidden_text_begin_end(text, mask, r"\\begin\{([a-z\*]*)\}(.*?)\\end\{\1\}", re.DOTALL, limit_n_lines=42)
|
||||
# 吸收匿名公式
|
||||
text, mask = set_forbidden_text(text, mask, [ r"\$\$([^$]+)\$\$", r"\\\[.*?\\\]" ], re.DOTALL)
|
||||
# 吸收其他杂项
|
||||
text, mask = set_forbidden_text(text, mask, [ r"\\section\{(.*?)\}", r"\\section\*\{(.*?)\}", r"\\subsection\{(.*?)\}", r"\\subsubsection\{(.*?)\}" ])
|
||||
text, mask = set_forbidden_text(text, mask, [ r"\\bibliography\{(.*?)\}", r"\\bibliographystyle\{(.*?)\}" ])
|
||||
text, mask = set_forbidden_text(text, mask, r"\\begin\{thebibliography\}.*?\\end\{thebibliography\}", re.DOTALL)
|
||||
text, mask = set_forbidden_text(text, mask, r"\\begin\{lstlisting\}(.*?)\\end\{lstlisting\}", re.DOTALL)
|
||||
text, mask = set_forbidden_text(text, mask, r"\\begin\{wraptable\}(.*?)\\end\{wraptable\}", re.DOTALL)
|
||||
text, mask = set_forbidden_text(text, mask, r"\\begin\{algorithm\}(.*?)\\end\{algorithm\}", re.DOTALL)
|
||||
text, mask = set_forbidden_text(text, mask, [r"\\begin\{wrapfigure\}(.*?)\\end\{wrapfigure\}", r"\\begin\{wrapfigure\*\}(.*?)\\end\{wrapfigure\*\}"], re.DOTALL)
|
||||
text, mask = set_forbidden_text(text, mask, [r"\\begin\{figure\}(.*?)\\end\{figure\}", r"\\begin\{figure\*\}(.*?)\\end\{figure\*\}"], re.DOTALL)
|
||||
text, mask = set_forbidden_text(text, mask, [r"\\begin\{multline\}(.*?)\\end\{multline\}", r"\\begin\{multline\*\}(.*?)\\end\{multline\*\}"], re.DOTALL)
|
||||
text, mask = set_forbidden_text(text, mask, [r"\\begin\{table\}(.*?)\\end\{table\}", r"\\begin\{table\*\}(.*?)\\end\{table\*\}"], re.DOTALL)
|
||||
text, mask = set_forbidden_text(text, mask, [r"\\begin\{minipage\}(.*?)\\end\{minipage\}", r"\\begin\{minipage\*\}(.*?)\\end\{minipage\*\}"], re.DOTALL)
|
||||
text, mask = set_forbidden_text(text, mask, [r"\\begin\{align\*\}(.*?)\\end\{align\*\}", r"\\begin\{align\}(.*?)\\end\{align\}"], re.DOTALL)
|
||||
text, mask = set_forbidden_text(text, mask, [r"\\begin\{equation\}(.*?)\\end\{equation\}", r"\\begin\{equation\*\}(.*?)\\end\{equation\*\}"], re.DOTALL)
|
||||
text, mask = set_forbidden_text(text, mask, [r"\\includepdf\[(.*?)\]\{(.*?)\}", r"\\clearpage", r"\\newpage", r"\\appendix", r"\\tableofcontents", r"\\include\{(.*?)\}"])
|
||||
text, mask = set_forbidden_text(text, mask, [r"\\vspace\{(.*?)\}", r"\\hspace\{(.*?)\}", r"\\label\{(.*?)\}", r"\\begin\{(.*?)\}", r"\\end\{(.*?)\}", r"\\item "])
|
||||
text, mask = set_forbidden_text_careful_brace(text, mask, r"\\hl\{(.*?)\}", re.DOTALL)
|
||||
# reverse 操作必须放在最后
|
||||
text, mask = reverse_forbidden_text_careful_brace(text, mask, r"\\caption\{(.*?)\}", re.DOTALL, forbid_wrapper=True)
|
||||
text, mask = reverse_forbidden_text_careful_brace(text, mask, r"\\abstract\{(.*?)\}", re.DOTALL, forbid_wrapper=True)
|
||||
text, mask = reverse_forbidden_text(text, mask, r"\\begin\{abstract\}(.*?)\\end\{abstract\}", re.DOTALL, forbid_wrapper=True)
|
||||
root = convert_to_linklist(text, mask)
|
||||
|
||||
# 最后一步处理,增强稳健性
|
||||
root = post_process(root)
|
||||
|
||||
# 输出html调试文件,用红色标注处保留区(PRESERVE),用黑色标注转换区(TRANSFORM)
|
||||
with open(pj(project_folder, 'debug_log.html'), 'w', encoding='utf8') as f:
|
||||
segment_parts_for_gpt = []
|
||||
nodes = []
|
||||
node = root
|
||||
while True:
|
||||
nodes.append(node)
|
||||
show_html = node.string.replace('\n','<br/>')
|
||||
if not node.preserve:
|
||||
segment_parts_for_gpt.append(node.string)
|
||||
f.write(f'<p style="color:black;">#{node.range}{show_html}#</p>')
|
||||
else:
|
||||
f.write(f'<p style="color:red;">{show_html}</p>')
|
||||
node = node.next
|
||||
if node is None: break
|
||||
|
||||
for n in nodes: n.next = None # break
|
||||
return_dict['nodes'] = nodes
|
||||
return_dict['segment_parts_for_gpt'] = segment_parts_for_gpt
|
||||
return return_dict
|
||||
|
||||
class LatexPaperSplit():
|
||||
"""
|
||||
break down latex file to a linked list,
|
||||
each node uses a preserve flag to indicate whether it should
|
||||
be processed by GPT.
|
||||
"""
|
||||
def __init__(self) -> None:
|
||||
self.nodes = None
|
||||
self.msg = "*{\\scriptsize\\textbf{警告:该PDF由GPT-Academic开源项目调用大语言模型+Latex翻译插件一键生成," + \
|
||||
"版权归原文作者所有。翻译内容可靠性无保障,请仔细鉴别并以原文为准。" + \
|
||||
"项目Github地址 \\url{https://github.com/binary-husky/gpt_academic/}。"
|
||||
# 请您不要删除或修改这行警告,除非您是论文的原作者(如果您是论文原作者,欢迎加README中的QQ联系开发者)
|
||||
self.msg_declare = "为了防止大语言模型的意外谬误产生扩散影响,禁止移除或修改此警告。}}\\\\"
|
||||
|
||||
|
||||
def merge_result(self, arr, mode, msg, buggy_lines=[], buggy_line_surgery_n_lines=10):
|
||||
"""
|
||||
Merge the result after the GPT process completed
|
||||
"""
|
||||
result_string = ""
|
||||
node_cnt = 0
|
||||
line_cnt = 0
|
||||
|
||||
for node in self.nodes:
|
||||
if node.preserve:
|
||||
line_cnt += node.string.count('\n')
|
||||
result_string += node.string
|
||||
else:
|
||||
translated_txt = fix_content(arr[node_cnt], node.string)
|
||||
begin_line = line_cnt
|
||||
end_line = line_cnt + translated_txt.count('\n')
|
||||
|
||||
# reverse translation if any error
|
||||
if any([begin_line-buggy_line_surgery_n_lines <= b_line <= end_line+buggy_line_surgery_n_lines for b_line in buggy_lines]):
|
||||
translated_txt = node.string
|
||||
|
||||
result_string += translated_txt
|
||||
node_cnt += 1
|
||||
line_cnt += translated_txt.count('\n')
|
||||
|
||||
if mode == 'translate_zh':
|
||||
pattern = re.compile(r'\\begin\{abstract\}.*\n')
|
||||
match = pattern.search(result_string)
|
||||
if not match:
|
||||
# match \abstract{xxxx}
|
||||
pattern_compile = re.compile(r"\\abstract\{(.*?)\}", flags=re.DOTALL)
|
||||
match = pattern_compile.search(result_string)
|
||||
position = match.regs[1][0]
|
||||
else:
|
||||
# match \begin{abstract}xxxx\end{abstract}
|
||||
position = match.end()
|
||||
result_string = result_string[:position] + self.msg + msg + self.msg_declare + result_string[position:]
|
||||
return result_string
|
||||
|
||||
|
||||
def split(self, txt, project_folder, opts):
|
||||
"""
|
||||
break down latex file to a linked list,
|
||||
each node uses a preserve flag to indicate whether it should
|
||||
be processed by GPT.
|
||||
P.S. use multiprocessing to avoid timeout error
|
||||
"""
|
||||
import multiprocessing
|
||||
manager = multiprocessing.Manager()
|
||||
return_dict = manager.dict()
|
||||
p = multiprocessing.Process(
|
||||
target=split_subprocess,
|
||||
args=(txt, project_folder, return_dict, opts))
|
||||
p.start()
|
||||
p.join()
|
||||
p.close()
|
||||
self.nodes = return_dict['nodes']
|
||||
self.sp = return_dict['segment_parts_for_gpt']
|
||||
return self.sp
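A hedged sketch of the split / merge round trip; merged_tex, project_folder and the rewritten fragments are placeholders supplied by the surrounding pipeline:

lps = LatexPaperSplit()
fragments = lps.split(merged_tex, project_folder, opts=[])  # pieces handed to GPT
rewritten = fragments                                       # pretend GPT returned them unchanged
final_tex = lps.merge_result(rewritten, mode='proofread_en', msg='')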
|
||||
|
||||
|
||||
class LatexPaperFileGroup():
|
||||
"""
|
||||
use tokenizer to break down text according to max_token_limit
|
||||
"""
|
||||
def __init__(self):
|
||||
self.file_paths = []
|
||||
self.file_contents = []
|
||||
self.sp_file_contents = []
|
||||
self.sp_file_index = []
|
||||
self.sp_file_tag = []
|
||||
|
||||
# count_token
|
||||
from request_llm.bridge_all import model_info
|
||||
enc = model_info["gpt-3.5-turbo"]['tokenizer']
|
||||
def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
|
||||
self.get_token_num = get_token_num
|
||||
|
||||
def run_file_split(self, max_token_limit=1900):
|
||||
"""
|
||||
use tokenizer to break down text according to max_token_limit
|
||||
"""
|
||||
for index, file_content in enumerate(self.file_contents):
|
||||
if self.get_token_num(file_content) < max_token_limit:
|
||||
self.sp_file_contents.append(file_content)
|
||||
self.sp_file_index.append(index)
|
||||
self.sp_file_tag.append(self.file_paths[index])
|
||||
else:
|
||||
from ..crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
|
||||
segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
|
||||
for j, segment in enumerate(segments):
|
||||
self.sp_file_contents.append(segment)
|
||||
self.sp_file_index.append(index)
|
||||
self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex")
|
||||
print('Segmentation: done')
|
||||
|
||||
def merge_result(self):
|
||||
self.file_result = ["" for _ in range(len(self.file_paths))]
|
||||
for r, k in zip(self.sp_file_result, self.sp_file_index):
|
||||
self.file_result[k] += r
|
||||
|
||||
def write_result(self):
|
||||
manifest = []
|
||||
for path, res in zip(self.file_paths, self.file_result):
|
||||
with open(path + '.polish.tex', 'w', encoding='utf8') as f:
|
||||
manifest.append(path + '.polish.tex')
|
||||
f.write(res)
|
||||
return manifest
|
||||
|
||||
|
||||
def Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, mode='proofread', switch_prompt=None, opts=[]):
|
||||
import time, os, re
|
||||
from ..crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
|
||||
from .latex_actions import LatexPaperFileGroup, LatexPaperSplit
|
||||
|
||||
# <-------- 寻找主tex文件 ---------->
|
||||
maintex = find_main_tex_file(file_manifest, mode)
|
||||
chatbot.append((f"定位主Latex文件", f'[Local Message] 分析结果:该项目的Latex主文件是{maintex}, 如果分析错误, 请立即终止程序, 删除或修改歧义文件, 然后重试。主程序即将开始, 请稍候。'))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
time.sleep(3)
|
||||
|
||||
# <-------- 读取Latex文件, 将多文件tex工程融合为一个巨型tex ---------->
|
||||
main_tex_basename = os.path.basename(maintex)
|
||||
assert main_tex_basename.endswith('.tex')
|
||||
main_tex_basename_bare = main_tex_basename[:-4]
|
||||
may_exist_bbl = pj(project_folder, f'{main_tex_basename_bare}.bbl')
|
||||
if os.path.exists(may_exist_bbl):
|
||||
shutil.copyfile(may_exist_bbl, pj(project_folder, f'merge.bbl'))
|
||||
shutil.copyfile(may_exist_bbl, pj(project_folder, f'merge_{mode}.bbl'))
|
||||
shutil.copyfile(may_exist_bbl, pj(project_folder, f'merge_diff.bbl'))
|
||||
|
||||
with open(maintex, 'r', encoding='utf-8', errors='replace') as f:
|
||||
content = f.read()
|
||||
merged_content = merge_tex_files(project_folder, content, mode)
|
||||
|
||||
with open(project_folder + '/merge.tex', 'w', encoding='utf-8', errors='replace') as f:
|
||||
f.write(merged_content)
|
||||
|
||||
# <-------- 精细切分latex文件 ---------->
|
||||
chatbot.append((f"Latex文件融合完成", f'[Local Message] 正在精细切分latex文件,这需要一段时间计算,文档越长耗时越长,请耐心等待。'))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
lps = LatexPaperSplit()
|
||||
res = lps.split(merged_content, project_folder, opts) # 消耗时间的函数
|
||||
|
||||
# <-------- 拆分过长的latex片段 ---------->
|
||||
pfg = LatexPaperFileGroup()
|
||||
for index, r in enumerate(res):
|
||||
pfg.file_paths.append('segment-' + str(index))
|
||||
pfg.file_contents.append(r)
|
||||
|
||||
pfg.run_file_split(max_token_limit=1024)
|
||||
n_split = len(pfg.sp_file_contents)
|
||||
|
||||
# <-------- 根据需要切换prompt ---------->
|
||||
inputs_array, sys_prompt_array = switch_prompt(pfg, mode)
|
||||
inputs_show_user_array = [f"{mode} {f}" for f in pfg.sp_file_tag]
|
||||
|
||||
if os.path.exists(pj(project_folder,'temp.pkl')):
|
||||
|
||||
# <-------- 【仅调试】如果存在调试缓存文件,则跳过GPT请求环节 ---------->
|
||||
pfg = objload(file=pj(project_folder,'temp.pkl'))
|
||||
|
||||
else:
|
||||
# <-------- gpt 多线程请求 ---------->
|
||||
gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
|
||||
inputs_array=inputs_array,
|
||||
inputs_show_user_array=inputs_show_user_array,
|
||||
llm_kwargs=llm_kwargs,
|
||||
chatbot=chatbot,
|
||||
history_array=[[""] for _ in range(n_split)],
|
||||
sys_prompt_array=sys_prompt_array,
|
||||
# max_workers=5, # 并行任务数量限制, 最多同时执行5个, 其他的排队等待
|
||||
scroller_max_len = 40
|
||||
)
|
||||
|
||||
# <-------- 文本碎片重组为完整的tex片段 ---------->
|
||||
pfg.sp_file_result = []
|
||||
for i_say, gpt_say, orig_content in zip(gpt_response_collection[0::2], gpt_response_collection[1::2], pfg.sp_file_contents):
|
||||
pfg.sp_file_result.append(gpt_say)
|
||||
pfg.merge_result()
|
||||
|
||||
# <-------- 临时存储用于调试 ---------->
|
||||
pfg.get_token_num = None
|
||||
objdump(pfg, file=pj(project_folder,'temp.pkl'))
|
||||
|
||||
write_html(pfg.sp_file_contents, pfg.sp_file_result, chatbot=chatbot, project_folder=project_folder)
|
||||
|
||||
# <-------- 写出文件 ---------->
|
||||
msg = f"当前大语言模型: {llm_kwargs['llm_model']},当前语言模型温度设定: {llm_kwargs['temperature']}。"
|
||||
final_tex = lps.merge_result(pfg.file_result, mode, msg)
|
||||
objdump((lps, pfg.file_result, mode, msg), file=pj(project_folder,'merge_result.pkl'))
|
||||
|
||||
with open(project_folder + f'/merge_{mode}.tex', 'w', encoding='utf-8', errors='replace') as f:
|
||||
if mode != 'translate_zh' or "binary" in final_tex: f.write(final_tex)
|
||||
|
||||
|
||||
# <-------- 整理结果, 退出 ---------->
|
||||
chatbot.append((f"完成了吗?", 'GPT结果已输出, 即将编译PDF'))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
# <-------- 返回 ---------->
|
||||
return project_folder + f'/merge_{mode}.tex'
|
||||
|
||||
|
||||
def remove_buggy_lines(file_path, log_path, tex_name, tex_name_pure, n_fix, work_folder_modified, fixed_line=[]):
|
||||
try:
|
||||
with open(log_path, 'r', encoding='utf-8', errors='replace') as f:
|
||||
log = f.read()
|
||||
import re
|
||||
buggy_lines = re.findall(tex_name+':([0-9]{1,5}):', log)
|
||||
buggy_lines = [int(l) for l in buggy_lines]
|
||||
buggy_lines = sorted(buggy_lines)
|
||||
buggy_line = buggy_lines[0]-1
|
||||
print("reversing tex line that has errors", buggy_line)
|
||||
|
||||
# 重组,逆转出错的段落
|
||||
if buggy_line not in fixed_line:
|
||||
fixed_line.append(buggy_line)
|
||||
|
||||
lps, file_result, mode, msg = objload(file=pj(work_folder_modified,'merge_result.pkl'))
|
||||
final_tex = lps.merge_result(file_result, mode, msg, buggy_lines=fixed_line, buggy_line_surgery_n_lines=5*n_fix)
|
||||
|
||||
with open(pj(work_folder_modified, f"{tex_name_pure}_fix_{n_fix}.tex"), 'w', encoding='utf-8', errors='replace') as f:
|
||||
f.write(final_tex)
|
||||
|
||||
return True, f"{tex_name_pure}_fix_{n_fix}", buggy_lines
|
||||
except:
|
||||
print("Fatal error occurred, but we cannot identify error, please download zip, read latex log, and compile manually.")
|
||||
return False, -1, [-1]
|
||||
|
||||
|
||||
def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_folder_original, work_folder_modified, work_folder, mode='default'):
|
||||
import os, time
|
||||
n_fix = 1
|
||||
fixed_line = []
|
||||
max_try = 32
|
||||
chatbot.append([f"正在编译PDF文档", f'编译已经开始。当前工作路径为{work_folder},如果程序停顿5分钟以上,请直接去该路径下取回翻译结果,或者重启之后再度尝试 ...']); yield from update_ui(chatbot=chatbot, history=history)
|
||||
chatbot.append([f"正在编译PDF文档", '...']); yield from update_ui(chatbot=chatbot, history=history); time.sleep(1); chatbot[-1] = list(chatbot[-1]) # 刷新界面
|
||||
yield from update_ui_lastest_msg('编译已经开始...', chatbot, history) # 刷新Gradio前端界面
|
||||
|
||||
while True:
|
||||
import os
|
||||
may_exist_bbl = pj(work_folder_modified, f'merge.bbl')
|
||||
target_bbl = pj(work_folder_modified, f'{main_file_modified}.bbl')
|
||||
if os.path.exists(may_exist_bbl) and not os.path.exists(target_bbl):
|
||||
shutil.copyfile(may_exist_bbl, target_bbl)
|
||||
|
||||
# https://stackoverflow.com/questions/738755/dont-make-me-manually-abort-a-latex-compile-when-theres-an-error
|
||||
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译原始PDF ...', chatbot, history) # 刷新Gradio前端界面
|
||||
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original)
|
||||
|
||||
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译转化后的PDF ...', chatbot, history) # 刷新Gradio前端界面
|
||||
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)
|
||||
|
||||
if ok and os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf')):
|
||||
# 只有第二步成功,才能继续下面的步骤
|
||||
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译BibTex ...', chatbot, history) # 刷新Gradio前端界面
|
||||
if not os.path.exists(pj(work_folder_original, f'{main_file_original}.bbl')):
|
||||
ok = compile_latex_with_timeout(f'bibtex {main_file_original}.aux', work_folder_original)
|
||||
if not os.path.exists(pj(work_folder_modified, f'{main_file_modified}.bbl')):
|
||||
ok = compile_latex_with_timeout(f'bibtex {main_file_modified}.aux', work_folder_modified)
|
||||
|
||||
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译文献交叉引用 ...', chatbot, history) # 刷新Gradio前端界面
|
||||
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original)
|
||||
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)
|
||||
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex', work_folder_original)
|
||||
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex', work_folder_modified)
|
||||
|
||||
if mode!='translate_zh':
|
||||
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 使用latexdiff生成论文转化前后对比 ...', chatbot, history) # 刷新Gradio前端界面
|
||||
print( f'latexdiff --encoding=utf8 --append-safecmd=subfile {work_folder_original}/{main_file_original}.tex {work_folder_modified}/{main_file_modified}.tex --flatten > {work_folder}/merge_diff.tex')
|
||||
ok = compile_latex_with_timeout(f'latexdiff --encoding=utf8 --append-safecmd=subfile {work_folder_original}/{main_file_original}.tex {work_folder_modified}/{main_file_modified}.tex --flatten > {work_folder}/merge_diff.tex')
|
||||
|
||||
yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 正在编译对比PDF ...', chatbot, history) # 刷新Gradio前端界面
|
||||
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder)
|
||||
ok = compile_latex_with_timeout(f'bibtex merge_diff.aux', work_folder)
|
||||
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder)
|
||||
ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex', work_folder)
|
||||
|
||||
# <---------- 检查结果 ----------->
|
||||
results_ = ""
|
||||
original_pdf_success = os.path.exists(pj(work_folder_original, f'{main_file_original}.pdf'))
|
||||
modified_pdf_success = os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf'))
|
||||
diff_pdf_success = os.path.exists(pj(work_folder, f'merge_diff.pdf'))
|
||||
results_ += f"原始PDF编译是否成功: {original_pdf_success};"
|
||||
results_ += f"转化PDF编译是否成功: {modified_pdf_success};"
|
||||
results_ += f"对比PDF编译是否成功: {diff_pdf_success};"
|
||||
yield from update_ui_lastest_msg(f'第 {n_fix} 次编译结束:<br/>{results_}...', chatbot, history) # 刷新Gradio前端界面
|
||||
|
||||
if diff_pdf_success:
|
||||
result_pdf = pj(work_folder_modified, f'merge_diff.pdf') # get pdf path
|
||||
promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot) # promote file to web UI
|
||||
if modified_pdf_success:
|
||||
yield from update_ui_lastest_msg(f'转化PDF编译已经成功, 即将退出 ...', chatbot, history) # 刷新Gradio前端界面
|
||||
result_pdf = pj(work_folder_modified, f'{main_file_modified}.pdf') # get pdf path
|
||||
origin_pdf = pj(work_folder_original, f'{main_file_original}.pdf') # get pdf path
|
||||
if os.path.exists(pj(work_folder, '..', 'translation')):
|
||||
shutil.copyfile(result_pdf, pj(work_folder, '..', 'translation', 'translate_zh.pdf'))
|
||||
promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot) # promote file to web UI
|
||||
# 将两个PDF拼接
|
||||
if original_pdf_success:
|
||||
try:
|
||||
from .latex_toolbox import merge_pdfs
|
||||
concat_pdf = pj(work_folder_modified, f'comparison.pdf')
|
||||
merge_pdfs(origin_pdf, result_pdf, concat_pdf)
|
||||
promote_file_to_downloadzone(concat_pdf, rename_file=None, chatbot=chatbot) # promote file to web UI
|
||||
except Exception as e:
|
||||
pass
|
||||
return True # 成功啦
|
||||
else:
|
||||
if n_fix>=max_try: break
|
||||
n_fix += 1
|
||||
can_retry, main_file_modified, buggy_lines = remove_buggy_lines(
|
||||
file_path=pj(work_folder_modified, f'{main_file_modified}.tex'),
|
||||
log_path=pj(work_folder_modified, f'{main_file_modified}.log'),
|
||||
tex_name=f'{main_file_modified}.tex',
|
||||
tex_name_pure=f'{main_file_modified}',
|
||||
n_fix=n_fix,
|
||||
work_folder_modified=work_folder_modified,
|
||||
fixed_line=fixed_line
|
||||
)
|
||||
yield from update_ui_lastest_msg(f'由于最为关键的转化PDF编译失败, 将根据报错信息修正tex源文件并重试, 当前报错的latex代码处于第{buggy_lines}行 ...', chatbot, history) # 刷新Gradio前端界面
|
||||
if not can_retry: break
|
||||
|
||||
return False # 失败啦
|
||||
|
||||
|
||||
def write_html(sp_file_contents, sp_file_result, chatbot, project_folder):
|
||||
# write html
|
||||
try:
|
||||
import shutil
|
||||
from ..crazy_utils import construct_html
|
||||
from toolbox import gen_time_str
|
||||
ch = construct_html()
|
||||
orig = ""
|
||||
trans = ""
|
||||
final = []
|
||||
for c,r in zip(sp_file_contents, sp_file_result):
|
||||
final.append(c)
|
||||
final.append(r)
|
||||
for i, k in enumerate(final):
|
||||
if i%2==0:
|
||||
orig = k
|
||||
if i%2==1:
|
||||
trans = k
|
||||
ch.add_row(a=orig, b=trans)
|
||||
create_report_file_name = f"{gen_time_str()}.trans.html"
|
||||
ch.save_file(create_report_file_name)
|
||||
shutil.copyfile(pj('./gpt_log/', create_report_file_name), pj(project_folder, create_report_file_name))
|
||||
promote_file_to_downloadzone(file=f'./gpt_log/{create_report_file_name}', chatbot=chatbot)
|
||||
except:
|
||||
from toolbox import trimmed_format_exc
|
||||
print('writing html result failed:', trimmed_format_exc())
|
||||
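The retry loop above drives the standard multi-pass LaTeX toolchain through compile_latex_with_timeout, which is added by this same diff in crazy_functions/latex_fns/latex_toolbox.py. A minimal sketch of that four-pass sequence in isolation (the folder and file names below are placeholders, not values from the diff):

```python
# Sketch only: 'work_folder' and 'main' are hypothetical inputs.
from crazy_functions.latex_fns.latex_toolbox import compile_latex_with_timeout

def four_pass_compile(work_folder, main='main'):
    # pdflatex -> bibtex -> pdflatex -> pdflatex resolves citations and cross-references
    compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main}.tex', work_folder)
    compile_latex_with_timeout(f'bibtex {main}.aux', work_folder)
    compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main}.tex', work_folder)
    return compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main}.tex', work_folder)
```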
459 crazy_functions/latex_fns/latex_toolbox.py Normal file
@@ -0,0 +1,459 @@
import os, shutil
import re
import numpy as np

PRESERVE = 0
TRANSFORM = 1

pj = os.path.join


class LinkedListNode():
    """
    Linked List Node
    """
    def __init__(self, string, preserve=True) -> None:
        self.string = string
        self.preserve = preserve
        self.next = None
        self.range = None
        # self.begin_line = 0
        # self.begin_char = 0


def convert_to_linklist(text, mask):
    root = LinkedListNode("", preserve=True)
    current_node = root
    for c, m, i in zip(text, mask, range(len(text))):
        if (m==PRESERVE and current_node.preserve) \
            or (m==TRANSFORM and not current_node.preserve):
            # add
            current_node.string += c
        else:
            current_node.next = LinkedListNode(c, preserve=(m==PRESERVE))
            current_node = current_node.next
    return root


def post_process(root):
    # fix unbalanced braces
    node = root
    while True:
        string = node.string
        if node.preserve:
            node = node.next
            if node is None: break
            continue
        def break_check(string):
            str_stack = [""] # (lv, index)
            for i, c in enumerate(string):
                if c == '{':
                    str_stack.append('{')
                elif c == '}':
                    if len(str_stack) == 1:
                        print('stack fix')
                        return i
                    str_stack.pop(-1)
                else:
                    str_stack[-1] += c
            return -1
        bp = break_check(string)

        if bp == -1:
            pass
        elif bp == 0:
            node.string = string[:1]
            q = LinkedListNode(string[1:], False)
            q.next = node.next
            node.next = q
        else:
            node.string = string[:bp]
            q = LinkedListNode(string[bp:], False)
            q.next = node.next
            node.next = q

        node = node.next
        if node is None: break

    # mask out empty lines and sentences that are too short
    node = root
    while True:
        if len(node.string.strip('\n').strip(''))==0: node.preserve = True
        if len(node.string.strip('\n').strip(''))<42: node.preserve = True
        node = node.next
        if node is None: break
    node = root
    while True:
        if node.next and node.preserve and node.next.preserve:
            node.string += node.next.string
            node.next = node.next.next
        node = node.next
        if node is None: break

    # detach leading and trailing line breaks from transform nodes
    node = root
    prev_node = None
    while True:
        if not node.preserve:
            lstriped_ = node.string.lstrip().lstrip('\n')
            if (prev_node is not None) and (prev_node.preserve) and (len(lstriped_)!=len(node.string)):
                prev_node.string += node.string[:-len(lstriped_)]
                node.string = lstriped_
            rstriped_ = node.string.rstrip().rstrip('\n')
            if (node.next is not None) and (node.next.preserve) and (len(rstriped_)!=len(node.string)):
                node.next.string = node.string[len(rstriped_):] + node.next.string
                node.string = rstriped_
        # =====
        prev_node = node
        node = node.next
        if node is None: break

    # annotate each node with its line-number range
    node = root
    n_line = 0
    expansion = 2
    while True:
        n_l = node.string.count('\n')
        node.range = [n_line-expansion, n_line+n_l+expansion] # the range to roll back when this node fails
        n_line = n_line+n_l
        node = node.next
        if node is None: break
    return root
"""
|
||||
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|
||||
Latex segmentation with a binary mask (PRESERVE=0, TRANSFORM=1)
|
||||
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|
||||
"""
|
||||
|
||||
|
||||
def set_forbidden_text(text, mask, pattern, flags=0):
|
||||
"""
|
||||
Add a preserve text area in this paper
|
||||
e.g. with pattern = r"\\begin\{algorithm\}(.*?)\\end\{algorithm\}"
|
||||
you can mask out (mask = PRESERVE so that text become untouchable for GPT)
|
||||
everything between "\begin{equation}" and "\end{equation}"
|
||||
"""
|
||||
if isinstance(pattern, list): pattern = '|'.join(pattern)
|
||||
pattern_compile = re.compile(pattern, flags)
|
||||
for res in pattern_compile.finditer(text):
|
||||
mask[res.span()[0]:res.span()[1]] = PRESERVE
|
||||
return text, mask
|
||||
|
||||
def reverse_forbidden_text(text, mask, pattern, flags=0, forbid_wrapper=True):
|
||||
"""
|
||||
Move area out of preserve area (make text editable for GPT)
|
||||
count the number of the braces so as to catch compelete text area.
|
||||
e.g.
|
||||
\begin{abstract} blablablablablabla. \end{abstract}
|
||||
"""
|
||||
if isinstance(pattern, list): pattern = '|'.join(pattern)
|
||||
pattern_compile = re.compile(pattern, flags)
|
||||
for res in pattern_compile.finditer(text):
|
||||
if not forbid_wrapper:
|
||||
mask[res.span()[0]:res.span()[1]] = TRANSFORM
|
||||
else:
|
||||
mask[res.regs[0][0]: res.regs[1][0]] = PRESERVE # '\\begin{abstract}'
|
||||
mask[res.regs[1][0]: res.regs[1][1]] = TRANSFORM # abstract
|
||||
mask[res.regs[1][1]: res.regs[0][1]] = PRESERVE # abstract
|
||||
return text, mask
|
||||
|
||||
def set_forbidden_text_careful_brace(text, mask, pattern, flags=0):
|
||||
"""
|
||||
Add a preserve text area in this paper (text become untouchable for GPT).
|
||||
count the number of the braces so as to catch compelete text area.
|
||||
e.g.
|
||||
\caption{blablablablabla\texbf{blablabla}blablabla.}
|
||||
"""
|
||||
pattern_compile = re.compile(pattern, flags)
|
||||
for res in pattern_compile.finditer(text):
|
||||
brace_level = -1
|
||||
p = begin = end = res.regs[0][0]
|
||||
for _ in range(1024*16):
|
||||
if text[p] == '}' and brace_level == 0: break
|
||||
elif text[p] == '}': brace_level -= 1
|
||||
elif text[p] == '{': brace_level += 1
|
||||
p += 1
|
||||
end = p+1
|
||||
mask[begin:end] = PRESERVE
|
||||
return text, mask
|
||||
|
||||
def reverse_forbidden_text_careful_brace(text, mask, pattern, flags=0, forbid_wrapper=True):
|
||||
"""
|
||||
Move area out of preserve area (make text editable for GPT)
|
||||
count the number of the braces so as to catch compelete text area.
|
||||
e.g.
|
||||
\caption{blablablablabla\texbf{blablabla}blablabla.}
|
||||
"""
|
||||
pattern_compile = re.compile(pattern, flags)
|
||||
for res in pattern_compile.finditer(text):
|
||||
brace_level = 0
|
||||
p = begin = end = res.regs[1][0]
|
||||
for _ in range(1024*16):
|
||||
if text[p] == '}' and brace_level == 0: break
|
||||
elif text[p] == '}': brace_level -= 1
|
||||
elif text[p] == '{': brace_level += 1
|
||||
p += 1
|
||||
end = p
|
||||
mask[begin:end] = TRANSFORM
|
||||
if forbid_wrapper:
|
||||
mask[res.regs[0][0]:begin] = PRESERVE
|
||||
mask[end:res.regs[0][1]] = PRESERVE
|
||||
return text, mask
|
||||
|
||||
def set_forbidden_text_begin_end(text, mask, pattern, flags=0, limit_n_lines=42):
|
||||
"""
|
||||
Find all \begin{} ... \end{} text block that with less than limit_n_lines lines.
|
||||
Add it to preserve area
|
||||
"""
|
||||
pattern_compile = re.compile(pattern, flags)
|
||||
def search_with_line_limit(text, mask):
|
||||
for res in pattern_compile.finditer(text):
|
||||
cmd = res.group(1) # begin{what}
|
||||
this = res.group(2) # content between begin and end
|
||||
this_mask = mask[res.regs[2][0]:res.regs[2][1]]
|
||||
white_list = ['document', 'abstract', 'lemma', 'definition', 'sproof',
|
||||
'em', 'emph', 'textit', 'textbf', 'itemize', 'enumerate']
|
||||
if (cmd in white_list) or this.count('\n') >= limit_n_lines: # use a magical number 42
|
||||
this, this_mask = search_with_line_limit(this, this_mask)
|
||||
mask[res.regs[2][0]:res.regs[2][1]] = this_mask
|
||||
else:
|
||||
mask[res.regs[0][0]:res.regs[0][1]] = PRESERVE
|
||||
return text, mask
|
||||
return search_with_line_limit(text, mask)
|
||||
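Taken together, the functions above build a per-character mask over the LaTeX source and then cut it into preserve/transform segments. A rough usage sketch (assumptions: the project root is on sys.path so the module import below works; the sample string and pattern are invented for illustration):

```python
import re
import numpy as np
from crazy_functions.latex_fns.latex_toolbox import (
    PRESERVE, TRANSFORM, set_forbidden_text, convert_to_linklist, post_process)

text = r"\section{Intro} Some prose about the method. \begin{equation} E=mc^2 \end{equation} More prose."
mask = np.full(len(text), TRANSFORM, dtype=np.uint8)   # everything editable by default
# lock the equation environment so GPT never touches it
text, mask = set_forbidden_text(text, mask, r"\\begin\{equation\}(.*?)\\end\{equation\}", re.DOTALL)
root = post_process(convert_to_linklist(text, mask))    # nodes alternate preserve / transform
node = root
while node is not None:
    print(node.preserve, repr(node.string))
    node = node.next
```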
"""
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Latex Merge File
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
"""


def find_main_tex_file(file_manifest, mode):
    """
    Among multiple .tex files, find the main file: it must contain \documentclass; the first match found is returned.
    P.S. hopefully nobody ships a LaTeX template alongside the paper (a template-detection heuristic was added on 6.25)
    """
    canidates = []
    for texf in file_manifest:
        if os.path.basename(texf).startswith('merge'):
            continue
        with open(texf, 'r', encoding='utf8', errors='ignore') as f:
            file_content = f.read()
        if r'\documentclass' in file_content:
            canidates.append(texf)
        else:
            continue

    if len(canidates) == 0:
        raise RuntimeError('无法找到一个主Tex文件(包含documentclass关键字)')
    elif len(canidates) == 1:
        return canidates[0]
    else:
        # if len(canidates) >= 2: penalize each source file for words that are common in LaTeX templates
        # (but rarely appear in a manuscript body) and return the highest-scoring file
        canidates_score = []
        # words that indicate a template document count as penalties
        unexpected_words = ['\LaTeX', 'manuscript', 'Guidelines', 'font', 'citations', 'rejected', 'blind review', 'reviewers']
        expected_words = ['\input', '\ref', '\cite']
        for texf in canidates:
            canidates_score.append(0)
            with open(texf, 'r', encoding='utf8', errors='ignore') as f:
                file_content = f.read()
            for uw in unexpected_words:
                if uw in file_content:
                    canidates_score[-1] -= 1
            for uw in expected_words:
                if uw in file_content:
                    canidates_score[-1] += 1
        select = np.argmax(canidates_score) # return the highest-scoring candidate
        return canidates[select]


def rm_comments(main_file):
    new_file_remove_comment_lines = []
    for l in main_file.splitlines():
        # drop lines that are nothing but a comment
        if l.lstrip().startswith("%"):
            pass
        else:
            new_file_remove_comment_lines.append(l)
    main_file = '\n'.join(new_file_remove_comment_lines)
    # main_file = re.sub(r"\\include{(.*?)}", r"\\input{\1}", main_file) # convert \include commands into \input commands
    main_file = re.sub(r'(?<!\\)%.*', '', main_file) # use a regex to find inline (half-line) comments and strip them
    return main_file


def find_tex_file_ignore_case(fp):
    dir_name = os.path.dirname(fp)
    base_name = os.path.basename(fp)
    # if the given file path is already correct
    if os.path.exists(pj(dir_name, base_name)): return pj(dir_name, base_name)
    # if not, try appending a .tex suffix
    if not base_name.endswith('.tex'): base_name+='.tex'
    if os.path.exists(pj(dir_name, base_name)): return pj(dir_name, base_name)
    # if it is still missing, drop the case restriction and try again
    import glob
    for f in glob.glob(dir_name+'/*.tex'):
        base_name_s = os.path.basename(fp)
        if base_name_s.lower() == base_name.lower(): return f
    return None


def merge_tex_files_(project_foler, main_file, mode):
    """
    Merge Tex project recursively
    """
    main_file = rm_comments(main_file)
    for s in reversed([q for q in re.finditer(r"\\input\{(.*?)\}", main_file, re.M)]):
        f = s.group(1)
        fp = os.path.join(project_foler, f)
        fp = find_tex_file_ignore_case(fp)
        if fp:
            with open(fp, 'r', encoding='utf-8', errors='replace') as fx: c = fx.read()
        else:
            raise RuntimeError(f'找不到{fp},Tex源文件缺失!')
        c = merge_tex_files_(project_foler, c, mode)
        main_file = main_file[:s.span()[0]] + c + main_file[s.span()[1]:]
    return main_file


def merge_tex_files(project_foler, main_file, mode):
    """
    Merge Tex project recursively
    P.S. also inject the ctex package so that Chinese is supported
    P.S. also strip the LaTeX comments
    """
    main_file = merge_tex_files_(project_foler, main_file, mode)
    main_file = rm_comments(main_file)

    if mode == 'translate_zh':
        # find paper documentclass
        pattern = re.compile(r'\\documentclass.*\n')
        match = pattern.search(main_file)
        assert match is not None, "Cannot find documentclass statement!"
        position = match.end()
        add_ctex = '\\usepackage{ctex}\n'
        add_url = '\\usepackage{url}\n' if '{url}' not in main_file else ''
        main_file = main_file[:position] + add_ctex + add_url + main_file[position:]
        # fontset=windows
        import platform
        main_file = re.sub(r"\\documentclass\[(.*?)\]{(.*?)}", r"\\documentclass[\1,fontset=windows,UTF8]{\2}",main_file)
        main_file = re.sub(r"\\documentclass{(.*?)}", r"\\documentclass[fontset=windows,UTF8]{\1}",main_file)
        # find paper abstract
        pattern_opt1 = re.compile(r'\\begin\{abstract\}.*\n')
        pattern_opt2 = re.compile(r"\\abstract\{(.*?)\}", flags=re.DOTALL)
        match_opt1 = pattern_opt1.search(main_file)
        match_opt2 = pattern_opt2.search(main_file)
        assert (match_opt1 is not None) or (match_opt2 is not None), "Cannot find paper abstract section!"
    return main_file
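find_main_tex_file and merge_tex_files are meant to be used back to back: pick the entry point of the project, then inline every \input recursively into one string. A sketch under stated assumptions (the project folder path is a placeholder, and the import path assumes the repo root is on sys.path):

```python
import glob, os
from crazy_functions.latex_fns.latex_toolbox import find_main_tex_file, merge_tex_files

project_folder = '/tmp/some_arxiv_project'   # placeholder path, not part of the diff
manifest = glob.glob(os.path.join(project_folder, '**/*.tex'), recursive=True)
main_path = find_main_tex_file(manifest, mode='translate_zh')
with open(main_path, 'r', encoding='utf-8', errors='replace') as f:
    merged = merge_tex_files(project_folder, f.read(), mode='translate_zh')
# 'merged' is now a single self-contained .tex string with the ctex package injected for Chinese output
```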
"""
|
||||
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|
||||
Post process
|
||||
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|
||||
"""
|
||||
def mod_inbraket(match):
|
||||
"""
|
||||
为啥chatgpt会把cite里面的逗号换成中文逗号呀
|
||||
"""
|
||||
# get the matched string
|
||||
cmd = match.group(1)
|
||||
str_to_modify = match.group(2)
|
||||
# modify the matched string
|
||||
str_to_modify = str_to_modify.replace(':', ':') # 前面是中文冒号,后面是英文冒号
|
||||
str_to_modify = str_to_modify.replace(',', ',') # 前面是中文逗号,后面是英文逗号
|
||||
# str_to_modify = 'BOOM'
|
||||
return "\\" + cmd + "{" + str_to_modify + "}"
|
||||
|
||||
def fix_content(final_tex, node_string):
|
||||
"""
|
||||
Fix common GPT errors to increase success rate
|
||||
"""
|
||||
final_tex = re.sub(r"(?<!\\)%", "\\%", final_tex)
|
||||
final_tex = re.sub(r"\\([a-z]{2,10})\ \{", r"\\\1{", string=final_tex)
|
||||
final_tex = re.sub(r"\\\ ([a-z]{2,10})\{", r"\\\1{", string=final_tex)
|
||||
final_tex = re.sub(r"\\([a-z]{2,10})\{([^\}]*?)\}", mod_inbraket, string=final_tex)
|
||||
|
||||
if "Traceback" in final_tex and "[Local Message]" in final_tex:
|
||||
final_tex = node_string # 出问题了,还原原文
|
||||
if node_string.count('\\begin') != final_tex.count('\\begin'):
|
||||
final_tex = node_string # 出问题了,还原原文
|
||||
if node_string.count('\_') > 0 and node_string.count('\_') > final_tex.count('\_'):
|
||||
# walk and replace any _ without \
|
||||
final_tex = re.sub(r"(?<!\\)_", "\\_", final_tex)
|
||||
|
||||
def compute_brace_level(string):
|
||||
# this function count the number of { and }
|
||||
brace_level = 0
|
||||
for c in string:
|
||||
if c == "{": brace_level += 1
|
||||
elif c == "}": brace_level -= 1
|
||||
return brace_level
|
||||
def join_most(tex_t, tex_o):
|
||||
# this function join translated string and original string when something goes wrong
|
||||
p_t = 0
|
||||
p_o = 0
|
||||
def find_next(string, chars, begin):
|
||||
p = begin
|
||||
while p < len(string):
|
||||
if string[p] in chars: return p, string[p]
|
||||
p += 1
|
||||
return None, None
|
||||
while True:
|
||||
res1, char = find_next(tex_o, ['{','}'], p_o)
|
||||
if res1 is None: break
|
||||
res2, char = find_next(tex_t, [char], p_t)
|
||||
if res2 is None: break
|
||||
p_o = res1 + 1
|
||||
p_t = res2 + 1
|
||||
return tex_t[:p_t] + tex_o[p_o:]
|
||||
|
||||
if compute_brace_level(final_tex) != compute_brace_level(node_string):
|
||||
# 出问题了,还原部分原文,保证括号正确
|
||||
final_tex = join_most(final_tex, node_string)
|
||||
return final_tex
|
||||
|
||||
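A small, self-contained illustration of what fix_content repairs (the sample strings below are invented for the example): a stray space after a command, an unescaped %, and a full-width comma that the model slipped into \cite{}.

```python
from crazy_functions.latex_fns.latex_toolbox import fix_content

node_string = r"As shown in \cite{a,b}, 10% of cases fail."    # original segment
final_tex   = r"As shown in \cite {a,b}, 10% of cases fail."  # model output with typical glitches
print(fix_content(final_tex, node_string))
# -> As shown in \cite{a,b}, 10\% of cases fail.
```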
def compile_latex_with_timeout(command, cwd, timeout=60):
    import subprocess
    process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd)
    try:
        stdout, stderr = process.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        process.kill()
        stdout, stderr = process.communicate()
        print("Process timed out!")
        return False
    return True


def merge_pdfs(pdf1_path, pdf2_path, output_path):
    import PyPDF2
    Percent = 0.8
    # Open the first PDF file
    with open(pdf1_path, 'rb') as pdf1_file:
        pdf1_reader = PyPDF2.PdfFileReader(pdf1_file)
        # Open the second PDF file
        with open(pdf2_path, 'rb') as pdf2_file:
            pdf2_reader = PyPDF2.PdfFileReader(pdf2_file)
            # Create a new PDF file to store the merged pages
            output_writer = PyPDF2.PdfFileWriter()
            # Determine the number of pages in each PDF file
            num_pages = max(pdf1_reader.numPages, pdf2_reader.numPages)
            # Merge the pages from the two PDF files
            for page_num in range(num_pages):
                # Add the page from the first PDF file
                if page_num < pdf1_reader.numPages:
                    page1 = pdf1_reader.getPage(page_num)
                else:
                    page1 = PyPDF2.PageObject.createBlankPage(pdf1_reader)
                # Add the page from the second PDF file
                if page_num < pdf2_reader.numPages:
                    page2 = pdf2_reader.getPage(page_num)
                else:
                    page2 = PyPDF2.PageObject.createBlankPage(pdf1_reader)
                # Create a new empty page with double width
                new_page = PyPDF2.PageObject.createBlankPage(
                    width = int(int(page1.mediaBox.getWidth()) + int(page2.mediaBox.getWidth()) * Percent),
                    height = max(page1.mediaBox.getHeight(), page2.mediaBox.getHeight())
                )
                new_page.mergeTranslatedPage(page1, 0, 0)
                new_page.mergeTranslatedPage(page2, int(int(page1.mediaBox.getWidth())-int(page2.mediaBox.getWidth())* (1-Percent)), 0)
                output_writer.addPage(new_page)
            # Save the merged PDF file
            with open(output_path, 'wb') as output_file:
                output_writer.write(output_file)
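merge_pdfs relies on the legacy PyPDF2 API (PdfFileReader, getPage, createBlankPage), which was removed in PyPDF2 3.x, so the snippet below is only a sketch that assumes an older PyPDF2 release is installed. It lays each original page and its translated counterpart on one wide page, with the two halves overlapping by 20% of the second page's width:

```python
from crazy_functions.latex_fns.latex_toolbox import merge_pdfs

# Paths are placeholders for illustration only.
merge_pdfs('workdir_original/paper.pdf',    # left: original pages
           'workdir_modified/paper.pdf',    # right: translated pages
           'workdir_modified/comparison.pdf')
```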
130 crazy_functions/live_audio/aliyunASR.py Normal file
@@ -0,0 +1,130 @@
import time, threading, json


class AliyunASR():

    def test_on_sentence_begin(self, message, *args):
        # print("test_on_sentence_begin:{}".format(message))
        pass

    def test_on_sentence_end(self, message, *args):
        # print("test_on_sentence_end:{}".format(message))
        message = json.loads(message)
        self.parsed_sentence = message['payload']['result']
        self.event_on_entence_end.set()
        print(self.parsed_sentence)

    def test_on_start(self, message, *args):
        # print("test_on_start:{}".format(message))
        pass

    def test_on_error(self, message, *args):
        print("on_error args=>{}".format(args))
        pass

    def test_on_close(self, *args):
        self.aliyun_service_ok = False
        pass

    def test_on_result_chg(self, message, *args):
        # print("test_on_chg:{}".format(message))
        message = json.loads(message)
        self.parsed_text = message['payload']['result']
        self.event_on_result_chg.set()

    def test_on_completed(self, message, *args):
        # print("on_completed:args=>{} message=>{}".format(args, message))
        pass


    def audio_convertion_thread(self, uuid):
        # capture audio in an asynchronous thread
        import nls # pip install git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
        import tempfile
        from scipy import io
        from toolbox import get_conf
        from .audio_io import change_sample_rate
        from .audio_io import RealtimeAudioDistribution
        NEW_SAMPLERATE = 16000
        rad = RealtimeAudioDistribution()
        rad.clean_up()
        temp_folder = tempfile.gettempdir()
        TOKEN, APPKEY = get_conf('ALIYUN_TOKEN', 'ALIYUN_APPKEY')
        if len(TOKEN) == 0:
            TOKEN = self.get_token()
        self.aliyun_service_ok = True
        URL="wss://nls-gateway.aliyuncs.com/ws/v1"
        sr = nls.NlsSpeechTranscriber(
                    url=URL,
                    token=TOKEN,
                    appkey=APPKEY,
                    on_sentence_begin=self.test_on_sentence_begin,
                    on_sentence_end=self.test_on_sentence_end,
                    on_start=self.test_on_start,
                    on_result_changed=self.test_on_result_chg,
                    on_completed=self.test_on_completed,
                    on_error=self.test_on_error,
                    on_close=self.test_on_close,
                    callback_args=[uuid.hex]
                )

        r = sr.start(aformat="pcm",
                enable_intermediate_result=True,
                enable_punctuation_prediction=True,
                enable_inverse_text_normalization=True)

        while not self.stop:
            # time.sleep(self.capture_interval)
            audio = rad.read(uuid.hex)
            if audio is not None:
                # convert to pcm file
                temp_file = f'{temp_folder}/{uuid.hex}.pcm' #
                dsdata = change_sample_rate(audio, rad.rate, NEW_SAMPLERATE) # 48000 --> 16000
                io.wavfile.write(temp_file, NEW_SAMPLERATE, dsdata)
                # read pcm binary
                with open(temp_file, "rb") as f: data = f.read()
                # print('audio len:', len(audio), '\t ds len:', len(dsdata), '\t need n send:', len(data)//640)
                slices = zip(*(iter(data),) * 640) # group the bytes 640 at a time
                for i in slices: sr.send_audio(bytes(i))
            else:
                time.sleep(0.1)

        if not self.aliyun_service_ok:
            self.stop = True
            self.stop_msg = 'Aliyun音频服务异常,请检查ALIYUN_TOKEN和ALIYUN_APPKEY是否过期。'
        r = sr.stop()

    def get_token(self):
        from toolbox import get_conf
        import json
        from aliyunsdkcore.request import CommonRequest
        from aliyunsdkcore.client import AcsClient
        AccessKey_ID, AccessKey_secret = get_conf('ALIYUN_ACCESSKEY', 'ALIYUN_SECRET')

        # create an AcsClient instance
        client = AcsClient(
            AccessKey_ID,
            AccessKey_secret,
            "cn-shanghai"
        )

        # create the request and set its parameters
        request = CommonRequest()
        request.set_method('POST')
        request.set_domain('nls-meta.cn-shanghai.aliyuncs.com')
        request.set_version('2019-02-28')
        request.set_action_name('CreateToken')

        try:
            response = client.do_action_with_exception(request)
            print(response)
            jss = json.loads(response)
            if 'Token' in jss and 'Id' in jss['Token']:
                token = jss['Token']['Id']
                expireTime = jss['Token']['ExpireTime']
                print("token = " + token)
                print("expireTime = " + str(expireTime))
        except Exception as e:
            print(e)

        return token
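The 640-byte grouping in audio_convertion_thread is easy to sanity-check: at 16 kHz, 16-bit mono PCM, 640 bytes is exactly 20 ms of audio, a common frame size for streaming ASR. A quick back-of-the-envelope check (plain Python, no project imports):

```python
SAMPLE_RATE = 16000      # Hz, after resampling from 48 kHz
BYTES_PER_SAMPLE = 2     # 16-bit PCM, mono
FRAME_BYTES = 640

frame_ms = FRAME_BYTES / (SAMPLE_RATE * BYTES_PER_SAMPLE) * 1000
print(frame_ms)          # 20.0 -> each sr.send_audio() call carries 20 ms of audio
```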
51 crazy_functions/live_audio/audio_io.py Normal file
@@ -0,0 +1,51 @@
import numpy as np
from scipy import interpolate


def Singleton(cls):
    _instance = {}

    def _singleton(*args, **kargs):
        if cls not in _instance:
            _instance[cls] = cls(*args, **kargs)
        return _instance[cls]

    return _singleton


@Singleton
class RealtimeAudioDistribution():
    def __init__(self) -> None:
        self.data = {}
        self.max_len = 1024*1024
        self.rate = 48000 # read-only: samples per second

    def clean_up(self):
        self.data = {}

    def feed(self, uuid, audio):
        self.rate, audio_ = audio
        # print('feed', len(audio_), audio_[-25:])
        if uuid not in self.data:
            self.data[uuid] = audio_
        else:
            new_arr = np.concatenate((self.data[uuid], audio_))
            if len(new_arr) > self.max_len: new_arr = new_arr[-self.max_len:]
            self.data[uuid] = new_arr

    def read(self, uuid):
        if uuid in self.data:
            res = self.data.pop(uuid)
            print('\r read-', len(res), '-', max(res), end='', flush=True)
        else:
            res = None
        return res


def change_sample_rate(audio, old_sr, new_sr):
    duration = audio.shape[0] / old_sr

    time_old = np.linspace(0, duration, audio.shape[0])
    time_new = np.linspace(0, duration, int(audio.shape[0] * new_sr / old_sr))

    interpolator = interpolate.interp1d(time_old, audio.T)
    new_audio = interpolator(time_new).T
    return new_audio.astype(np.int16)
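change_sample_rate does a simple linear-interpolation resample. A minimal sketch with a synthetic 48 kHz tone (all values below are invented for the example):

```python
import numpy as np
from crazy_functions.live_audio.audio_io import change_sample_rate

t = np.linspace(0, 1.0, 48000, endpoint=False)                 # 1 second at 48 kHz
tone = (np.sin(2 * np.pi * 440 * t) * 10000).astype(np.int16)  # 440 Hz test tone
downsampled = change_sample_rate(tone, old_sr=48000, new_sr=16000)
print(len(downsampled))                                        # ~16000 samples, dtype int16
```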
25 crazy_functions/pdf_fns/parse_pdf.py Normal file
@@ -0,0 +1,25 @@
import requests
import random
from functools import lru_cache
class GROBID_OFFLINE_EXCEPTION(Exception): pass

def get_avail_grobid_url():
    from toolbox import get_conf
    GROBID_URLS, = get_conf('GROBID_URLS')
    if len(GROBID_URLS) == 0: return None
    try:
        _grobid_url = random.choice(GROBID_URLS) # random choice as naive load balancing
        if _grobid_url.endswith('/'): _grobid_url = _grobid_url.rstrip('/')
        res = requests.get(_grobid_url+'/api/isalive')
        if res.text=='true': return _grobid_url
        else: return None
    except:
        return None

@lru_cache(maxsize=32)
def parse_pdf(pdf_path, grobid_url):
    import scipdf # pip install scipdf_parser
    if grobid_url.endswith('/'): grobid_url = grobid_url.rstrip('/')
    article_dict = scipdf.parse_pdf_to_dict(pdf_path, grobid_url=grobid_url)
    return article_dict
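get_avail_grobid_url and parse_pdf are designed to be chained: pick a live GROBID endpoint from the configured pool, then hand it to scipdf. A sketch, assuming GROBID_URLS is configured and a local paper.pdf exists (both are placeholders, and the dict keys shown are whatever scipdf_parser returns):

```python
from crazy_functions.pdf_fns.parse_pdf import get_avail_grobid_url, parse_pdf, GROBID_OFFLINE_EXCEPTION

grobid_url = get_avail_grobid_url()
if grobid_url is None:
    raise GROBID_OFFLINE_EXCEPTION("no live GROBID endpoint in GROBID_URLS")
article = parse_pdf('paper.pdf', grobid_url)   # results cached by lru_cache on (path, url)
print(article.keys())                          # scipdf returns a plain dict (title, abstract, sections, ...)
```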
@@ -1,87 +0,0 @@
|
||||
#include "libipc/buffer.h"
|
||||
#include "libipc/utility/pimpl.h"
|
||||
|
||||
#include <cstring>
|
||||
|
||||
namespace ipc {
|
||||
|
||||
bool operator==(buffer const & b1, buffer const & b2) {
|
||||
return (b1.size() == b2.size()) && (std::memcmp(b1.data(), b2.data(), b1.size()) == 0);
|
||||
}
|
||||
|
||||
bool operator!=(buffer const & b1, buffer const & b2) {
|
||||
return !(b1 == b2);
|
||||
}
|
||||
|
||||
class buffer::buffer_ : public pimpl<buffer_> {
|
||||
public:
|
||||
void* p_;
|
||||
std::size_t s_;
|
||||
void* a_;
|
||||
buffer::destructor_t d_;
|
||||
|
||||
buffer_(void* p, std::size_t s, buffer::destructor_t d, void* a)
|
||||
: p_(p), s_(s), a_(a), d_(d) {
|
||||
}
|
||||
|
||||
~buffer_() {
|
||||
if (d_ == nullptr) return;
|
||||
d_((a_ == nullptr) ? p_ : a_, s_);
|
||||
}
|
||||
};
|
||||
|
||||
buffer::buffer()
|
||||
: buffer(nullptr, 0, nullptr, nullptr) {
|
||||
}
|
||||
|
||||
buffer::buffer(void* p, std::size_t s, destructor_t d)
|
||||
: p_(p_->make(p, s, d, nullptr)) {
|
||||
}
|
||||
|
||||
buffer::buffer(void* p, std::size_t s, destructor_t d, void* additional)
|
||||
: p_(p_->make(p, s, d, additional)) {
|
||||
}
|
||||
|
||||
buffer::buffer(void* p, std::size_t s)
|
||||
: buffer(p, s, nullptr) {
|
||||
}
|
||||
|
||||
buffer::buffer(char const & c)
|
||||
: buffer(const_cast<char*>(&c), 1) {
|
||||
}
|
||||
|
||||
buffer::buffer(buffer&& rhs)
|
||||
: buffer() {
|
||||
swap(rhs);
|
||||
}
|
||||
|
||||
buffer::~buffer() {
|
||||
p_->clear();
|
||||
}
|
||||
|
||||
void buffer::swap(buffer& rhs) {
|
||||
std::swap(p_, rhs.p_);
|
||||
}
|
||||
|
||||
buffer& buffer::operator=(buffer rhs) {
|
||||
swap(rhs);
|
||||
return *this;
|
||||
}
|
||||
|
||||
bool buffer::empty() const noexcept {
|
||||
return (impl(p_)->p_ == nullptr) || (impl(p_)->s_ == 0);
|
||||
}
|
||||
|
||||
void* buffer::data() noexcept {
|
||||
return impl(p_)->p_;
|
||||
}
|
||||
|
||||
void const * buffer::data() const noexcept {
|
||||
return impl(p_)->p_;
|
||||
}
|
||||
|
||||
std::size_t buffer::size() const noexcept {
|
||||
return impl(p_)->s_;
|
||||
}
|
||||
|
||||
} // namespace ipc
|
||||
@@ -1,701 +0,0 @@
|
||||
|
||||
#include <type_traits>
|
||||
#include <cstring>
|
||||
#include <algorithm>
|
||||
#include <utility> // std::pair, std::move, std::forward
|
||||
#include <atomic>
|
||||
#include <type_traits> // aligned_storage_t
|
||||
#include <string>
|
||||
#include <vector>
|
||||
#include <array>
|
||||
#include <cassert>
|
||||
|
||||
#include "libipc/ipc.h"
|
||||
#include "libipc/def.h"
|
||||
#include "libipc/shm.h"
|
||||
#include "libipc/pool_alloc.h"
|
||||
#include "libipc/queue.h"
|
||||
#include "libipc/policy.h"
|
||||
#include "libipc/rw_lock.h"
|
||||
#include "libipc/waiter.h"
|
||||
|
||||
#include "libipc/utility/log.h"
|
||||
#include "libipc/utility/id_pool.h"
|
||||
#include "libipc/utility/scope_guard.h"
|
||||
#include "libipc/utility/utility.h"
|
||||
|
||||
#include "libipc/memory/resource.h"
|
||||
#include "libipc/platform/detail.h"
|
||||
#include "libipc/circ/elem_array.h"
|
||||
|
||||
namespace {
|
||||
|
||||
using msg_id_t = std::uint32_t;
|
||||
using acc_t = std::atomic<msg_id_t>;
|
||||
|
||||
template <std::size_t DataSize, std::size_t AlignSize>
|
||||
struct msg_t;
|
||||
|
||||
template <std::size_t AlignSize>
|
||||
struct msg_t<0, AlignSize> {
|
||||
msg_id_t cc_id_;
|
||||
msg_id_t id_;
|
||||
std::int32_t remain_;
|
||||
bool storage_;
|
||||
};
|
||||
|
||||
template <std::size_t DataSize, std::size_t AlignSize>
|
||||
struct msg_t : msg_t<0, AlignSize> {
|
||||
std::aligned_storage_t<DataSize, AlignSize> data_ {};
|
||||
|
||||
msg_t() = default;
|
||||
msg_t(msg_id_t cc_id, msg_id_t id, std::int32_t remain, void const * data, std::size_t size)
|
||||
: msg_t<0, AlignSize> {cc_id, id, remain, (data == nullptr) || (size == 0)} {
|
||||
if (this->storage_) {
|
||||
if (data != nullptr) {
|
||||
// copy storage-id
|
||||
*reinterpret_cast<ipc::storage_id_t*>(&data_) =
|
||||
*static_cast<ipc::storage_id_t const *>(data);
|
||||
}
|
||||
}
|
||||
else std::memcpy(&data_, data, size);
|
||||
}
|
||||
};
|
||||
|
||||
template <typename T>
|
||||
ipc::buff_t make_cache(T& data, std::size_t size) {
|
||||
auto ptr = ipc::mem::alloc(size);
|
||||
std::memcpy(ptr, &data, (ipc::detail::min)(sizeof(data), size));
|
||||
return { ptr, size, ipc::mem::free };
|
||||
}
|
||||
|
||||
struct cache_t {
|
||||
std::size_t fill_;
|
||||
ipc::buff_t buff_;
|
||||
|
||||
cache_t(std::size_t f, ipc::buff_t && b)
|
||||
: fill_(f), buff_(std::move(b))
|
||||
{}
|
||||
|
||||
void append(void const * data, std::size_t size) {
|
||||
if (fill_ >= buff_.size() || data == nullptr || size == 0) return;
|
||||
auto new_fill = (ipc::detail::min)(fill_ + size, buff_.size());
|
||||
std::memcpy(static_cast<ipc::byte_t*>(buff_.data()) + fill_, data, new_fill - fill_);
|
||||
fill_ = new_fill;
|
||||
}
|
||||
};
|
||||
|
||||
auto cc_acc() {
|
||||
static ipc::shm::handle acc_h("__CA_CONN__", sizeof(acc_t));
|
||||
return static_cast<acc_t*>(acc_h.get());
|
||||
}
|
||||
|
||||
IPC_CONSTEXPR_ std::size_t align_chunk_size(std::size_t size) noexcept {
|
||||
return (((size - 1) / ipc::large_msg_align) + 1) * ipc::large_msg_align;
|
||||
}
|
||||
|
||||
IPC_CONSTEXPR_ std::size_t calc_chunk_size(std::size_t size) noexcept {
|
||||
return ipc::make_align(alignof(std::max_align_t), align_chunk_size(
|
||||
ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic<ipc::circ::cc_t>)) + size));
|
||||
}
|
||||
|
||||
struct chunk_t {
|
||||
std::atomic<ipc::circ::cc_t> &conns() noexcept {
|
||||
return *reinterpret_cast<std::atomic<ipc::circ::cc_t> *>(this);
|
||||
}
|
||||
|
||||
void *data() noexcept {
|
||||
return reinterpret_cast<ipc::byte_t *>(this)
|
||||
+ ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic<ipc::circ::cc_t>));
|
||||
}
|
||||
};
|
||||
|
||||
struct chunk_info_t {
|
||||
ipc::id_pool<> pool_;
|
||||
ipc::spin_lock lock_;
|
||||
|
||||
IPC_CONSTEXPR_ static std::size_t chunks_mem_size(std::size_t chunk_size) noexcept {
|
||||
return ipc::id_pool<>::max_count * chunk_size;
|
||||
}
|
||||
|
||||
ipc::byte_t *chunks_mem() noexcept {
|
||||
return reinterpret_cast<ipc::byte_t *>(this + 1);
|
||||
}
|
||||
|
||||
chunk_t *at(std::size_t chunk_size, ipc::storage_id_t id) noexcept {
|
||||
if (id < 0) return nullptr;
|
||||
return reinterpret_cast<chunk_t *>(chunks_mem() + (chunk_size * id));
|
||||
}
|
||||
};
|
||||
|
||||
auto& chunk_storages() {
|
||||
class chunk_handle_t {
|
||||
ipc::shm::handle handle_;
|
||||
|
||||
public:
|
||||
chunk_info_t *get_info(std::size_t chunk_size) {
|
||||
if (!handle_.valid() &&
|
||||
!handle_.acquire( ("__CHUNK_INFO__" + ipc::to_string(chunk_size)).c_str(),
|
||||
sizeof(chunk_info_t) + chunk_info_t::chunks_mem_size(chunk_size) )) {
|
||||
ipc::error("[chunk_storages] chunk_shm.id_info_.acquire failed: chunk_size = %zd\n", chunk_size);
|
||||
return nullptr;
|
||||
}
|
||||
auto info = static_cast<chunk_info_t*>(handle_.get());
|
||||
if (info == nullptr) {
|
||||
ipc::error("[chunk_storages] chunk_shm.id_info_.get failed: chunk_size = %zd\n", chunk_size);
|
||||
return nullptr;
|
||||
}
|
||||
return info;
|
||||
}
|
||||
};
|
||||
static ipc::map<std::size_t, chunk_handle_t> chunk_hs;
|
||||
return chunk_hs;
|
||||
}
|
||||
|
||||
chunk_info_t *chunk_storage_info(std::size_t chunk_size) {
|
||||
auto &storages = chunk_storages();
|
||||
std::decay_t<decltype(storages)>::iterator it;
|
||||
{
|
||||
static ipc::rw_lock lock;
|
||||
IPC_UNUSED_ std::shared_lock<ipc::rw_lock> guard {lock};
|
||||
if ((it = storages.find(chunk_size)) == storages.end()) {
|
||||
using chunk_handle_t = std::decay_t<decltype(storages)>::value_type::second_type;
|
||||
guard.unlock();
|
||||
IPC_UNUSED_ std::lock_guard<ipc::rw_lock> guard {lock};
|
||||
it = storages.emplace(chunk_size, chunk_handle_t{}).first;
|
||||
}
|
||||
}
|
||||
return it->second.get_info(chunk_size);
|
||||
}
|
||||
|
||||
std::pair<ipc::storage_id_t, void*> acquire_storage(std::size_t size, ipc::circ::cc_t conns) {
|
||||
std::size_t chunk_size = calc_chunk_size(size);
|
||||
auto info = chunk_storage_info(chunk_size);
|
||||
if (info == nullptr) return {};
|
||||
|
||||
info->lock_.lock();
|
||||
info->pool_.prepare();
|
||||
// got an unique id
|
||||
auto id = info->pool_.acquire();
|
||||
info->lock_.unlock();
|
||||
|
||||
auto chunk = info->at(chunk_size, id);
|
||||
if (chunk == nullptr) return {};
|
||||
chunk->conns().store(conns, std::memory_order_relaxed);
|
||||
return { id, chunk->data() };
|
||||
}
|
||||
|
||||
void *find_storage(ipc::storage_id_t id, std::size_t size) {
|
||||
if (id < 0) {
|
||||
ipc::error("[find_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size);
|
||||
return nullptr;
|
||||
}
|
||||
std::size_t chunk_size = calc_chunk_size(size);
|
||||
auto info = chunk_storage_info(chunk_size);
|
||||
if (info == nullptr) return nullptr;
|
||||
return info->at(chunk_size, id)->data();
|
||||
}
|
||||
|
||||
void release_storage(ipc::storage_id_t id, std::size_t size) {
|
||||
if (id < 0) {
|
||||
ipc::error("[release_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size);
|
||||
return;
|
||||
}
|
||||
std::size_t chunk_size = calc_chunk_size(size);
|
||||
auto info = chunk_storage_info(chunk_size);
|
||||
if (info == nullptr) return;
|
||||
info->lock_.lock();
|
||||
info->pool_.release(id);
|
||||
info->lock_.unlock();
|
||||
}
|
||||
|
||||
template <ipc::relat Rp, ipc::relat Rc>
|
||||
bool sub_rc(ipc::wr<Rp, Rc, ipc::trans::unicast>,
|
||||
std::atomic<ipc::circ::cc_t> &/*conns*/, ipc::circ::cc_t /*curr_conns*/, ipc::circ::cc_t /*conn_id*/) noexcept {
|
||||
return true;
|
||||
}
|
||||
|
||||
template <ipc::relat Rp, ipc::relat Rc>
|
||||
bool sub_rc(ipc::wr<Rp, Rc, ipc::trans::broadcast>,
|
||||
std::atomic<ipc::circ::cc_t> &conns, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) noexcept {
|
||||
auto last_conns = curr_conns & ~conn_id;
|
||||
for (unsigned k = 0;;) {
|
||||
auto chunk_conns = conns.load(std::memory_order_acquire);
|
||||
if (conns.compare_exchange_weak(chunk_conns, chunk_conns & last_conns, std::memory_order_release)) {
|
||||
return (chunk_conns & last_conns) == 0;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
void recycle_storage(ipc::storage_id_t id, std::size_t size, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) {
|
||||
if (id < 0) {
|
||||
ipc::error("[recycle_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size);
|
||||
return;
|
||||
}
|
||||
std::size_t chunk_size = calc_chunk_size(size);
|
||||
auto info = chunk_storage_info(chunk_size);
|
||||
if (info == nullptr) return;
|
||||
|
||||
auto chunk = info->at(chunk_size, id);
|
||||
if (chunk == nullptr) return;
|
||||
|
||||
if (!sub_rc(Flag{}, chunk->conns(), curr_conns, conn_id)) {
|
||||
return;
|
||||
}
|
||||
info->lock_.lock();
|
||||
info->pool_.release(id);
|
||||
info->lock_.unlock();
|
||||
}
|
||||
|
||||
template <typename MsgT>
|
||||
bool clear_message(void* p) {
|
||||
auto msg = static_cast<MsgT*>(p);
|
||||
if (msg->storage_) {
|
||||
std::int32_t r_size = static_cast<std::int32_t>(ipc::data_length) + msg->remain_;
|
||||
if (r_size <= 0) {
|
||||
ipc::error("[clear_message] invalid msg size: %d\n", (int)r_size);
|
||||
return true;
|
||||
}
|
||||
release_storage(
|
||||
*reinterpret_cast<ipc::storage_id_t*>(&msg->data_),
|
||||
static_cast<std::size_t>(r_size));
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
struct conn_info_head {
|
||||
|
||||
ipc::string name_;
|
||||
msg_id_t cc_id_; // connection-info id
|
||||
ipc::detail::waiter cc_waiter_, wt_waiter_, rd_waiter_;
|
||||
ipc::shm::handle acc_h_;
|
||||
|
||||
conn_info_head(char const * name)
|
||||
: name_ {name}
|
||||
, cc_id_ {(cc_acc() == nullptr) ? 0 : cc_acc()->fetch_add(1, std::memory_order_relaxed)}
|
||||
, cc_waiter_{("__CC_CONN__" + name_).c_str()}
|
||||
, wt_waiter_{("__WT_CONN__" + name_).c_str()}
|
||||
, rd_waiter_{("__RD_CONN__" + name_).c_str()}
|
||||
, acc_h_ {("__AC_CONN__" + name_).c_str(), sizeof(acc_t)} {
|
||||
}
|
||||
|
||||
void quit_waiting() {
|
||||
cc_waiter_.quit_waiting();
|
||||
wt_waiter_.quit_waiting();
|
||||
rd_waiter_.quit_waiting();
|
||||
}
|
||||
|
||||
auto acc() {
|
||||
return static_cast<acc_t*>(acc_h_.get());
|
||||
}
|
||||
|
||||
auto& recv_cache() {
|
||||
thread_local ipc::unordered_map<msg_id_t, cache_t> tls;
|
||||
return tls;
|
||||
}
|
||||
};
|
||||
|
||||
template <typename W, typename F>
|
||||
bool wait_for(W& waiter, F&& pred, std::uint64_t tm) {
|
||||
if (tm == 0) return !pred();
|
||||
for (unsigned k = 0; pred();) {
|
||||
bool ret = true;
|
||||
ipc::sleep(k, [&k, &ret, &waiter, &pred, tm] {
|
||||
ret = waiter.wait_if(std::forward<F>(pred), tm);
|
||||
k = 0;
|
||||
});
|
||||
if (!ret) return false; // timeout or fail
|
||||
if (k == 0) break; // k has been reset
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
template <typename Policy,
|
||||
std::size_t DataSize = ipc::data_length,
|
||||
std::size_t AlignSize = (ipc::detail::min)(DataSize, alignof(std::max_align_t))>
|
||||
struct queue_generator {
|
||||
|
||||
using queue_t = ipc::queue<msg_t<DataSize, AlignSize>, Policy>;
|
||||
|
||||
struct conn_info_t : conn_info_head {
|
||||
queue_t que_;
|
||||
|
||||
conn_info_t(char const * name)
|
||||
: conn_info_head{name}
|
||||
, que_{("__QU_CONN__" +
|
||||
ipc::to_string(DataSize) + "__" +
|
||||
ipc::to_string(AlignSize) + "__" + name).c_str()} {
|
||||
}
|
||||
|
||||
void disconnect_receiver() {
|
||||
bool dis = que_.disconnect();
|
||||
this->quit_waiting();
|
||||
if (dis) {
|
||||
this->recv_cache().clear();
|
||||
}
|
||||
}
|
||||
};
|
||||
};
|
||||
|
||||
template <typename Policy>
|
||||
struct detail_impl {
|
||||
|
||||
using policy_t = Policy;
|
||||
using flag_t = typename policy_t::flag_t;
|
||||
using queue_t = typename queue_generator<policy_t>::queue_t;
|
||||
using conn_info_t = typename queue_generator<policy_t>::conn_info_t;
|
||||
|
||||
constexpr static conn_info_t* info_of(ipc::handle_t h) noexcept {
|
||||
return static_cast<conn_info_t*>(h);
|
||||
}
|
||||
|
||||
constexpr static queue_t* queue_of(ipc::handle_t h) noexcept {
|
||||
return (info_of(h) == nullptr) ? nullptr : &(info_of(h)->que_);
|
||||
}
|
||||
|
||||
/* API implementations */
|
||||
|
||||
static void disconnect(ipc::handle_t h) {
|
||||
auto que = queue_of(h);
|
||||
if (que == nullptr) {
|
||||
return;
|
||||
}
|
||||
que->shut_sending();
|
||||
assert(info_of(h) != nullptr);
|
||||
info_of(h)->disconnect_receiver();
|
||||
}
|
||||
|
||||
static bool reconnect(ipc::handle_t * ph, bool start_to_recv) {
|
||||
assert(ph != nullptr);
|
||||
assert(*ph != nullptr);
|
||||
auto que = queue_of(*ph);
|
||||
if (que == nullptr) {
|
||||
return false;
|
||||
}
|
||||
if (start_to_recv) {
|
||||
que->shut_sending();
|
||||
if (que->connect()) { // wouldn't connect twice
|
||||
info_of(*ph)->cc_waiter_.broadcast();
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
// start_to_recv == false
|
||||
if (que->connected()) {
|
||||
info_of(*ph)->disconnect_receiver();
|
||||
}
|
||||
return que->ready_sending();
|
||||
}
|
||||
|
||||
static bool connect(ipc::handle_t * ph, char const * name, bool start_to_recv) {
|
||||
assert(ph != nullptr);
|
||||
if (*ph == nullptr) {
|
||||
*ph = ipc::mem::alloc<conn_info_t>(name);
|
||||
}
|
||||
return reconnect(ph, start_to_recv);
|
||||
}
|
||||
|
||||
static void destroy(ipc::handle_t h) {
|
||||
disconnect(h);
|
||||
ipc::mem::free(info_of(h));
|
||||
}
|
||||
|
||||
static std::size_t recv_count(ipc::handle_t h) noexcept {
|
||||
auto que = queue_of(h);
|
||||
if (que == nullptr) {
|
||||
return ipc::invalid_value;
|
||||
}
|
||||
return que->conn_count();
|
||||
}
|
||||
|
||||
static bool wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) {
|
||||
auto que = queue_of(h);
|
||||
if (que == nullptr) {
|
||||
return false;
|
||||
}
|
||||
return wait_for(info_of(h)->cc_waiter_, [que, r_count] {
|
||||
return que->conn_count() < r_count;
|
||||
}, tm);
|
||||
}
|
||||
|
||||
template <typename F>
|
||||
static bool send(F&& gen_push, ipc::handle_t h, void const * data, std::size_t size) {
|
||||
if (data == nullptr || size == 0) {
|
||||
ipc::error("fail: send(%p, %zd)\n", data, size);
|
||||
return false;
|
||||
}
|
||||
auto que = queue_of(h);
|
||||
if (que == nullptr) {
|
||||
ipc::error("fail: send, queue_of(h) == nullptr\n");
|
||||
return false;
|
||||
}
|
||||
if (que->elems() == nullptr) {
|
||||
ipc::error("fail: send, queue_of(h)->elems() == nullptr\n");
|
||||
return false;
|
||||
}
|
||||
if (!que->ready_sending()) {
|
||||
ipc::error("fail: send, que->ready_sending() == false\n");
|
||||
return false;
|
||||
}
|
||||
ipc::circ::cc_t conns = que->elems()->connections(std::memory_order_relaxed);
|
||||
if (conns == 0) {
|
||||
ipc::error("fail: send, there is no receiver on this connection.\n");
|
||||
return false;
|
||||
}
|
||||
// calc a new message id
|
||||
auto acc = info_of(h)->acc();
|
||||
if (acc == nullptr) {
|
||||
ipc::error("fail: send, info_of(h)->acc() == nullptr\n");
|
||||
return false;
|
||||
}
|
||||
auto msg_id = acc->fetch_add(1, std::memory_order_relaxed);
|
||||
auto try_push = std::forward<F>(gen_push)(info_of(h), que, msg_id);
|
||||
if (size > ipc::large_msg_limit) {
|
||||
auto dat = acquire_storage(size, conns);
|
||||
void * buf = dat.second;
|
||||
if (buf != nullptr) {
|
||||
std::memcpy(buf, data, size);
|
||||
return try_push(static_cast<std::int32_t>(size) -
|
||||
static_cast<std::int32_t>(ipc::data_length), &(dat.first), 0);
|
||||
}
|
||||
// try using message fragment
|
||||
//ipc::log("fail: shm::handle for big message. msg_id: %zd, size: %zd\n", msg_id, size);
|
||||
}
|
||||
// push message fragment
|
||||
std::int32_t offset = 0;
|
||||
for (std::int32_t i = 0; i < static_cast<std::int32_t>(size / ipc::data_length); ++i, offset += ipc::data_length) {
|
||||
if (!try_push(static_cast<std::int32_t>(size) - offset - static_cast<std::int32_t>(ipc::data_length),
|
||||
static_cast<ipc::byte_t const *>(data) + offset, ipc::data_length)) {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
// if remain > 0, this is the last message fragment
|
||||
std::int32_t remain = static_cast<std::int32_t>(size) - offset;
|
||||
if (remain > 0) {
|
||||
if (!try_push(remain - static_cast<std::int32_t>(ipc::data_length),
|
||||
static_cast<ipc::byte_t const *>(data) + offset,
|
||||
static_cast<std::size_t>(remain))) {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
static bool send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
|
||||
return send([tm](auto info, auto que, auto msg_id) {
|
||||
return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) {
|
||||
if (!wait_for(info->wt_waiter_, [&] {
|
||||
return !que->push(
|
||||
[](void*) { return true; },
|
||||
info->cc_id_, msg_id, remain, data, size);
|
||||
}, tm)) {
|
||||
ipc::log("force_push: msg_id = %zd, remain = %d, size = %zd\n", msg_id, remain, size);
|
||||
if (!que->force_push(
|
||||
clear_message<typename queue_t::value_t>,
|
||||
info->cc_id_, msg_id, remain, data, size)) {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
info->rd_waiter_.broadcast();
|
||||
return true;
|
||||
};
|
||||
}, h, data, size);
|
||||
}
|
||||
|
||||
static bool try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
|
||||
return send([tm](auto info, auto que, auto msg_id) {
|
||||
return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) {
|
||||
if (!wait_for(info->wt_waiter_, [&] {
|
||||
return !que->push(
|
||||
[](void*) { return true; },
|
||||
info->cc_id_, msg_id, remain, data, size);
|
||||
}, tm)) {
|
||||
return false;
|
||||
}
|
||||
info->rd_waiter_.broadcast();
|
||||
return true;
|
||||
};
|
||||
}, h, data, size);
|
||||
}
|
||||
|
||||
static ipc::buff_t recv(ipc::handle_t h, std::uint64_t tm) {
|
||||
auto que = queue_of(h);
|
||||
if (que == nullptr) {
|
||||
ipc::error("fail: recv, queue_of(h) == nullptr\n");
|
||||
return {};
|
||||
}
|
||||
if (!que->connected()) {
|
||||
// hasn't connected yet, just return.
|
||||
return {};
|
||||
}
|
||||
auto& rc = info_of(h)->recv_cache();
|
||||
for (;;) {
|
||||
// pop a new message
|
||||
typename queue_t::value_t msg;
|
||||
if (!wait_for(info_of(h)->rd_waiter_, [que, &msg] {
|
||||
return !que->pop(msg);
|
||||
}, tm)) {
|
||||
// pop failed, just return.
|
||||
return {};
|
||||
}
|
||||
info_of(h)->wt_waiter_.broadcast();
|
||||
if ((info_of(h)->acc() != nullptr) && (msg.cc_id_ == info_of(h)->cc_id_)) {
|
||||
continue; // ignore message to self
|
||||
}
|
||||
// msg.remain_ may minus & abs(msg.remain_) < data_length
|
||||
std::int32_t r_size = static_cast<std::int32_t>(ipc::data_length) + msg.remain_;
|
||||
if (r_size <= 0) {
|
||||
ipc::error("fail: recv, r_size = %d\n", (int)r_size);
|
||||
return {};
|
||||
}
|
||||
std::size_t msg_size = static_cast<std::size_t>(r_size);
|
||||
// large message
|
||||
if (msg.storage_) {
|
||||
ipc::storage_id_t buf_id = *reinterpret_cast<ipc::storage_id_t*>(&msg.data_);
|
||||
void* buf = find_storage(buf_id, msg_size);
|
||||
if (buf != nullptr) {
|
||||
struct recycle_t {
|
||||
ipc::storage_id_t storage_id;
|
||||
ipc::circ::cc_t curr_conns;
|
||||
ipc::circ::cc_t conn_id;
|
||||
} *r_info = ipc::mem::alloc<recycle_t>(recycle_t{
|
||||
buf_id, que->elems()->connections(std::memory_order_relaxed), que->connected_id()
|
||||
});
|
||||
if (r_info == nullptr) {
|
||||
ipc::log("fail: ipc::mem::alloc<recycle_t>.\n");
|
||||
return ipc::buff_t{buf, msg_size}; // no recycle
|
||||
} else {
|
||||
return ipc::buff_t{buf, msg_size, [](void* p_info, std::size_t size) {
|
||||
auto r_info = static_cast<recycle_t *>(p_info);
|
||||
IPC_UNUSED_ auto finally = ipc::guard([r_info] {
|
||||
ipc::mem::free(r_info);
|
||||
});
|
||||
recycle_storage<flag_t>(r_info->storage_id, size, r_info->curr_conns, r_info->conn_id);
|
||||
}, r_info};
|
||||
}
|
||||
} else {
|
||||
ipc::log("fail: shm::handle for large message. msg_id: %zd, buf_id: %zd, size: %zd\n", msg.id_, buf_id, msg_size);
|
||||
continue;
|
||||
}
|
||||
}
|
||||
// find cache with msg.id_
|
||||
auto cac_it = rc.find(msg.id_);
|
||||
if (cac_it == rc.end()) {
|
||||
if (msg_size <= ipc::data_length) {
|
||||
return make_cache(msg.data_, msg_size);
|
||||
}
|
||||
// gc
|
||||
if (rc.size() > 1024) {
|
||||
std::vector<msg_id_t> need_del;
|
||||
for (auto const & pair : rc) {
|
||||
auto cmp = std::minmax(msg.id_, pair.first);
|
||||
if (cmp.second - cmp.first > 8192) {
|
||||
need_del.push_back(pair.first);
|
||||
}
|
||||
}
|
||||
for (auto id : need_del) rc.erase(id);
|
||||
}
|
||||
// cache the first message fragment
|
||||
rc.emplace(msg.id_, cache_t { ipc::data_length, make_cache(msg.data_, msg_size) });
|
||||
}
|
||||
// has cached before this message
|
||||
else {
|
||||
auto& cac = cac_it->second;
|
||||
// this is the last message fragment
|
||||
if (msg.remain_ <= 0) {
|
||||
cac.append(&(msg.data_), msg_size);
|
||||
// finish this message, erase it from cache
|
||||
auto buff = std::move(cac.buff_);
|
||||
rc.erase(cac_it);
|
||||
return buff;
|
||||
}
|
||||
// there are remain datas after this message
|
||||
cac.append(&(msg.data_), ipc::data_length);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
static ipc::buff_t try_recv(ipc::handle_t h) {
|
||||
return recv(h, 0);
|
||||
}
|
||||
|
||||
}; // detail_impl<Policy>
|
||||
|
||||
template <typename Flag>
|
||||
using policy_t = ipc::policy::choose<ipc::circ::elem_array, Flag>;
|
||||
|
||||
} // internal-linkage
|
||||
|
||||
namespace ipc {
|
||||
|
||||
template <typename Flag>
|
||||
ipc::handle_t chan_impl<Flag>::inited() {
|
||||
ipc::detail::waiter::init();
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
bool chan_impl<Flag>::connect(ipc::handle_t * ph, char const * name, unsigned mode) {
|
||||
return detail_impl<policy_t<Flag>>::connect(ph, name, mode & receiver);
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
bool chan_impl<Flag>::reconnect(ipc::handle_t * ph, unsigned mode) {
|
||||
return detail_impl<policy_t<Flag>>::reconnect(ph, mode & receiver);
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
void chan_impl<Flag>::disconnect(ipc::handle_t h) {
|
||||
detail_impl<policy_t<Flag>>::disconnect(h);
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
void chan_impl<Flag>::destroy(ipc::handle_t h) {
|
||||
detail_impl<policy_t<Flag>>::destroy(h);
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
char const * chan_impl<Flag>::name(ipc::handle_t h) {
|
||||
auto info = detail_impl<policy_t<Flag>>::info_of(h);
|
||||
return (info == nullptr) ? nullptr : info->name_.c_str();
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
std::size_t chan_impl<Flag>::recv_count(ipc::handle_t h) {
|
||||
return detail_impl<policy_t<Flag>>::recv_count(h);
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
bool chan_impl<Flag>::wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) {
|
||||
return detail_impl<policy_t<Flag>>::wait_for_recv(h, r_count, tm);
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
bool chan_impl<Flag>::send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
|
||||
return detail_impl<policy_t<Flag>>::send(h, data, size, tm);
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
buff_t chan_impl<Flag>::recv(ipc::handle_t h, std::uint64_t tm) {
|
||||
return detail_impl<policy_t<Flag>>::recv(h, tm);
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
bool chan_impl<Flag>::try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
|
||||
return detail_impl<policy_t<Flag>>::try_send(h, data, size, tm);
|
||||
}
|
||||
|
||||
template <typename Flag>
|
||||
buff_t chan_impl<Flag>::try_recv(ipc::handle_t h) {
|
||||
return detail_impl<policy_t<Flag>>::try_recv(h);
|
||||
}
|
||||
|
||||
template struct chan_impl<ipc::wr<relat::single, relat::single, trans::unicast >>;
|
||||
// template struct chan_impl<ipc::wr<relat::single, relat::multi , trans::unicast >>; // TBD
|
||||
// template struct chan_impl<ipc::wr<relat::multi , relat::multi , trans::unicast >>; // TBD
|
||||
template struct chan_impl<ipc::wr<relat::single, relat::multi , trans::broadcast>>;
|
||||
template struct chan_impl<ipc::wr<relat::multi , relat::multi , trans::broadcast>>;
|
||||
|
||||
} // namespace ipc
|
||||
@@ -1,25 +0,0 @@
#pragma once

#include <type_traits>

#include "libipc/def.h"
#include "libipc/prod_cons.h"

#include "libipc/circ/elem_array.h"

namespace ipc {
namespace policy {

template <template <typename, std::size_t...> class Elems, typename Flag>
struct choose;

template <typename Flag>
struct choose<circ::elem_array, Flag> {
    using flag_t = Flag;

    template <std::size_t DataSize, std::size_t AlignSize>
    using elems_t = circ::elem_array<ipc::prod_cons_impl<flag_t>, DataSize, AlignSize>;
};

} // namespace policy
} // namespace ipc
@@ -1,17 +0,0 @@
#include "libipc/pool_alloc.h"

#include "libipc/memory/resource.h"

namespace ipc {
namespace mem {

void* pool_alloc::alloc(std::size_t size) {
    return async_pool_alloc::alloc(size);
}

void pool_alloc::free(void* p, std::size_t size) {
    async_pool_alloc::free(p, size);
}

} // namespace mem
} // namespace ipc
@@ -1,433 +0,0 @@
|
||||
#pragma once
|
||||
|
||||
#include <atomic>
|
||||
#include <utility>
|
||||
#include <cstring>
|
||||
#include <type_traits>
|
||||
#include <cstdint>
|
||||
|
||||
#include "libipc/def.h"
|
||||
|
||||
#include "libipc/platform/detail.h"
|
||||
#include "libipc/circ/elem_def.h"
|
||||
#include "libipc/utility/log.h"
|
||||
#include "libipc/utility/utility.h"
|
||||
|
||||
namespace ipc {
|
||||
|
||||
////////////////////////////////////////////////////////////////
|
||||
/// producer-consumer implementation
|
||||
////////////////////////////////////////////////////////////////
|
||||
|
||||
template <typename Flag>
|
||||
struct prod_cons_impl;
|
||||
|
||||
template <>
|
||||
struct prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {
|
||||
|
||||
template <std::size_t DataSize, std::size_t AlignSize>
|
||||
struct elem_t {
|
||||
std::aligned_storage_t<DataSize, AlignSize> data_ {};
|
||||
};
|
||||
|
||||
alignas(cache_line_size) std::atomic<circ::u2_t> rd_; // read index
|
||||
alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index
|
||||
|
||||
constexpr circ::u2_t cursor() const noexcept {
|
||||
return 0;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool push(W* /*wrapper*/, F&& f, E* elems) {
|
||||
auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed));
|
||||
if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) {
|
||||
return false; // full
|
||||
}
|
||||
std::forward<F>(f)(&(elems[cur_wt].data_));
|
||||
wt_.fetch_add(1, std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* In single-single-unicast, 'force_push' means 'no reader' or 'the only one reader is dead'.
|
||||
* So we could just disconnect all connections of receiver, and return false.
|
||||
*/
|
||||
template <typename W, typename F, typename E>
|
||||
bool force_push(W* wrapper, F&&, E*) {
|
||||
wrapper->elems()->disconnect_receiver(~static_cast<circ::cc_t>(0u));
|
||||
return false;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename R, typename E>
|
||||
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
|
||||
auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed));
|
||||
if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) {
|
||||
return false; // empty
|
||||
}
|
||||
std::forward<F>(f)(&(elems[cur_rd].data_));
|
||||
std::forward<R>(out)(true);
|
||||
rd_.fetch_add(1, std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
};
|
||||
|
||||
template <>
|
||||
struct prod_cons_impl<wr<relat::single, relat::multi , trans::unicast>>
|
||||
: prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool force_push(W* wrapper, F&&, E*) {
|
||||
wrapper->elems()->disconnect_receiver(1);
|
||||
return false;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename R,
|
||||
template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
|
||||
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
|
||||
byte_t buff[DS];
|
||||
for (unsigned k = 0;;) {
|
||||
auto cur_rd = rd_.load(std::memory_order_relaxed);
|
||||
if (circ::index_of(cur_rd) ==
|
||||
circ::index_of(wt_.load(std::memory_order_acquire))) {
|
||||
return false; // empty
|
||||
}
|
||||
std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
|
||||
if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
|
||||
std::forward<F>(f)(buff);
|
||||
std::forward<R>(out)(true);
|
||||
return true;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
template <>
|
||||
struct prod_cons_impl<wr<relat::multi , relat::multi, trans::unicast>>
|
||||
: prod_cons_impl<wr<relat::single, relat::multi, trans::unicast>> {
|
||||
|
||||
using flag_t = std::uint64_t;
|
||||
|
||||
template <std::size_t DataSize, std::size_t AlignSize>
|
||||
struct elem_t {
|
||||
std::aligned_storage_t<DataSize, AlignSize> data_ {};
|
||||
std::atomic<flag_t> f_ct_ { 0 }; // commit flag
|
||||
};
|
||||
|
||||
alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool push(W* /*wrapper*/, F&& f, E* elems) {
|
||||
circ::u2_t cur_ct, nxt_ct;
|
||||
for (unsigned k = 0;;) {
|
||||
cur_ct = ct_.load(std::memory_order_relaxed);
|
||||
if (circ::index_of(nxt_ct = cur_ct + 1) ==
|
||||
circ::index_of(rd_.load(std::memory_order_acquire))) {
|
||||
return false; // full
|
||||
}
|
||||
if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) {
|
||||
break;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
auto* el = elems + circ::index_of(cur_ct);
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
// set flag & try update wt
|
||||
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
|
||||
while (1) {
|
||||
auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
|
||||
if (cur_ct != wt_.load(std::memory_order_relaxed)) {
|
||||
return true;
|
||||
}
|
||||
if ((~cac_ct) != cur_ct) {
|
||||
return true;
|
||||
}
|
||||
if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) {
|
||||
return true;
|
||||
}
|
||||
wt_.store(nxt_ct, std::memory_order_release);
|
||||
cur_ct = nxt_ct;
|
||||
nxt_ct = cur_ct + 1;
|
||||
el = elems + circ::index_of(cur_ct);
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool force_push(W* wrapper, F&&, E*) {
|
||||
wrapper->elems()->disconnect_receiver(1);
|
||||
return false;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename R,
|
||||
template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
|
||||
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
|
||||
byte_t buff[DS];
|
||||
for (unsigned k = 0;;) {
|
||||
auto cur_rd = rd_.load(std::memory_order_relaxed);
|
||||
auto cur_wt = wt_.load(std::memory_order_acquire);
|
||||
auto id_rd = circ::index_of(cur_rd);
|
||||
auto id_wt = circ::index_of(cur_wt);
|
||||
if (id_rd == id_wt) {
|
||||
auto* el = elems + id_wt;
|
||||
auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
|
||||
if ((~cac_ct) != cur_wt) {
|
||||
return false; // empty
|
||||
}
|
||||
if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) {
|
||||
wt_.store(cur_wt + 1, std::memory_order_release);
|
||||
}
|
||||
k = 0;
|
||||
}
|
||||
else {
|
||||
std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
|
||||
if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
|
||||
std::forward<F>(f)(buff);
|
||||
std::forward<R>(out)(true);
|
||||
return true;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
template <>
|
||||
struct prod_cons_impl<wr<relat::single, relat::multi, trans::broadcast>> {
|
||||
|
||||
using rc_t = std::uint64_t;
|
||||
|
||||
enum : rc_t {
|
||||
ep_mask = 0x00000000ffffffffull,
|
||||
ep_incr = 0x0000000100000000ull
|
||||
};
|
||||
|
||||
template <std::size_t DataSize, std::size_t AlignSize>
|
||||
struct elem_t {
|
||||
std::aligned_storage_t<DataSize, AlignSize> data_ {};
|
||||
std::atomic<rc_t> rc_ { 0 }; // read-counter
|
||||
};
|
||||
|
||||
alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index
|
||||
alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer
|
||||
|
||||
circ::u2_t cursor() const noexcept {
|
||||
return wt_.load(std::memory_order_acquire);
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool push(W* wrapper, F&& f, E* elems) {
|
||||
E* el;
|
||||
for (unsigned k = 0;;) {
|
||||
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
|
||||
if (cc == 0) return false; // no reader
|
||||
el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
|
||||
// check all consumers have finished reading this element
|
||||
auto cur_rc = el->rc_.load(std::memory_order_acquire);
|
||||
circ::cc_t rem_cc = cur_rc & ep_mask;
|
||||
if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) {
|
||||
return false; // has not finished yet
|
||||
}
|
||||
// consider rem_cc to be 0 here
|
||||
if (el->rc_.compare_exchange_weak(
|
||||
cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
|
||||
break;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
wt_.fetch_add(1, std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool force_push(W* wrapper, F&& f, E* elems) {
|
||||
E* el;
|
||||
epoch_ += ep_incr;
|
||||
for (unsigned k = 0;;) {
|
||||
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
|
||||
if (cc == 0) return false; // no reader
|
||||
el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
|
||||
// check all consumers have finished reading this element
|
||||
auto cur_rc = el->rc_.load(std::memory_order_acquire);
|
||||
circ::cc_t rem_cc = cur_rc & ep_mask;
|
||||
if (cc & rem_cc) {
|
||||
ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
|
||||
cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
|
||||
if (cc == 0) return false; // no reader
|
||||
}
|
||||
// just compare & exchange
|
||||
if (el->rc_.compare_exchange_weak(
|
||||
cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
|
||||
break;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
wt_.fetch_add(1, std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename R, typename E>
|
||||
bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) {
|
||||
if (cur == cursor()) return false; // acquire
|
||||
auto* el = elems + circ::index_of(cur++);
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
for (unsigned k = 0;;) {
|
||||
auto cur_rc = el->rc_.load(std::memory_order_acquire);
|
||||
if ((cur_rc & ep_mask) == 0) {
|
||||
std::forward<R>(out)(true);
|
||||
return true;
|
||||
}
|
||||
auto nxt_rc = cur_rc & ~static_cast<rc_t>(wrapper->connected_id());
|
||||
if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
|
||||
std::forward<R>(out)((nxt_rc & ep_mask) == 0);
|
||||
return true;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
template <>
|
||||
struct prod_cons_impl<wr<relat::multi, relat::multi, trans::broadcast>> {
|
||||
|
||||
using rc_t = std::uint64_t;
|
||||
using flag_t = std::uint64_t;
|
||||
|
||||
enum : rc_t {
|
||||
rc_mask = 0x00000000ffffffffull,
|
||||
ep_mask = 0x00ffffffffffffffull,
|
||||
ep_incr = 0x0100000000000000ull,
|
||||
ic_mask = 0xff000000ffffffffull,
|
||||
ic_incr = 0x0000000100000000ull
|
||||
};
|
||||
|
||||
template <std::size_t DataSize, std::size_t AlignSize>
|
||||
struct elem_t {
|
||||
std::aligned_storage_t<DataSize, AlignSize> data_ {};
|
||||
std::atomic<rc_t > rc_ { 0 }; // read-counter
|
||||
std::atomic<flag_t> f_ct_ { 0 }; // commit flag
|
||||
};
|
||||
|
||||
alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index
|
||||
alignas(cache_line_size) std::atomic<rc_t> epoch_ { 0 };
|
||||
|
||||
circ::u2_t cursor() const noexcept {
|
||||
return ct_.load(std::memory_order_acquire);
|
||||
}
|
||||
|
||||
constexpr static rc_t inc_rc(rc_t rc) noexcept {
|
||||
return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask);
|
||||
}
|
||||
|
||||
constexpr static rc_t inc_mask(rc_t rc) noexcept {
|
||||
return inc_rc(rc) & ~rc_mask;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool push(W* wrapper, F&& f, E* elems) {
|
||||
E* el;
|
||||
circ::u2_t cur_ct;
|
||||
rc_t epoch = epoch_.load(std::memory_order_acquire);
|
||||
for (unsigned k = 0;;) {
|
||||
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
|
||||
if (cc == 0) return false; // no reader
|
||||
el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
|
||||
// check all consumers have finished reading this element
|
||||
auto cur_rc = el->rc_.load(std::memory_order_relaxed);
|
||||
circ::cc_t rem_cc = cur_rc & rc_mask;
|
||||
if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) {
|
||||
return false; // has not finished yet
|
||||
}
|
||||
else if (!rem_cc) {
|
||||
auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
|
||||
if ((cur_fl != cur_ct) && cur_fl) {
|
||||
return false; // full
|
||||
}
|
||||
}
|
||||
// consider rem_cc to be 0 here
|
||||
if (el->rc_.compare_exchange_weak(
|
||||
cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed) &&
|
||||
epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) {
|
||||
break;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
// only one thread/process would touch here at one time
|
||||
ct_.store(cur_ct + 1, std::memory_order_release);
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
// set flag & try update wt
|
||||
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool force_push(W* wrapper, F&& f, E* elems) {
|
||||
E* el;
|
||||
circ::u2_t cur_ct;
|
||||
rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
|
||||
for (unsigned k = 0;;) {
|
||||
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
|
||||
if (cc == 0) return false; // no reader
|
||||
el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
|
||||
// check all consumers have finished reading this element
|
||||
auto cur_rc = el->rc_.load(std::memory_order_acquire);
|
||||
circ::cc_t rem_cc = cur_rc & rc_mask;
|
||||
if (cc & rem_cc) {
|
||||
ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
|
||||
cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
|
||||
if (cc == 0) return false; // no reader
|
||||
}
|
||||
// just compare & exchange
|
||||
if (el->rc_.compare_exchange_weak(
|
||||
cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed)) {
|
||||
if (epoch == epoch_.load(std::memory_order_acquire)) {
|
||||
break;
|
||||
}
|
||||
else if (push(wrapper, std::forward<F>(f), elems)) {
|
||||
return true;
|
||||
}
|
||||
epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
// only one thread/process would touch here at one time
|
||||
ct_.store(cur_ct + 1, std::memory_order_release);
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
// set flag & try update wt
|
||||
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename R, typename E, std::size_t N>
|
||||
bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) {
|
||||
auto* el = elems + circ::index_of(cur);
|
||||
auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
|
||||
if (cur_fl != ~static_cast<flag_t>(cur)) {
|
||||
return false; // empty
|
||||
}
|
||||
++cur;
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
for (unsigned k = 0;;) {
|
||||
auto cur_rc = el->rc_.load(std::memory_order_acquire);
|
||||
if ((cur_rc & rc_mask) == 0) {
|
||||
std::forward<R>(out)(true);
|
||||
el->f_ct_.store(cur + N - 1, std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
auto nxt_rc = inc_rc(cur_rc) & ~static_cast<rc_t>(wrapper->connected_id());
|
||||
bool last_one = false;
|
||||
if ((last_one = (nxt_rc & rc_mask) == 0)) {
|
||||
el->f_ct_.store(cur + N - 1, std::memory_order_release);
|
||||
}
|
||||
if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
|
||||
std::forward<R>(out)(last_one);
|
||||
return true;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
} // namespace ipc
|
||||
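The header above packs four lock-free protocols behind one prod_cons_impl template. The simplest specialization, single producer / single consumer unicast, is the classic two-index ring: the writer publishes a slot by bumping wt_ with release semantics, the reader frees it by bumping rd_ the same way, and each side acquires the other index before touching the slot. A standalone sketch of just that protocol, using only std::atomic (not taken from the diff):

#include <array>
#include <atomic>
#include <cstddef>

// Minimal SPSC ring mirroring the rd_/wt_ discipline above.
// N must be a power of two so the index mask works; one slot stays unused.
template <typename T, std::size_t N>
class spsc_ring {
    static std::size_t index_of(std::size_t i) noexcept { return i & (N - 1); }
    std::array<T, N> slots_{};
    alignas(64) std::atomic<std::size_t> rd_{0};   // read index
    alignas(64) std::atomic<std::size_t> wt_{0};   // write index
public:
    bool push(T const& v) {
        auto wt = wt_.load(std::memory_order_relaxed);
        if (index_of(wt) == index_of(rd_.load(std::memory_order_acquire) - 1))
            return false;                             // full
        slots_[index_of(wt)] = v;                     // write the slot first...
        wt_.store(wt + 1, std::memory_order_release); // ...then publish it
        return true;
    }
    bool pop(T& out) {
        auto rd = rd_.load(std::memory_order_relaxed);
        if (index_of(rd) == index_of(wt_.load(std::memory_order_acquire)))
            return false;                             // empty
        out = slots_[index_of(rd)];                   // read the slot first...
        rd_.store(rd + 1, std::memory_order_release); // ...then release it
        return true;
    }
};

The multi-producer and broadcast specializations above layer a per-slot commit flag (f_ct_) and a reader counter / epoch word (rc_, epoch_) on top of this same skeleton.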
@@ -1,216 +0,0 @@
|
||||
#pragma once
|
||||
|
||||
#include <type_traits>
|
||||
#include <new>
|
||||
#include <utility> // [[since C++14]]: std::exchange
|
||||
#include <algorithm>
|
||||
#include <atomic>
|
||||
#include <tuple>
|
||||
#include <thread>
|
||||
#include <chrono>
|
||||
#include <string>
|
||||
#include <cassert> // assert
|
||||
|
||||
#include "libipc/def.h"
|
||||
#include "libipc/shm.h"
|
||||
#include "libipc/rw_lock.h"
|
||||
|
||||
#include "libipc/utility/log.h"
|
||||
#include "libipc/platform/detail.h"
|
||||
#include "libipc/circ/elem_def.h"
|
||||
|
||||
namespace ipc {
|
||||
namespace detail {
|
||||
|
||||
class queue_conn {
|
||||
protected:
|
||||
circ::cc_t connected_ = 0;
|
||||
shm::handle elems_h_;
|
||||
|
||||
template <typename Elems>
|
||||
Elems* open(char const * name) {
|
||||
if (name == nullptr || name[0] == '\0') {
|
||||
ipc::error("fail open waiter: name is empty!\n");
|
||||
return nullptr;
|
||||
}
|
||||
if (!elems_h_.acquire(name, sizeof(Elems))) {
|
||||
return nullptr;
|
||||
}
|
||||
auto elems = static_cast<Elems*>(elems_h_.get());
|
||||
if (elems == nullptr) {
|
||||
ipc::error("fail acquire elems: %s\n", name);
|
||||
return nullptr;
|
||||
}
|
||||
elems->init();
|
||||
return elems;
|
||||
}
|
||||
|
||||
void close() {
|
||||
elems_h_.release();
|
||||
}
|
||||
|
||||
public:
|
||||
queue_conn() = default;
|
||||
queue_conn(const queue_conn&) = delete;
|
||||
queue_conn& operator=(const queue_conn&) = delete;
|
||||
|
||||
bool connected() const noexcept {
|
||||
return connected_ != 0;
|
||||
}
|
||||
|
||||
circ::cc_t connected_id() const noexcept {
|
||||
return connected_;
|
||||
}
|
||||
|
||||
template <typename Elems>
|
||||
auto connect(Elems* elems) noexcept
|
||||
/*needs 'optional' here*/
|
||||
-> std::tuple<bool, bool, decltype(std::declval<Elems>().cursor())> {
|
||||
if (elems == nullptr) return {};
|
||||
// if it's already connected, just return
|
||||
if (connected()) return {connected(), false, 0};
|
||||
connected_ = elems->connect_receiver();
|
||||
return {connected(), true, elems->cursor()};
|
||||
}
|
||||
|
||||
template <typename Elems>
|
||||
bool disconnect(Elems* elems) noexcept {
|
||||
if (elems == nullptr) return false;
|
||||
// if it's already disconnected, just return false
|
||||
if (!connected()) return false;
|
||||
elems->disconnect_receiver(std::exchange(connected_, 0));
|
||||
return true;
|
||||
}
|
||||
};
|
||||
|
||||
template <typename Elems>
|
||||
class queue_base : public queue_conn {
|
||||
using base_t = queue_conn;
|
||||
|
||||
public:
|
||||
using elems_t = Elems;
|
||||
using policy_t = typename elems_t::policy_t;
|
||||
|
||||
protected:
|
||||
elems_t * elems_ = nullptr;
|
||||
decltype(std::declval<elems_t>().cursor()) cursor_ = 0;
|
||||
bool sender_flag_ = false;
|
||||
|
||||
public:
|
||||
using base_t::base_t;
|
||||
|
||||
queue_base() = default;
|
||||
|
||||
explicit queue_base(char const * name)
|
||||
: queue_base{} {
|
||||
elems_ = open<elems_t>(name);
|
||||
}
|
||||
|
||||
explicit queue_base(elems_t * elems) noexcept
|
||||
: queue_base{} {
|
||||
assert(elems != nullptr);
|
||||
elems_ = elems;
|
||||
}
|
||||
|
||||
/* not virtual */ ~queue_base() {
|
||||
base_t::close();
|
||||
}
|
||||
|
||||
elems_t * elems() noexcept { return elems_; }
|
||||
elems_t const * elems() const noexcept { return elems_; }
|
||||
|
||||
bool ready_sending() noexcept {
|
||||
if (elems_ == nullptr) return false;
|
||||
return sender_flag_ || (sender_flag_ = elems_->connect_sender());
|
||||
}
|
||||
|
||||
void shut_sending() noexcept {
|
||||
if (elems_ == nullptr) return;
|
||||
if (!sender_flag_) return;
|
||||
elems_->disconnect_sender();
|
||||
}
|
||||
|
||||
bool connect() noexcept {
|
||||
auto tp = base_t::connect(elems_);
|
||||
if (std::get<0>(tp) && std::get<1>(tp)) {
|
||||
cursor_ = std::get<2>(tp);
|
||||
return true;
|
||||
}
|
||||
return std::get<0>(tp);
|
||||
}
|
||||
|
||||
bool disconnect() noexcept {
|
||||
return base_t::disconnect(elems_);
|
||||
}
|
||||
|
||||
std::size_t conn_count() const noexcept {
|
||||
return (elems_ == nullptr) ? static_cast<std::size_t>(invalid_value) : elems_->conn_count();
|
||||
}
|
||||
|
||||
bool valid() const noexcept {
|
||||
return elems_ != nullptr;
|
||||
}
|
||||
|
||||
bool empty() const noexcept {
|
||||
return !valid() || (cursor_ == elems_->cursor());
|
||||
}
|
||||
|
||||
template <typename T, typename F, typename... P>
|
||||
bool push(F&& prep, P&&... params) {
|
||||
if (elems_ == nullptr) return false;
|
||||
return elems_->push(this, [&](void* p) {
|
||||
if (prep(p)) ::new (p) T(std::forward<P>(params)...);
|
||||
});
|
||||
}
|
||||
|
||||
template <typename T, typename F, typename... P>
|
||||
bool force_push(F&& prep, P&&... params) {
|
||||
if (elems_ == nullptr) return false;
|
||||
return elems_->force_push(this, [&](void* p) {
|
||||
if (prep(p)) ::new (p) T(std::forward<P>(params)...);
|
||||
});
|
||||
}
|
||||
|
||||
template <typename T, typename F>
|
||||
bool pop(T& item, F&& out) {
|
||||
if (elems_ == nullptr) {
|
||||
return false;
|
||||
}
|
||||
return elems_->pop(this, &(this->cursor_), [&item](void* p) {
|
||||
::new (&item) T(std::move(*static_cast<T*>(p)));
|
||||
}, std::forward<F>(out));
|
||||
}
|
||||
};
|
||||
|
||||
} // namespace detail
|
||||
|
||||
template <typename T, typename Policy>
|
||||
class queue final : public detail::queue_base<typename Policy::template elems_t<sizeof(T), alignof(T)>> {
|
||||
using base_t = detail::queue_base<typename Policy::template elems_t<sizeof(T), alignof(T)>>;
|
||||
|
||||
public:
|
||||
using value_t = T;
|
||||
|
||||
using base_t::base_t;
|
||||
|
||||
template <typename... P>
|
||||
bool push(P&&... params) {
|
||||
return base_t::template push<T>(std::forward<P>(params)...);
|
||||
}
|
||||
|
||||
template <typename... P>
|
||||
bool force_push(P&&... params) {
|
||||
return base_t::template force_push<T>(std::forward<P>(params)...);
|
||||
}
|
||||
|
||||
bool pop(T& item) {
|
||||
return base_t::pop(item, [](bool) {});
|
||||
}
|
||||
|
||||
template <typename F>
|
||||
bool pop(T& item, F&& out) {
|
||||
return base_t::pop(item, std::forward<F>(out));
|
||||
}
|
||||
};
|
||||
|
||||
} // namespace ipc
|
||||
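queue<T, Policy> above is a thin typed facade over queue_base: the policy supplies the shared element array, push forwards a prep callback plus the constructor arguments, and pop move-constructs into the caller's slot. A hedged usage sketch (the header paths and the segment name are assumptions):

#include "libipc/queue.h"    // assumed path of the header above
#include "libipc/policy.h"   // assumed path of policy::choose

using policy_t = ipc::policy::choose<
    ipc::circ::elem_array,
    ipc::wr<ipc::relat::single, ipc::relat::single, ipc::trans::unicast>>;

int writer() {
    ipc::queue<int, policy_t> q{"demo_q"};   // opens/creates the shared elems
    if (!q.ready_sending()) return 1;        // register this side as the sender
    // The first argument is the prep callback: it sees the raw slot pointer
    // and must return true for the int to be constructed in place.
    return q.push([](void*) { return true; }, 42) ? 0 : 1;
}

int reader() {
    ipc::queue<int, policy_t> q{"demo_q"};
    if (!q.connect()) return 1;              // register as a receiver
    int v = 0;
    while (!q.pop(v)) { /* spin, or park on a waiter */ }
    return (v == 42) ? 0 : 1;
}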
@@ -1,103 +0,0 @@
|
||||
|
||||
#include <string>
|
||||
#include <utility>
|
||||
|
||||
#include "libipc/shm.h"
|
||||
|
||||
#include "libipc/utility/pimpl.h"
|
||||
#include "libipc/memory/resource.h"
|
||||
|
||||
namespace ipc {
|
||||
namespace shm {
|
||||
|
||||
class handle::handle_ : public pimpl<handle_> {
|
||||
public:
|
||||
shm::id_t id_ = nullptr;
|
||||
void* m_ = nullptr;
|
||||
|
||||
ipc::string n_;
|
||||
std::size_t s_ = 0;
|
||||
};
|
||||
|
||||
handle::handle()
|
||||
: p_(p_->make()) {
|
||||
}
|
||||
|
||||
handle::handle(char const * name, std::size_t size, unsigned mode)
|
||||
: handle() {
|
||||
acquire(name, size, mode);
|
||||
}
|
||||
|
||||
handle::handle(handle&& rhs)
|
||||
: handle() {
|
||||
swap(rhs);
|
||||
}
|
||||
|
||||
handle::~handle() {
|
||||
release();
|
||||
p_->clear();
|
||||
}
|
||||
|
||||
void handle::swap(handle& rhs) {
|
||||
std::swap(p_, rhs.p_);
|
||||
}
|
||||
|
||||
handle& handle::operator=(handle rhs) {
|
||||
swap(rhs);
|
||||
return *this;
|
||||
}
|
||||
|
||||
bool handle::valid() const noexcept {
|
||||
return impl(p_)->m_ != nullptr;
|
||||
}
|
||||
|
||||
std::size_t handle::size() const noexcept {
|
||||
return impl(p_)->s_;
|
||||
}
|
||||
|
||||
char const * handle::name() const noexcept {
|
||||
return impl(p_)->n_.c_str();
|
||||
}
|
||||
|
||||
std::int32_t handle::ref() const noexcept {
|
||||
return shm::get_ref(impl(p_)->id_);
|
||||
}
|
||||
|
||||
void handle::sub_ref() noexcept {
|
||||
shm::sub_ref(impl(p_)->id_);
|
||||
}
|
||||
|
||||
bool handle::acquire(char const * name, std::size_t size, unsigned mode) {
|
||||
release();
|
||||
impl(p_)->id_ = shm::acquire((impl(p_)->n_ = name).c_str(), size, mode);
|
||||
impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_));
|
||||
return valid();
|
||||
}
|
||||
|
||||
std::int32_t handle::release() {
|
||||
if (impl(p_)->id_ == nullptr) return -1;
|
||||
return shm::release(detach());
|
||||
}
|
||||
|
||||
void* handle::get() const {
|
||||
return impl(p_)->m_;
|
||||
}
|
||||
|
||||
void handle::attach(id_t id) {
|
||||
if (id == nullptr) return;
|
||||
release();
|
||||
impl(p_)->id_ = id;
|
||||
impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_));
|
||||
}
|
||||
|
||||
id_t handle::detach() {
|
||||
auto old = impl(p_)->id_;
|
||||
impl(p_)->id_ = nullptr;
|
||||
impl(p_)->m_ = nullptr;
|
||||
impl(p_)->s_ = 0;
|
||||
impl(p_)->n_.clear();
|
||||
return old;
|
||||
}
|
||||
|
||||
} // namespace shm
|
||||
} // namespace ipc
|
||||
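The shm::handle above is a small pimpl wrapper over the raw shm::acquire / get_mem / release calls: acquire maps (and creates, if needed) a named segment, get returns the mapped pointer, and release or the destructor hands it back. A usage sketch; the defaulted mode argument and zero-initialization of a fresh segment are assumptions:

#include "libipc/shm.h"

bool bump_shared_counter() {
    ipc::shm::handle h;
    if (!h.acquire("demo_shm", sizeof(int)))  // mode argument assumed to have a default in shm.h
        return false;
    auto* counter = static_cast<int*>(h.get());
    *counter += 1;                            // a freshly created segment is assumed zero-filled
    h.release();                              // the destructor would release it as well
    return true;
}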
@@ -1,83 +0,0 @@
|
||||
#pragma once
|
||||
|
||||
#include <utility>
|
||||
#include <string>
|
||||
#include <mutex>
|
||||
#include <atomic>
|
||||
|
||||
#include "libipc/def.h"
|
||||
#include "libipc/mutex.h"
|
||||
#include "libipc/condition.h"
|
||||
#include "libipc/platform/detail.h"
|
||||
|
||||
namespace ipc {
|
||||
namespace detail {
|
||||
|
||||
class waiter {
|
||||
ipc::sync::condition cond_;
|
||||
ipc::sync::mutex lock_;
|
||||
std::atomic<bool> quit_ {false};
|
||||
|
||||
public:
|
||||
static void init();
|
||||
|
||||
waiter() = default;
|
||||
waiter(char const *name) {
|
||||
open(name);
|
||||
}
|
||||
|
||||
~waiter() {
|
||||
close();
|
||||
}
|
||||
|
||||
bool valid() const noexcept {
|
||||
return cond_.valid() && lock_.valid();
|
||||
}
|
||||
|
||||
bool open(char const *name) noexcept {
|
||||
quit_.store(false, std::memory_order_relaxed);
|
||||
if (!cond_.open((std::string{"_waiter_cond_"} + name).c_str())) {
|
||||
return false;
|
||||
}
|
||||
if (!lock_.open((std::string{"_waiter_lock_"} + name).c_str())) {
|
||||
cond_.close();
|
||||
return false;
|
||||
}
|
||||
return valid();
|
||||
}
|
||||
|
||||
void close() noexcept {
|
||||
cond_.close();
|
||||
lock_.close();
|
||||
}
|
||||
|
||||
template <typename F>
|
||||
bool wait_if(F &&pred, std::uint64_t tm = ipc::invalid_value) noexcept {
|
||||
IPC_UNUSED_ std::lock_guard<ipc::sync::mutex> guard {lock_};
|
||||
while ([this, &pred] {
|
||||
return !quit_.load(std::memory_order_relaxed)
|
||||
&& std::forward<F>(pred)();
|
||||
}()) {
|
||||
if (!cond_.wait(lock_, tm)) return false;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
bool notify() noexcept {
|
||||
std::lock_guard<ipc::sync::mutex>{lock_}; // barrier
|
||||
return cond_.notify(lock_);
|
||||
}
|
||||
|
||||
bool broadcast() noexcept {
|
||||
std::lock_guard<ipc::sync::mutex>{lock_}; // barrier
|
||||
return cond_.broadcast(lock_);
|
||||
}
|
||||
|
||||
bool quit_waiting() {
|
||||
quit_.store(true, std::memory_order_release);
|
||||
return broadcast();
|
||||
}
|
||||
};
|
||||
|
||||
} // namespace detail
|
||||
} // namespace ipc
|
||||
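The waiter above pairs a named mutex with a named condition so unrelated processes can block on a predicate. A sketch of the intended pattern; the header path is assumed, and the flag is shown as a process-local atomic only for brevity (real code would keep it in shared memory):

#include <atomic>
#include "libipc/waiter.h"   // assumed path of the header above

std::atomic<bool> ready{false};   // stand-in for a flag living in shared memory

void consumer() {
    ipc::detail::waiter w{"demo_evt"};
    // Blocks while the predicate stays true; returns false on timeout.
    w.wait_if([] { return !ready.load(std::memory_order_acquire); });
}

void producer() {
    ipc::detail::waiter w{"demo_evt"};
    ready.store(true, std::memory_order_release);
    w.broadcast();                // wake every process parked in wait_if()
}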
@@ -1,3 +0,0 @@
https://github.com/mutouyun/cpp-ipc

A high-performance inter-process communication library using shared memory on Linux/Windows.
File diff suppressed because it is too large
@@ -1,316 +0,0 @@
|
||||
// jpgd.h - C++ class for JPEG decompression.
|
||||
// Public domain, Rich Geldreich <richgel99@gmail.com>
|
||||
#ifndef JPEG_DECODER_H
|
||||
#define JPEG_DECODER_H
|
||||
|
||||
#include <stdlib.h>
|
||||
#include <stdio.h>
|
||||
#include <setjmp.h>
|
||||
|
||||
namespace jpgd
|
||||
{
|
||||
typedef unsigned char uint8;
|
||||
typedef signed short int16;
|
||||
typedef unsigned short uint16;
|
||||
typedef unsigned int uint;
|
||||
typedef signed int int32;
|
||||
|
||||
// Loads a JPEG image from a memory buffer or a file.
|
||||
// req_comps can be 1 (grayscale), 3 (RGB), or 4 (RGBA).
|
||||
// On return, width/height will be set to the image's dimensions, and actual_comps will be set to either 1 (grayscale) or 3 (RGB).
// Notes: For more control over where and how the source data is read, see the decompress_jpeg_image_from_stream() function below, or call the jpeg_decoder class directly.
// Requesting an 8 or 32bpp image is currently a little faster than 24bpp because the jpeg_decoder class itself currently always unpacks to either 8 or 32bpp.
|
||||
// BEGIN EPIC MOD
|
||||
//unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps);
|
||||
unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps, int format);
|
||||
// END EPIC MOD
|
||||
unsigned char *decompress_jpeg_image_from_file(const char *pSrc_filename, int *width, int *height, int *actual_comps, int req_comps);
|
||||
|
||||
// Success/failure error codes.
|
||||
enum jpgd_status
|
||||
{
|
||||
JPGD_SUCCESS = 0, JPGD_FAILED = -1, JPGD_DONE = 1,
|
||||
JPGD_BAD_DHT_COUNTS = -256, JPGD_BAD_DHT_INDEX, JPGD_BAD_DHT_MARKER, JPGD_BAD_DQT_MARKER, JPGD_BAD_DQT_TABLE,
|
||||
JPGD_BAD_PRECISION, JPGD_BAD_HEIGHT, JPGD_BAD_WIDTH, JPGD_TOO_MANY_COMPONENTS,
|
||||
JPGD_BAD_SOF_LENGTH, JPGD_BAD_VARIABLE_MARKER, JPGD_BAD_DRI_LENGTH, JPGD_BAD_SOS_LENGTH,
|
||||
JPGD_BAD_SOS_COMP_ID, JPGD_W_EXTRA_BYTES_BEFORE_MARKER, JPGD_NO_ARITHMITIC_SUPPORT, JPGD_UNEXPECTED_MARKER,
|
||||
JPGD_NOT_JPEG, JPGD_UNSUPPORTED_MARKER, JPGD_BAD_DQT_LENGTH, JPGD_TOO_MANY_BLOCKS,
|
||||
JPGD_UNDEFINED_QUANT_TABLE, JPGD_UNDEFINED_HUFF_TABLE, JPGD_NOT_SINGLE_SCAN, JPGD_UNSUPPORTED_COLORSPACE,
|
||||
JPGD_UNSUPPORTED_SAMP_FACTORS, JPGD_DECODE_ERROR, JPGD_BAD_RESTART_MARKER, JPGD_ASSERTION_ERROR,
|
||||
JPGD_BAD_SOS_SPECTRAL, JPGD_BAD_SOS_SUCCESSIVE, JPGD_STREAM_READ, JPGD_NOTENOUGHMEM
|
||||
};
|
||||
|
||||
// Input stream interface.
|
||||
// Derive from this class to read input data from sources other than files or memory. Set m_eof_flag to true when no more data is available.
|
||||
// The decoder is rather greedy: it will keep on calling this method until its internal input buffer is full, or until the EOF flag is set.
|
||||
// If the input stream contains data after the JPEG stream's EOI (end of image) marker, it will probably be pulled into the internal buffer.
|
||||
// Call the get_total_bytes_read() method to determine the actual size of the JPEG stream after successful decoding.
|
||||
class jpeg_decoder_stream
|
||||
{
|
||||
public:
|
||||
jpeg_decoder_stream() { }
|
||||
virtual ~jpeg_decoder_stream() { }
|
||||
|
||||
// The read() method is called when the internal input buffer is empty.
|
||||
// Parameters:
|
||||
// pBuf - input buffer
|
||||
// max_bytes_to_read - maximum bytes that can be written to pBuf
|
||||
// pEOF_flag - set this to true if at end of stream (no more bytes remaining)
|
||||
// Returns -1 on error, otherwise returns the number of bytes actually written to the buffer (which may be 0).
|
||||
// Notes: This method will be called in a loop until you set *pEOF_flag to true or the internal buffer is full.
|
||||
virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag) = 0;
|
||||
};
|
||||
|
||||
// stdio FILE stream class.
|
||||
class jpeg_decoder_file_stream : public jpeg_decoder_stream
|
||||
{
|
||||
jpeg_decoder_file_stream(const jpeg_decoder_file_stream &);
|
||||
jpeg_decoder_file_stream &operator =(const jpeg_decoder_file_stream &);
|
||||
|
||||
FILE *m_pFile;
|
||||
bool m_eof_flag, m_error_flag;
|
||||
|
||||
public:
|
||||
jpeg_decoder_file_stream();
|
||||
virtual ~jpeg_decoder_file_stream();
|
||||
|
||||
bool open(const char *Pfilename);
|
||||
void close();
|
||||
|
||||
virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag);
|
||||
};
|
||||
|
||||
// Memory stream class.
|
||||
class jpeg_decoder_mem_stream : public jpeg_decoder_stream
|
||||
{
|
||||
const uint8 *m_pSrc_data;
|
||||
uint m_ofs, m_size;
|
||||
|
||||
public:
|
||||
jpeg_decoder_mem_stream() : m_pSrc_data(NULL), m_ofs(0), m_size(0) { }
|
||||
jpeg_decoder_mem_stream(const uint8 *pSrc_data, uint size) : m_pSrc_data(pSrc_data), m_ofs(0), m_size(size) { }
|
||||
|
||||
virtual ~jpeg_decoder_mem_stream() { }
|
||||
|
||||
bool open(const uint8 *pSrc_data, uint size);
|
||||
void close() { m_pSrc_data = NULL; m_ofs = 0; m_size = 0; }
|
||||
|
||||
virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag);
|
||||
};
|
||||
|
||||
// Loads JPEG file from a jpeg_decoder_stream.
|
||||
unsigned char *decompress_jpeg_image_from_stream(jpeg_decoder_stream *pStream, int *width, int *height, int *actual_comps, int req_comps);
|
||||
|
||||
enum
|
||||
{
|
||||
JPGD_IN_BUF_SIZE = 8192, JPGD_MAX_BLOCKS_PER_MCU = 10, JPGD_MAX_HUFF_TABLES = 8, JPGD_MAX_QUANT_TABLES = 4,
|
||||
JPGD_MAX_COMPONENTS = 4, JPGD_MAX_COMPS_IN_SCAN = 4, JPGD_MAX_BLOCKS_PER_ROW = 8192, JPGD_MAX_HEIGHT = 16384, JPGD_MAX_WIDTH = 16384
|
||||
};
|
||||
|
||||
typedef int16 jpgd_quant_t;
|
||||
typedef int16 jpgd_block_t;
|
||||
|
||||
class jpeg_decoder
|
||||
{
|
||||
public:
|
||||
// Call get_error_code() after constructing to determine if the stream is valid or not. You may call the get_width(), get_height(), etc.
|
||||
// methods after the constructor is called. You may then either destruct the object, or begin decoding the image by calling begin_decoding(), then decode() on each scanline.
|
||||
jpeg_decoder(jpeg_decoder_stream *pStream);
|
||||
|
||||
~jpeg_decoder();
|
||||
|
||||
// Call this method after constructing the object to begin decompression.
|
||||
// If JPGD_SUCCESS is returned you may then call decode() on each scanline.
|
||||
int begin_decoding();
|
||||
|
||||
// Returns the next scan line.
|
||||
// For grayscale images, pScan_line will point to a buffer containing 8-bit pixels (get_bytes_per_pixel() will return 1).
|
||||
// Otherwise, it will always point to a buffer containing 32-bit RGBA pixels (A will always be 255, and get_bytes_per_pixel() will return 4).
|
||||
// Returns JPGD_SUCCESS if a scan line has been returned.
|
||||
// Returns JPGD_DONE if all scan lines have been returned.
|
||||
// Returns JPGD_FAILED if an error occurred. Call get_error_code() for more info.
|
||||
int decode(const void** pScan_line, uint* pScan_line_len);
|
||||
|
||||
inline jpgd_status get_error_code() const { return m_error_code; }
|
||||
|
||||
inline int get_width() const { return m_image_x_size; }
|
||||
inline int get_height() const { return m_image_y_size; }
|
||||
|
||||
inline int get_num_components() const { return m_comps_in_frame; }
|
||||
|
||||
inline int get_bytes_per_pixel() const { return m_dest_bytes_per_pixel; }
|
||||
inline int get_bytes_per_scan_line() const { return m_image_x_size * get_bytes_per_pixel(); }
|
||||
|
||||
// Returns the total number of bytes actually consumed by the decoder (which should equal the actual size of the JPEG file).
|
||||
inline int get_total_bytes_read() const { return m_total_bytes_read; }
|
||||
|
||||
private:
|
||||
jpeg_decoder(const jpeg_decoder &);
|
||||
jpeg_decoder &operator =(const jpeg_decoder &);
|
||||
|
||||
typedef void (*pDecode_block_func)(jpeg_decoder *, int, int, int);
|
||||
|
||||
struct huff_tables
|
||||
{
|
||||
bool ac_table;
|
||||
uint look_up[256];
|
||||
uint look_up2[256];
|
||||
uint8 code_size[256];
|
||||
uint tree[512];
|
||||
};
|
||||
|
||||
struct coeff_buf
|
||||
{
|
||||
uint8 *pData;
|
||||
int block_num_x, block_num_y;
|
||||
int block_len_x, block_len_y;
|
||||
int block_size;
|
||||
};
|
||||
|
||||
struct mem_block
|
||||
{
|
||||
mem_block *m_pNext;
|
||||
size_t m_used_count;
|
||||
size_t m_size;
|
||||
char m_data[1];
|
||||
};
|
||||
|
||||
jmp_buf m_jmp_state;
|
||||
mem_block *m_pMem_blocks;
|
||||
int m_image_x_size;
|
||||
int m_image_y_size;
|
||||
jpeg_decoder_stream *m_pStream;
|
||||
int m_progressive_flag;
|
||||
uint8 m_huff_ac[JPGD_MAX_HUFF_TABLES];
|
||||
uint8* m_huff_num[JPGD_MAX_HUFF_TABLES]; // pointer to number of Huffman codes per bit size
|
||||
uint8* m_huff_val[JPGD_MAX_HUFF_TABLES]; // pointer to Huffman codes per bit size
|
||||
jpgd_quant_t* m_quant[JPGD_MAX_QUANT_TABLES]; // pointer to quantization tables
|
||||
int m_scan_type; // Gray, Yh1v1, Yh1v2, Yh2v1, Yh2v2 (CMYK111, CMYK4114 no longer supported)
|
||||
int m_comps_in_frame; // # of components in frame
|
||||
int m_comp_h_samp[JPGD_MAX_COMPONENTS]; // component's horizontal sampling factor
|
||||
int m_comp_v_samp[JPGD_MAX_COMPONENTS]; // component's vertical sampling factor
|
||||
int m_comp_quant[JPGD_MAX_COMPONENTS]; // component's quantization table selector
|
||||
int m_comp_ident[JPGD_MAX_COMPONENTS]; // component's ID
|
||||
int m_comp_h_blocks[JPGD_MAX_COMPONENTS];
|
||||
int m_comp_v_blocks[JPGD_MAX_COMPONENTS];
|
||||
int m_comps_in_scan; // # of components in scan
|
||||
int m_comp_list[JPGD_MAX_COMPS_IN_SCAN]; // components in this scan
|
||||
int m_comp_dc_tab[JPGD_MAX_COMPONENTS]; // component's DC Huffman coding table selector
|
||||
int m_comp_ac_tab[JPGD_MAX_COMPONENTS]; // component's AC Huffman coding table selector
|
||||
int m_spectral_start; // spectral selection start
|
||||
int m_spectral_end; // spectral selection end
|
||||
int m_successive_low; // successive approximation low
|
||||
int m_successive_high; // successive approximation high
|
||||
int m_max_mcu_x_size; // MCU's max. X size in pixels
|
||||
int m_max_mcu_y_size; // MCU's max. Y size in pixels
|
||||
int m_blocks_per_mcu;
|
||||
int m_max_blocks_per_row;
|
||||
int m_mcus_per_row, m_mcus_per_col;
|
||||
int m_mcu_org[JPGD_MAX_BLOCKS_PER_MCU];
|
||||
int m_total_lines_left; // total # lines left in image
|
||||
int m_mcu_lines_left; // total # lines left in this MCU
|
||||
int m_real_dest_bytes_per_scan_line;
|
||||
int m_dest_bytes_per_scan_line; // rounded up
|
||||
int m_dest_bytes_per_pixel; // 4 (RGB) or 1 (Y)
|
||||
huff_tables* m_pHuff_tabs[JPGD_MAX_HUFF_TABLES];
|
||||
coeff_buf* m_dc_coeffs[JPGD_MAX_COMPONENTS];
|
||||
coeff_buf* m_ac_coeffs[JPGD_MAX_COMPONENTS];
|
||||
int m_eob_run;
|
||||
int m_block_y_mcu[JPGD_MAX_COMPONENTS];
|
||||
uint8* m_pIn_buf_ofs;
|
||||
int m_in_buf_left;
|
||||
int m_tem_flag;
|
||||
bool m_eof_flag;
|
||||
uint8 m_in_buf_pad_start[128];
|
||||
uint8 m_in_buf[JPGD_IN_BUF_SIZE + 128];
|
||||
uint8 m_in_buf_pad_end[128];
|
||||
int m_bits_left;
|
||||
uint m_bit_buf;
|
||||
int m_restart_interval;
|
||||
int m_restarts_left;
|
||||
int m_next_restart_num;
|
||||
int m_max_mcus_per_row;
|
||||
int m_max_blocks_per_mcu;
|
||||
int m_expanded_blocks_per_mcu;
|
||||
int m_expanded_blocks_per_row;
|
||||
int m_expanded_blocks_per_component;
|
||||
bool m_freq_domain_chroma_upsample;
|
||||
int m_max_mcus_per_col;
|
||||
uint m_last_dc_val[JPGD_MAX_COMPONENTS];
|
||||
jpgd_block_t* m_pMCU_coefficients;
|
||||
int m_mcu_block_max_zag[JPGD_MAX_BLOCKS_PER_MCU];
|
||||
uint8* m_pSample_buf;
|
||||
int m_crr[256];
|
||||
int m_cbb[256];
|
||||
int m_crg[256];
|
||||
int m_cbg[256];
|
||||
uint8* m_pScan_line_0;
|
||||
uint8* m_pScan_line_1;
|
||||
jpgd_status m_error_code;
|
||||
bool m_ready_flag;
|
||||
int m_total_bytes_read;
|
||||
|
||||
void free_all_blocks();
|
||||
// BEGIN EPIC MOD
|
||||
UE_NORETURN void stop_decoding(jpgd_status status);
|
||||
// END EPIC MOD
|
||||
void *alloc(size_t n, bool zero = false);
|
||||
void word_clear(void *p, uint16 c, uint n);
|
||||
void prep_in_buffer();
|
||||
void read_dht_marker();
|
||||
void read_dqt_marker();
|
||||
void read_sof_marker();
|
||||
void skip_variable_marker();
|
||||
void read_dri_marker();
|
||||
void read_sos_marker();
|
||||
int next_marker();
|
||||
int process_markers();
|
||||
void locate_soi_marker();
|
||||
void locate_sof_marker();
|
||||
int locate_sos_marker();
|
||||
void init(jpeg_decoder_stream * pStream);
|
||||
void create_look_ups();
|
||||
void fix_in_buffer();
|
||||
void transform_mcu(int mcu_row);
|
||||
void transform_mcu_expand(int mcu_row);
|
||||
coeff_buf* coeff_buf_open(int block_num_x, int block_num_y, int block_len_x, int block_len_y);
|
||||
inline jpgd_block_t *coeff_buf_getp(coeff_buf *cb, int block_x, int block_y);
|
||||
void load_next_row();
|
||||
void decode_next_row();
|
||||
void make_huff_table(int index, huff_tables *pH);
|
||||
void check_quant_tables();
|
||||
void check_huff_tables();
|
||||
void calc_mcu_block_order();
|
||||
int init_scan();
|
||||
void init_frame();
|
||||
void process_restart();
|
||||
void decode_scan(pDecode_block_func decode_block_func);
|
||||
void init_progressive();
|
||||
void init_sequential();
|
||||
void decode_start();
|
||||
void decode_init(jpeg_decoder_stream * pStream);
|
||||
void H2V2Convert();
|
||||
void H2V1Convert();
|
||||
void H1V2Convert();
|
||||
void H1V1Convert();
|
||||
void gray_convert();
|
||||
void expanded_convert();
|
||||
void find_eoi();
|
||||
inline uint get_char();
|
||||
inline uint get_char(bool *pPadding_flag);
|
||||
inline void stuff_char(uint8 q);
|
||||
inline uint8 get_octet();
|
||||
inline uint get_bits(int num_bits);
|
||||
inline uint get_bits_no_markers(int numbits);
|
||||
inline int huff_decode(huff_tables *pH);
|
||||
inline int huff_decode(huff_tables *pH, int& extrabits);
|
||||
static inline uint8 clamp(int i);
|
||||
static void decode_block_dc_first(jpeg_decoder *pD, int component_id, int block_x, int block_y);
|
||||
static void decode_block_dc_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y);
|
||||
static void decode_block_ac_first(jpeg_decoder *pD, int component_id, int block_x, int block_y);
|
||||
static void decode_block_ac_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y);
|
||||
};
|
||||
|
||||
} // namespace jpgd
|
||||
|
||||
#endif // JPEG_DECODER_H
|
||||
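For the decoder header above, the one-call entry points are the easiest way in. A sketch under stated assumptions: "input.jpg" exists, and the returned buffer is malloc-owned so free() releases it (the actual allocator lives in jpgd.cpp):

#include <cstdio>
#include <cstdlib>
#include "jpgd.h"

bool dump_dimensions() {
    int width = 0, height = 0, actual_comps = 0;
    unsigned char* pixels = jpgd::decompress_jpeg_image_from_file(
        "input.jpg", &width, &height, &actual_comps, 4 /* req_comps: RGBA */);
    if (pixels == nullptr) return false;
    std::printf("%dx%d, %d source components\n", width, height, actual_comps);
    std::free(pixels);   // assumption: buffer comes from malloc (see jpgd.cpp)
    return true;
}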
File diff suppressed because it is too large
@@ -1,172 +0,0 @@
|
||||
|
||||
// jpge.h - C++ class for JPEG compression.
|
||||
// Public domain, Rich Geldreich <richgel99@gmail.com>
|
||||
// Alex Evans: Added RGBA support, linear memory allocator.
|
||||
#ifndef JPEG_ENCODER_H
|
||||
#define JPEG_ENCODER_H
|
||||
|
||||
#include <stdint.h>
|
||||
|
||||
namespace jpge
|
||||
{
|
||||
typedef unsigned char uint8;
|
||||
typedef signed short int16;
|
||||
typedef signed int int32;
|
||||
typedef unsigned short uint16;
|
||||
typedef unsigned int uint32;
|
||||
typedef unsigned int uint;
|
||||
|
||||
// JPEG chroma subsampling factors. Y_ONLY (grayscale images) and H2V2 (color images) are the most common.
|
||||
enum subsampling_t { Y_ONLY = 0, H1V1 = 1, H2V1 = 2, H2V2 = 3 };
|
||||
|
||||
// JPEG compression parameters structure.
|
||||
struct params
|
||||
{
|
||||
inline params() : m_quality(85), m_subsampling(H2V2), m_no_chroma_discrim_flag(false), m_two_pass_flag(false) { }
|
||||
|
||||
inline bool check_valid() const
|
||||
{
|
||||
if ((m_quality < 1) || (m_quality > 100)) return false;
|
||||
if ((uint)m_subsampling > (uint)H2V2) return false;
|
||||
return true;
|
||||
}
|
||||
|
||||
// Quality: 1-100, higher is better. Typical values are around 50-95.
|
||||
int m_quality;
|
||||
|
||||
// m_subsampling:
|
||||
// 0 = Y (grayscale) only
|
||||
// 1 = YCbCr, no subsampling (H1V1, YCbCr 1x1x1, 3 blocks per MCU)
|
||||
// 2 = YCbCr, H2V1 subsampling (YCbCr 2x1x1, 4 blocks per MCU)
|
||||
// 3 = YCbCr, H2V2 subsampling (YCbCr 4x1x1, 6 blocks per MCU-- very common)
|
||||
subsampling_t m_subsampling;
|
||||
|
||||
// Disables CbCr discrimination - only intended for testing.
|
||||
// If true, the Y quantization table is also used for the CbCr channels.
|
||||
bool m_no_chroma_discrim_flag;
|
||||
|
||||
bool m_two_pass_flag;
|
||||
};
|
||||
|
||||
// Writes JPEG image to a file.
|
||||
// num_channels must be 1 (Y) or 3 (RGB), image pitch must be width*num_channels.
|
||||
bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());
|
||||
|
||||
// Writes JPEG image to memory buffer.
|
||||
// On entry, buf_size is the size of the output buffer pointed at by pBuf, which should be at least ~1024 bytes.
|
||||
// If return value is true, buf_size will be set to the size of the compressed data.
|
||||
bool compress_image_to_jpeg_file_in_memory(void *pBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());
|
||||
|
||||
// Output stream abstract class - used by the jpeg_encoder class to write to the output stream.
|
||||
// put_buf() is generally called with len==JPGE_OUT_BUF_SIZE bytes, but for headers it'll be called with smaller amounts.
|
||||
class output_stream
|
||||
{
|
||||
public:
|
||||
virtual ~output_stream() { };
|
||||
virtual bool put_buf(const void* Pbuf, int64_t len) = 0;
|
||||
template<class T> inline bool put_obj(const T& obj) { return put_buf(&obj, sizeof(T)); }
|
||||
};
|
||||
|
||||
// Lower level jpeg_encoder class - useful if more control is needed than the above helper functions.
|
||||
class jpeg_encoder
|
||||
{
|
||||
public:
|
||||
jpeg_encoder();
|
||||
~jpeg_encoder();
|
||||
|
||||
// Initializes the compressor.
|
||||
// pStream: The stream object to use for writing compressed data.
|
||||
// params - Compression parameters structure, defined above.
|
||||
// width, height - Image dimensions.
|
||||
// channels - May be 1, or 3. 1 indicates grayscale, 3 indicates RGB source data.
|
||||
// Returns false on out of memory or if a stream write fails.
|
||||
bool init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params = params());
|
||||
|
||||
const params &get_params() const { return m_params; }
|
||||
|
||||
// Deinitializes the compressor, freeing any allocated memory. May be called at any time.
|
||||
void deinit();
|
||||
|
||||
uint get_total_passes() const { return m_params.m_two_pass_flag ? 2 : 1; }
|
||||
inline uint get_cur_pass() { return m_pass_num; }
|
||||
|
||||
// Call this method with each source scanline.
|
||||
// width * src_channels bytes per scanline is expected (RGB or Y format).
|
||||
// You must call with NULL after all scanlines are processed to finish compression.
|
||||
// Returns false on out of memory or if a stream write fails.
|
||||
bool process_scanline(const void* pScanline);
|
||||
|
||||
private:
|
||||
jpeg_encoder(const jpeg_encoder &);
|
||||
jpeg_encoder &operator =(const jpeg_encoder &);
|
||||
|
||||
typedef int32 sample_array_t;
|
||||
|
||||
output_stream *m_pStream;
|
||||
params m_params;
|
||||
uint8 m_num_components;
|
||||
uint8 m_comp_h_samp[3], m_comp_v_samp[3];
|
||||
int m_image_x, m_image_y, m_image_bpp, m_image_bpl;
|
||||
int m_image_x_mcu, m_image_y_mcu;
|
||||
int m_image_bpl_xlt, m_image_bpl_mcu;
|
||||
int m_mcus_per_row;
|
||||
int m_mcu_x, m_mcu_y;
|
||||
uint8 *m_mcu_lines[16];
|
||||
uint8 m_mcu_y_ofs;
|
||||
sample_array_t m_sample_array[64];
|
||||
int16 m_coefficient_array[64];
|
||||
int32 m_quantization_tables[2][64];
|
||||
uint m_huff_codes[4][256];
|
||||
uint8 m_huff_code_sizes[4][256];
|
||||
uint8 m_huff_bits[4][17];
|
||||
uint8 m_huff_val[4][256];
|
||||
uint32 m_huff_count[4][256];
|
||||
int m_last_dc_val[3];
|
||||
enum { JPGE_OUT_BUF_SIZE = 2048 };
|
||||
uint8 m_out_buf[JPGE_OUT_BUF_SIZE];
|
||||
uint8 *m_pOut_buf;
|
||||
uint m_out_buf_left;
|
||||
uint32 m_bit_buffer;
|
||||
uint m_bits_in;
|
||||
uint8 m_pass_num;
|
||||
bool m_all_stream_writes_succeeded;
|
||||
|
||||
void optimize_huffman_table(int table_num, int table_len);
|
||||
void emit_byte(uint8 i);
|
||||
void emit_word(uint i);
|
||||
void emit_marker(int marker);
|
||||
void emit_jfif_app0();
|
||||
void emit_dqt();
|
||||
void emit_sof();
|
||||
void emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag);
|
||||
void emit_dhts();
|
||||
void emit_sos();
|
||||
void emit_markers();
|
||||
void compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val);
|
||||
void compute_quant_table(int32 *dst, int16 *src);
|
||||
void adjust_quant_table(int32 *dst, int32 *src);
|
||||
void first_pass_init();
|
||||
bool second_pass_init();
|
||||
bool jpg_open(int p_x_res, int p_y_res, int src_channels);
|
||||
void load_block_8_8_grey(int x);
|
||||
void load_block_8_8(int x, int y, int c);
|
||||
void load_block_16_8(int x, int c);
|
||||
void load_block_16_8_8(int x, int c);
|
||||
void load_quantized_coefficients(int component_num);
|
||||
void flush_output_buffer();
|
||||
void put_bits(uint bits, uint len);
|
||||
void code_coefficients_pass_one(int component_num);
|
||||
void code_coefficients_pass_two(int component_num);
|
||||
void code_block(int component_num);
|
||||
void process_mcu_row();
|
||||
bool terminate_pass_one();
|
||||
bool terminate_pass_two();
|
||||
bool process_end_of_image();
|
||||
void load_mcu(const void* src);
|
||||
void clear();
|
||||
void init();
|
||||
};
|
||||
|
||||
} // namespace jpge
|
||||
|
||||
#endif // JPEG_ENCODER
|
||||
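And the matching one-call path for the encoder header above; the test pattern and file name are made up, and the params fields used are the ones documented in the struct:

#include <cstddef>
#include <cstdint>
#include <vector>
#include "jpge.h"

bool write_test_jpeg() {
    const int w = 64, h = 64, channels = 3;
    std::vector<jpge::uint8> rgb(static_cast<std::size_t>(w) * h * channels);
    for (int i = 0; i < w * h; ++i) rgb[i * 3 + 0] = jpge::uint8(i);  // simple red ramp
    jpge::params p;
    p.m_quality = 90;              // 1-100, higher is better
    p.m_subsampling = jpge::H2V2;  // the common 4:2:0 mode
    return jpge::compress_image_to_jpeg_file("out.jpg", w, h, channels,
                                             rgb.data(), p);
}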
@@ -1,3 +0,0 @@
jpge.h - C++ class for JPEG compression.
Public domain, Rich Geldreich <richgel99@gmail.com>
Alex Evans: Added RGBA support, linear memory allocator.
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
@@ -1,433 +0,0 @@
|
||||
#pragma once
|
||||
|
||||
#include <atomic>
|
||||
#include <utility>
|
||||
#include <cstring>
|
||||
#include <type_traits>
|
||||
#include <cstdint>
|
||||
|
||||
#include "libipc/def.h"
|
||||
|
||||
#include "libipc/platform/detail.h"
|
||||
#include "libipc/circ/elem_def.h"
|
||||
#include "libipc/utility/log.h"
|
||||
#include "libipc/utility/utility.h"
|
||||
|
||||
namespace ipc {
|
||||
|
||||
////////////////////////////////////////////////////////////////
|
||||
/// producer-consumer implementation
|
||||
////////////////////////////////////////////////////////////////
|
||||
|
||||
template <typename Flag>
|
||||
struct prod_cons_impl;
|
||||
|
||||
template <>
|
||||
struct prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {
|
||||
|
||||
template <std::size_t DataSize, std::size_t AlignSize>
|
||||
struct elem_t {
|
||||
std::aligned_storage_t<DataSize, AlignSize> data_ {};
|
||||
};
|
||||
|
||||
alignas(cache_line_size) std::atomic<circ::u2_t> rd_; // read index
|
||||
alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index
|
||||
|
||||
constexpr circ::u2_t cursor() const noexcept {
|
||||
return 0;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool push(W* /*wrapper*/, F&& f, E* elems) {
|
||||
auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed));
|
||||
if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) {
|
||||
return false; // full
|
||||
}
|
||||
std::forward<F>(f)(&(elems[cur_wt].data_));
|
||||
wt_.fetch_add(1, std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* In single-single-unicast, 'force_push' means 'no reader' or 'the only one reader is dead'.
|
||||
* So we could just disconnect all connections of receiver, and return false.
|
||||
*/
|
||||
template <typename W, typename F, typename E>
|
||||
bool force_push(W* wrapper, F&&, E*) {
|
||||
wrapper->elems()->disconnect_receiver(~static_cast<circ::cc_t>(0u));
|
||||
return false;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename R, typename E>
|
||||
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
|
||||
auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed));
|
||||
if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) {
|
||||
return false; // empty
|
||||
}
|
||||
std::forward<F>(f)(&(elems[cur_rd].data_));
|
||||
std::forward<R>(out)(true);
|
||||
rd_.fetch_add(1, std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
};
|
||||
|
||||
template <>
|
||||
struct prod_cons_impl<wr<relat::single, relat::multi , trans::unicast>>
|
||||
: prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool force_push(W* wrapper, F&&, E*) {
|
||||
wrapper->elems()->disconnect_receiver(1);
|
||||
return false;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename R,
|
||||
template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
|
||||
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
|
||||
byte_t buff[DS];
|
||||
for (unsigned k = 0;;) {
|
||||
auto cur_rd = rd_.load(std::memory_order_relaxed);
|
||||
if (circ::index_of(cur_rd) ==
|
||||
circ::index_of(wt_.load(std::memory_order_acquire))) {
|
||||
return false; // empty
|
||||
}
|
||||
std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
|
||||
if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
|
||||
std::forward<F>(f)(buff);
|
||||
std::forward<R>(out)(true);
|
||||
return true;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
template <>
|
||||
struct prod_cons_impl<wr<relat::multi , relat::multi, trans::unicast>>
|
||||
: prod_cons_impl<wr<relat::single, relat::multi, trans::unicast>> {
|
||||
|
||||
using flag_t = std::uint64_t;
|
||||
|
||||
template <std::size_t DataSize, std::size_t AlignSize>
|
||||
struct elem_t {
|
||||
std::aligned_storage_t<DataSize, AlignSize> data_ {};
|
||||
std::atomic<flag_t> f_ct_ { 0 }; // commit flag
|
||||
};
|
||||
|
||||
alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool push(W* /*wrapper*/, F&& f, E* elems) {
|
||||
circ::u2_t cur_ct, nxt_ct;
|
||||
for (unsigned k = 0;;) {
|
||||
cur_ct = ct_.load(std::memory_order_relaxed);
|
||||
if (circ::index_of(nxt_ct = cur_ct + 1) ==
|
||||
circ::index_of(rd_.load(std::memory_order_acquire))) {
|
||||
return false; // full
|
||||
}
|
||||
if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) {
|
||||
break;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
auto* el = elems + circ::index_of(cur_ct);
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
// set flag & try update wt
|
||||
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
|
||||
while (1) {
|
||||
auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
|
||||
if (cur_ct != wt_.load(std::memory_order_relaxed)) {
|
||||
return true;
|
||||
}
|
||||
if ((~cac_ct) != cur_ct) {
|
||||
return true;
|
||||
}
|
||||
if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) {
|
||||
return true;
|
||||
}
|
||||
wt_.store(nxt_ct, std::memory_order_release);
|
||||
cur_ct = nxt_ct;
|
||||
nxt_ct = cur_ct + 1;
|
||||
el = elems + circ::index_of(cur_ct);
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool force_push(W* wrapper, F&&, E*) {
|
||||
wrapper->elems()->disconnect_receiver(1);
|
||||
return false;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename R,
|
||||
template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
|
||||
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
|
||||
byte_t buff[DS];
|
||||
for (unsigned k = 0;;) {
|
||||
auto cur_rd = rd_.load(std::memory_order_relaxed);
|
||||
auto cur_wt = wt_.load(std::memory_order_acquire);
|
||||
auto id_rd = circ::index_of(cur_rd);
|
||||
auto id_wt = circ::index_of(cur_wt);
|
||||
if (id_rd == id_wt) {
|
||||
auto* el = elems + id_wt;
|
||||
auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
|
||||
if ((~cac_ct) != cur_wt) {
|
||||
return false; // empty
|
||||
}
|
||||
if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) {
|
||||
wt_.store(cur_wt + 1, std::memory_order_release);
|
||||
}
|
||||
k = 0;
|
||||
}
|
||||
else {
|
||||
std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
|
||||
if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
|
||||
std::forward<F>(f)(buff);
|
||||
std::forward<R>(out)(true);
|
||||
return true;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
template <>
|
||||
struct prod_cons_impl<wr<relat::single, relat::multi, trans::broadcast>> {
|
||||
|
||||
using rc_t = std::uint64_t;
|
||||
|
||||
enum : rc_t {
|
||||
ep_mask = 0x00000000ffffffffull,
|
||||
ep_incr = 0x0000000100000000ull
|
||||
};
|
||||
|
||||
template <std::size_t DataSize, std::size_t AlignSize>
|
||||
struct elem_t {
|
||||
std::aligned_storage_t<DataSize, AlignSize> data_ {};
|
||||
std::atomic<rc_t> rc_ { 0 }; // read-counter
|
||||
};
|
||||
|
||||
alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index
|
||||
alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer
|
||||
|
||||
circ::u2_t cursor() const noexcept {
|
||||
return wt_.load(std::memory_order_acquire);
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool push(W* wrapper, F&& f, E* elems) {
|
||||
E* el;
|
||||
for (unsigned k = 0;;) {
|
||||
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
|
||||
if (cc == 0) return false; // no reader
|
||||
el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
|
||||
// check all consumers have finished reading this element
|
||||
auto cur_rc = el->rc_.load(std::memory_order_acquire);
|
||||
circ::cc_t rem_cc = cur_rc & ep_mask;
|
||||
if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) {
|
||||
return false; // has not finished yet
|
||||
}
|
||||
// consider rem_cc to be 0 here
|
||||
if (el->rc_.compare_exchange_weak(
|
||||
cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
|
||||
break;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
wt_.fetch_add(1, std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool force_push(W* wrapper, F&& f, E* elems) {
|
||||
E* el;
|
||||
epoch_ += ep_incr;
|
||||
for (unsigned k = 0;;) {
|
||||
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
|
||||
if (cc == 0) return false; // no reader
|
||||
el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
|
||||
// check all consumers have finished reading this element
|
||||
auto cur_rc = el->rc_.load(std::memory_order_acquire);
|
||||
circ::cc_t rem_cc = cur_rc & ep_mask;
|
||||
if (cc & rem_cc) {
|
||||
ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
|
||||
cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
|
||||
if (cc == 0) return false; // no reader
|
||||
}
|
||||
// just compare & exchange
|
||||
if (el->rc_.compare_exchange_weak(
|
||||
cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
|
||||
break;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
wt_.fetch_add(1, std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename R, typename E>
|
||||
bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) {
|
||||
if (cur == cursor()) return false; // acquire
|
||||
auto* el = elems + circ::index_of(cur++);
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
for (unsigned k = 0;;) {
|
||||
auto cur_rc = el->rc_.load(std::memory_order_acquire);
|
||||
if ((cur_rc & ep_mask) == 0) {
|
||||
std::forward<R>(out)(true);
|
||||
return true;
|
||||
}
|
||||
auto nxt_rc = cur_rc & ~static_cast<rc_t>(wrapper->connected_id());
|
||||
if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
|
||||
std::forward<R>(out)((nxt_rc & ep_mask) == 0);
|
||||
return true;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
template <>
|
||||
struct prod_cons_impl<wr<relat::multi, relat::multi, trans::broadcast>> {
|
||||
|
||||
using rc_t = std::uint64_t;
|
||||
using flag_t = std::uint64_t;
|
||||
|
||||
enum : rc_t {
|
||||
rc_mask = 0x00000000ffffffffull,
|
||||
ep_mask = 0x00ffffffffffffffull,
|
||||
ep_incr = 0x0100000000000000ull,
|
||||
ic_mask = 0xff000000ffffffffull,
|
||||
ic_incr = 0x0000000100000000ull
|
||||
};
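    // rc_ bit layout (as used below): bits 0-31 (rc_mask) = bitmask of connected readers that
    // still have to consume this slot; bits 32-55 (~ic_mask) = per-acquisition counter advanced
    // by inc_rc() via ic_incr; bits 56-63 (~ep_mask) = writer epoch, advanced by ep_incr in force_push().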
|
||||
|
||||
template <std::size_t DataSize, std::size_t AlignSize>
|
||||
struct elem_t {
|
||||
std::aligned_storage_t<DataSize, AlignSize> data_ {};
|
||||
std::atomic<rc_t > rc_ { 0 }; // read-counter
|
||||
std::atomic<flag_t> f_ct_ { 0 }; // commit flag
|
||||
};
|
||||
|
||||
alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index
|
||||
alignas(cache_line_size) std::atomic<rc_t> epoch_ { 0 };
|
||||
|
||||
circ::u2_t cursor() const noexcept {
|
||||
return ct_.load(std::memory_order_acquire);
|
||||
}
|
||||
|
||||
constexpr static rc_t inc_rc(rc_t rc) noexcept {
|
||||
return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask);
|
||||
}
|
||||
|
||||
constexpr static rc_t inc_mask(rc_t rc) noexcept {
|
||||
return inc_rc(rc) & ~rc_mask;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool push(W* wrapper, F&& f, E* elems) {
|
||||
E* el;
|
||||
circ::u2_t cur_ct;
|
||||
rc_t epoch = epoch_.load(std::memory_order_acquire);
|
||||
for (unsigned k = 0;;) {
|
||||
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
|
||||
if (cc == 0) return false; // no reader
|
||||
el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
|
||||
// check all consumers have finished reading this element
|
||||
auto cur_rc = el->rc_.load(std::memory_order_relaxed);
|
||||
circ::cc_t rem_cc = cur_rc & rc_mask;
|
||||
if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) {
|
||||
return false; // has not finished yet
|
||||
}
|
||||
else if (!rem_cc) {
|
||||
auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
|
||||
if ((cur_fl != cur_ct) && cur_fl) {
|
||||
return false; // full
|
||||
}
|
||||
}
|
||||
// consider rem_cc to be 0 here
|
||||
if (el->rc_.compare_exchange_weak(
|
||||
cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed) &&
|
||||
epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) {
|
||||
break;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
    // only one thread/process reaches this point at a time
|
||||
ct_.store(cur_ct + 1, std::memory_order_release);
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
// set flag & try update wt
|
||||
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename E>
|
||||
bool force_push(W* wrapper, F&& f, E* elems) {
|
||||
E* el;
|
||||
circ::u2_t cur_ct;
|
||||
rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
|
||||
for (unsigned k = 0;;) {
|
||||
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
|
||||
if (cc == 0) return false; // no reader
|
||||
el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
|
||||
// check all consumers have finished reading this element
|
||||
auto cur_rc = el->rc_.load(std::memory_order_acquire);
|
||||
circ::cc_t rem_cc = cur_rc & rc_mask;
|
||||
if (cc & rem_cc) {
|
||||
ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
|
||||
cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
|
||||
if (cc == 0) return false; // no reader
|
||||
}
|
||||
// just compare & exchange
|
||||
if (el->rc_.compare_exchange_weak(
|
||||
cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed)) {
|
||||
if (epoch == epoch_.load(std::memory_order_acquire)) {
|
||||
break;
|
||||
}
|
||||
else if (push(wrapper, std::forward<F>(f), elems)) {
|
||||
return true;
|
||||
}
|
||||
epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
    // only one thread/process reaches this point at a time
|
||||
ct_.store(cur_ct + 1, std::memory_order_release);
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
// set flag & try update wt
|
||||
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
|
||||
template <typename W, typename F, typename R, typename E, std::size_t N>
|
||||
bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) {
|
||||
auto* el = elems + circ::index_of(cur);
|
||||
auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
|
||||
if (cur_fl != ~static_cast<flag_t>(cur)) {
|
||||
return false; // empty
|
||||
}
|
||||
++cur;
|
||||
std::forward<F>(f)(&(el->data_));
|
||||
for (unsigned k = 0;;) {
|
||||
auto cur_rc = el->rc_.load(std::memory_order_acquire);
|
||||
if ((cur_rc & rc_mask) == 0) {
|
||||
std::forward<R>(out)(true);
|
||||
el->f_ct_.store(cur + N - 1, std::memory_order_release);
|
||||
return true;
|
||||
}
|
||||
auto nxt_rc = inc_rc(cur_rc) & ~static_cast<rc_t>(wrapper->connected_id());
|
||||
bool last_one = false;
|
||||
if ((last_one = (nxt_rc & rc_mask) == 0)) {
|
||||
el->f_ct_.store(cur + N - 1, std::memory_order_release);
|
||||
}
|
||||
if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
|
||||
std::forward<R>(out)(last_one);
|
||||
return true;
|
||||
}
|
||||
ipc::yield(k);
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
} // namespace ipc
|
||||
@@ -1,58 +0,0 @@
|
||||
The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU \citep{extendedngpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as a basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows with the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions \citep{hochreiter2001gradient}. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section~\ref{sec:attention}.
|
||||
|
||||
Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations \citep{cheng2016long, decomposableAttnModel, paulus2017deep, lin2017structured}.
|
||||
|
||||
End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks \citep{sukhbaatar2015}.
|
||||
|
||||
To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution.
|
||||
In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as \citep{neural_gpu, NalBytenet2017} and \citep{JonasFaceNet2017}.
|
||||
|
||||
|
||||
%\citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), English-to-French (EnFr) and English-to-Romanian language pairs.
|
||||
|
||||
%For example,! in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at low computation cost, making it an essential ingredient in competitive recurrent models for machine translation.
|
||||
|
||||
%A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture.
|
||||
|
||||
%After the seminal models introduced in \citep{sutskever14, bahdanau2014neural, cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation (MT) and language modeling with recurrent endoder-decoder and recurrent language models. Recent effort \citep{shazeer2017outrageously} has successfully combined the power of conditional computation with sequence models to train very large models for MT, pushing SOTA at lower computational cost.
|
||||
|
||||
%Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_t$. This dependence on the previous hidden state precludes processing all timesteps at once, instead requiring long sequences of sequential operations. In practice, this results in greatly reduced computational efficiency, as on modern computing hardware, a single operation on a large batch is much faster than a large number of operations on small batches. The problem gets worse at longer sequence lengths. Although sequential computation is not a severe bottleneck at inference time, as autoregressively generating each output requires all previous outputs, the inability to compute scores at all output positions at once hinders us from rapidly training our models over large datasets. Although impressive work such as \citep{Kuchaiev2017Factorization} is able to significantly accelerate the training of LSTMs with factorization tricks, we are still bound by the linear dependence on sequence length.
|
||||
|
||||
%If the model could compute hidden states at each time step using only the inputs and outputs, it would be liberated from the dependence on results from previous time steps during training. This line of thought is the foundation of recent efforts such as the Markovian neural GPU \citep{neural_gpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as a building block to compute hidden representations simultaneously for all timesteps, resulting in $O(1)$ sequential time complexity. \citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), Enlish-to-French (EnFr) and English-to-Romanian language pairs.
|
||||
|
||||
%A crucial component for accurate sequence prediction is modeling cross-positional communication. For example, in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at a low computation cost, also $O(1)$ sequential time complexity, making it an essential ingredient in recurrent encoder-decoder architectures for MT. A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture.
|
||||
|
||||
|
||||
|
||||
%Note: Facebook model is no better than RNNs in this regard, since it requires a number of layers proportional to the distance you want to communicate. Bytenet is more promising, since it requires a logarithmnic number of layers (does bytenet have SOTA results)?
|
||||
|
||||
%Note: An attention layer can connect a very large number of positions at a low computation cost in O(1) sequential operations. This is why encoder-decoder attention has been so successful in seq-to-seq models so far. It is only natural, then, to also use attention to connect the timesteps of the same sequence.
|
||||
|
||||
%Note: I wouldn't say that long sequences are not a problem during inference. It would be great if we could infer with no long sequences. We could just say later on that, while our training graph is constant-depth, our model still requires sequential operations in the decoder part during inference due to the autoregressive nature of the model.
|
||||
|
||||
%\begin{table}[h!]
|
||||
%\caption{Attention models are quite efficient for cross-positional communications when sequence length is smaller than channel depth. $n$ represents the sequence length and $d$ represents the channel depth.}
|
||||
%\label{tab:op_complexities}
|
||||
%\begin{center}
|
||||
%\vspace{-5pt}
|
||||
%\scalebox{0.75}{
|
||||
|
||||
%\begin{tabular}{l|c|c|c}
|
||||
%\hline \hline
|
||||
%Layer Type & Receptive & Complexity & Sequential \\
|
||||
% & Field & & Operations \\
|
||||
%\hline
|
||||
%Pointwise Feed-Forward & $1$ & $O(n \cdot d^2)$ & $O(1)$ \\
|
||||
%\hline
|
||||
%Recurrent & $n$ & $O(n \cdot d^2)$ & $O(n)$ \\
|
||||
%\hline
|
||||
%Convolutional & $r$ & $O(r \cdot n \cdot d^2)$ & $O(1)$ \\
|
||||
%\hline
|
||||
%Convolutional (separable) & $r$ & $O(r \cdot n \cdot d + n %\cdot d^2)$ & $O(1)$ \\
|
||||
%\hline
|
||||
%Attention & $r$ & $O(r \cdot n \cdot d)$ & $O(1)$ \\
|
||||
%\hline \hline
|
||||
%\end{tabular}
|
||||
%}
|
||||
%\end{center}
|
||||
%\end{table}
|
||||
@@ -1,18 +0,0 @@
|
||||
Recurrent neural networks, long short-term memory \citep{hochreiter1997} and gated recurrent \citep{gruEval14} neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation \citep{sutskever14, bahdanau2014neural, cho2014learning}. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures \citep{wu2016google,luong2015effective,jozefowicz2016exploring}.
|
||||
|
||||
Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_{t-1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples.
|
||||
%\marginpar{not sure if the memory constraints are understandable here}
|
||||
Recent work has achieved significant improvements in computational efficiency through factorization tricks \citep{Kuchaiev2017Factorization} and conditional computation \citep{shazeer2017outrageously}, while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains.
|
||||
|
||||
%\marginpar{@all: there is work on analyzing what attention really does in seq2seq models, couldn't find it right away}
|
||||
|
||||
Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences \citep{bahdanau2014neural, structuredAttentionNetworks}. In all but a few cases \citep{decomposableAttnModel}, however, such attention mechanisms are used in conjunction with a recurrent network.
|
||||
|
||||
%\marginpar{not sure if "cross-positional communication" is understandable without explanation}
|
||||
%\marginpar{insert exact training times and stats for the model that reaches sota earliest, maybe even a single GPU model?}
|
||||
|
||||
In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.
|
||||
%\marginpar{you removed the constant number of repetitions part. I wrote it because I wanted to make it clear that the model does not only perform attention once, while it's also not recurrent. I thought that might be important to get across early.}
|
||||
|
||||
% Just a standard paragraph with citations, rewrite.
|
||||
%After the seminal papers of \citep{sutskever14}, \citep{bahdanau2014neural}, and \citep{cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation and language modeling with recurrent sequence models. Recent effort \citep{shazeer2017outrageously} has combined the power of conditional computation with sequence models to train very large models for machine translation, pushing SOTA at lower computational cost. Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_t$. This dependence on the previous hidden state encumbers recurrnet models to process multiple inputs at once, and their time complexity is a linear function of the length of the input and output, both during training and inference. [What I want to say here is that although this is fine during decoding, at training time, we are given both input and output and this linear nature does not allow the RNN to process all inputs and outputs simultaneously and haven't been used on datasets that are the of the scale of the web. What's the largest dataset we have ? . Talk about Nividia and possibly other's effors to speed up things, and possibly other efforts that alleviate this, but are still limited by it's comptuational nature]. Rest of the intro: What if you could construct the state based on the actual inputs and outputs, then you could construct them all at once. This has been the foundation of many promising recent efforts, bytenet,facenet (Also talk about quasi rnn here). Now we talk about attention!! Along with cell architectures such as long short-term meory (LSTM) \citep{hochreiter1997}, and gated recurrent units (GRUs) \citep{cho2014learning}, attention has emerged as an essential ingredient in successful sequence models, in particular for machine translation. In recent years, many, if not all, state-of-the-art (SOTA) results in machine translation have been achieved with attention-based sequence models \citep{wu2016google,luong2015effective,jozefowicz2016exploring}. Talk about the neon work on how it played with attention to do self attention! Then talk about what we do.
|
||||
@@ -1,155 +0,0 @@
|
||||
|
||||
\begin{figure}
|
||||
\centering
|
||||
\includegraphics[scale=0.6]{Figures/ModalNet-21}
|
||||
\caption{The Transformer - model architecture.}
|
||||
\label{fig:model-arch}
|
||||
\end{figure}
|
||||
|
||||
% Although the primary workhorse of our model is attention,
|
||||
%Our model maintains the encoder-decoder structure that is common to many so-called sequence-to-sequence models \citep{bahdanau2014neural,sutskever14}. As in all such architectures, the encoder computes a representation of the input sequence, and the decoder consumes these representations along with the output tokens to autoregressively produce the output sequence. Where, traditionally, the encoder and decoder contain stacks of recurrent or convolutional layers, our encoder and decoder stacks are composed of attention layers and position-wise feed-forward layers (Figure~\ref{fig:model-arch}). The following sections describe the gross architecture and these particular components in detail.
|
||||
|
||||
Most competitive neural sequence transduction models have an encoder-decoder structure \citep{cho2014learning,bahdanau2014neural,sutskever14}. Here, the encoder maps an input sequence of symbol representations $(x_1, ..., x_n)$ to a sequence of continuous representations $\mathbf{z} = (z_1, ..., z_n)$. Given $\mathbf{z}$, the decoder then generates an output sequence $(y_1,...,y_m)$ of symbols one element at a time. At each step the model is auto-regressive \citep{graves2013generating}, consuming the previously generated symbols as additional input when generating the next.
|
||||
|
||||
The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure~\ref{fig:model-arch}, respectively.
|
||||
|
||||
\subsection{Encoder and Decoder Stacks}
|
||||
|
||||
\paragraph{Encoder:}The encoder is composed of a stack of $N=6$ identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection \citep{he2016deep} around each of the two sub-layers, followed by layer normalization \cite{layernorm2016}. That is, the output of each sub-layer is $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $\dmodel=512$.
|
||||
|
||||
\paragraph{Decoder:}The decoder is also composed of a stack of $N=6$ identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$.
|
||||
|
||||
% In our model (Figure~\ref{fig:model-arch}), the encoder and decoder are composed of stacks of alternating self-attention layers (for cross-positional communication) and position-wise feed-forward layers (for in-place computation). In addition, the decoder stack contains encoder-decoder attention layers. Since attention is agnostic to the distances between words, our model requires a "positional encoding" to be added to the encoder and decoder input. The following sections describe all of these components in detail.
|
||||
|
||||
\subsection{Attention} \label{sec:attention}
|
||||
An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.
|
||||
|
||||
\subsubsection{Scaled Dot-Product Attention} \label{sec:scaled-dot-prod}
|
||||
|
||||
% \begin{figure}
|
||||
% \centering
|
||||
% \includegraphics[scale=0.6]{Figures/ModalNet-19}
|
||||
% \caption{Scaled Dot-Product Attention.}
|
||||
% \label{fig:multi-head-att}
|
||||
% \end{figure}
|
||||
|
||||
We call our particular attention "Scaled Dot-Product Attention" (Figure~\ref{fig:multi-head-att}). The input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. We compute the dot products of the query with all keys, divide each by $\sqrt{d_k}$, and apply a softmax function to obtain the weights on the values.
|
||||
|
||||
In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$. The keys and values are also packed together into matrices $K$ and $V$. We compute the matrix of outputs as:
|
||||
|
||||
\begin{equation}
|
||||
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V
|
||||
\end{equation}
|
||||
|
||||
The two most commonly used attention functions are additive attention \citep{bahdanau2014neural}, and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of $\frac{1}{\sqrt{d_k}}$. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.
|
||||
|
||||
%We scale the dot products by $1/\sqrt{d_k}$ to limit the magnitude of the dot products, which works well in practice. Otherwise, we found applying the softmax to often result in weights very close to 0 or 1, and hence minuscule gradients.
|
||||
|
||||
% Already described in the subsequent section
|
||||
%When used as part of decoder self-attention, an optional mask function is applied just before the softmax to prevent positions from attending to subsequent positions. This mask simply sets the logits corresponding to all illegal connections (those outside of the lower triangle) to $-\infty$.
|
||||
|
||||
%\paragraph{Comparison to Additive Attention: } We choose dot product attention over additive attention \citep{bahdanau2014neural} since it can be computed using highly optimized matrix multiplication code. This optimization is particularly important to us, as we employ many attention layers in our model.
|
||||
|
||||
While for small values of $d_k$ the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of $d_k$ \citep{DBLP:journals/corr/BritzGLL17}. We suspect that for large values of $d_k$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients \footnote{To illustrate why the dot products get large, assume that the components of $q$ and $k$ are independent random variables with mean $0$ and variance $1$. Then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_ik_i$, has mean $0$ and variance $d_k$.}. To counteract this effect, we scale the dot products by $\frac{1}{\sqrt{d_k}}$.
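
For illustration, the scaled dot-product attention defined above can be sketched in a few lines of NumPy; the function name and the optional mask argument are ours, not part of the model specification:

\begin{verbatim}
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    # Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # query-key compatibilities
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # positions where mask is False get ~ -inf
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)         # row-wise softmax
    return w @ V                               # weighted sum of the values
\end{verbatim}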
|
||||
|
||||
|
||||
%We suspect this to be caused by the dot products growing too large in magnitude to result in useful gradients after applying the softmax function. To counteract this, we scale the dot product by $1/\sqrt{d_k}$.
|
||||
|
||||
|
||||
\subsubsection{Multi-Head Attention} \label{sec:multihead}
|
||||
|
||||
\begin{figure}
|
||||
\begin{minipage}[t]{0.5\textwidth}
|
||||
\centering
|
||||
Scaled Dot-Product Attention \\
|
||||
\vspace{0.5cm}
|
||||
\includegraphics[scale=0.6]{Figures/ModalNet-19}
|
||||
\end{minipage}
|
||||
\begin{minipage}[t]{0.5\textwidth}
|
||||
\centering
|
||||
Multi-Head Attention \\
|
||||
\vspace{0.1cm}
|
||||
\includegraphics[scale=0.6]{Figures/ModalNet-20}
|
||||
\end{minipage}
|
||||
|
||||
|
||||
% \centering
|
||||
|
||||
\caption{(left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.}
|
||||
\label{fig:multi-head-att}
|
||||
\end{figure}
|
||||
|
||||
Instead of performing a single attention function with $\dmodel$-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values $h$ times with different, learned linear projections to $d_k$, $d_k$ and $d_v$ dimensions, respectively.
|
||||
On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding $d_v$-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure~\ref{fig:multi-head-att}.
|
||||
|
||||
Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.
|
||||
|
||||
\begin{align*}
|
||||
\mathrm{MultiHead}(Q, K, V) &= \mathrm{Concat}(\mathrm{head_1}, ..., \mathrm{head_h})W^O\\
|
||||
% \mathrm{where} \mathrm{head_i} &= \mathrm{Attention}(QW_Q_i^{\dmodel \times d_q}, KW_K_i^{\dmodel \times d_k}, VW^V_i^{\dmodel \times d_v})\\
|
||||
\text{where}~\mathrm{head_i} &= \mathrm{Attention}(QW^Q_i, KW^K_i, VW^V_i)\\
|
||||
\end{align*}
|
||||
|
||||
Where the projections are parameter matrices $W^Q_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^K_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^V_i \in \mathbb{R}^{\dmodel \times d_v}$ and $W^O \in \mathbb{R}^{hd_v \times \dmodel}$.
|
||||
|
||||
|
||||
%find it better (and no more expensive) to have multiple parallel attention layers (each over the full set of positions) with proportionally lower-dimensional keys, values and queries. We call this "Multi-Head Attention" (Figure~\ref{fig:multi-head-att}). The keys, values, and queries for each of these parallel attention layers are computed by learned linear transformations of the inputs to the multi-head attention. We use different linear transformations across different parallel attention layers. The output of the parallel attention layers are concatenated, and then passed through a final learned linear transformation.
|
||||
|
||||
In this work we employ $h=8$ parallel attention layers, or heads. For each of these we use $d_k=d_v=\dmodel/h=64$.
|
||||
Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.
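
A self-contained NumPy sketch of the multi-head computation above, assuming the per-head projection matrices are supplied as lists (the names are ours, not the authors' code):

\begin{verbatim}
import numpy as np

def multi_head_attention(Q, K, V, WQ, WK, WV, WO):
    # WQ/WK/WV: length-h lists of (d_model, d_k) / (d_model, d_v) matrices; WO: (h*d_v, d_model)
    heads = []
    for wq, wk, wv in zip(WQ, WK, WV):
        q, k, v = Q @ wq, K @ wk, V @ wv
        scores = q @ k.T / np.sqrt(q.shape[-1])
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)      # softmax over the keys
        heads.append(w @ v)                     # one d_v-dimensional head
    return np.concatenate(heads, axis=-1) @ WO  # concatenate heads, project back to d_model
\end{verbatim}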
|
||||
|
||||
\subsubsection{Applications of Attention in our Model}
|
||||
|
||||
The Transformer uses multi-head attention in three different ways:
|
||||
\begin{itemize}
|
||||
\item In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as \citep{wu2016google, bahdanau2014neural,JonasFaceNet2017}.
|
||||
|
||||
\item The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.
|
||||
|
||||
\item Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to $-\infty$) all values in the input of the softmax which correspond to illegal connections. See Figure~\ref{fig:multi-head-att}.
|
||||
|
||||
\end{itemize}
|
||||
|
||||
\subsection{Position-wise Feed-Forward Networks}\label{sec:ffn}
|
||||
|
||||
In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.
|
||||
|
||||
\begin{equation}
|
||||
\mathrm{FFN}(x)=\max(0, xW_1 + b_1) W_2 + b_2
|
||||
\end{equation}
|
||||
|
||||
While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is $\dmodel=512$, and the inner-layer has dimensionality $d_{ff}=2048$.
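
A minimal sketch of the feed-forward sub-layer above (shapes follow $\dmodel=512$ and $d_{ff}=2048$; the function name is ours):

\begin{verbatim}
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    # x: (seq_len, d_model); W1: (d_model, d_ff); W2: (d_ff, d_model)
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2  # same weights applied at every position
\end{verbatim}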
|
||||
|
||||
|
||||
|
||||
%In the appendix, we describe how the position-wise feed-forward network can also be seen as a form of attention.
|
||||
|
||||
%from Jakob: The number of operations required for the model to relate signals from two arbitrary input or output positions grows in the distance between positions in input or output, linearly for ConvS2S and logarithmically for ByteNet, making it harder to learn dependencies between these positions \citep{hochreiter2001gradient}. In the transformer this is reduced to a constant number of operations, albeit at the cost of effective resolution caused by averaging attention-weighted positions, an effect we aim to counteract with multi-headed attention.
|
||||
|
||||
|
||||
%Figure~\ref{fig:simple-att} presents a simple attention function, $A$, with a single head, that forms the basis of our multi-head attention. $A$ takes a query key vector $\kq$, matrices of memory keys $\km$ and memory values $\vm$ ,and produces a query value vector $\vq$ as
|
||||
%\begin{equation*} \label{eq:attention}
|
||||
% A(\kq, \km, \vm) = {\vm}^T (Softmax(\km \kq).
|
||||
%\end{equation*}
|
||||
%We linearly transform $\kq,\,\km$, and $\vm$ with learned matrices ${\Wkq \text{,} \, \Wkm}$, and ${\Wvm}$ before calling the attention function, and transform the output query with $\Wvq$ before handing it to the feed forward layer. Each attention layer has it's own set of transformation matrices, which are shared across all query positions. $A$ is applied in parallel for each query position, and is implemented very efficiently as a batch of matrix multiplies. The self-attention and encoder-decoder attention layers use $A$, but with different arguments. For example, in encdoder self-attention, queries in encoder layer $i$ attention to memories in encoder layer $i-1$. To ensure that decoder self-attention layers do not look at future words, we add $- \inf$ to the softmax logits in positions $j+1$ to query length for query position $l$.
|
||||
|
||||
%In simple attention, the query value is a weighted combination of the memory values where the attention weights sum to one. Although this function performs well in practice, the constraint on attention weights can restrict the amount of information that flows from memories to queries because the query cannot focus on multiple memory positions at once, which might be desirable when translating long sequences. \marginpar{@usz, could you think of an example of this ?} We remedy this by maintaining multiple attention heads at each query position that attend to all memory positions in parallel, with a different set of parameters per attention head $h$.
|
||||
%\marginpar{}
|
||||
|
||||
\subsection{Embeddings and Softmax}
|
||||
Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension $\dmodel$. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to \citep{press2016using}. In the embedding layers, we multiply those weights by $\sqrt{\dmodel}$.
|
||||
|
||||
|
||||
\subsection{Positional Encoding}
|
||||
Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $\dmodel$ as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed \citep{JonasFaceNet2017}.
|
||||
|
||||
In this work, we use sine and cosine functions of different frequencies:
|
||||
|
||||
\begin{align*}
|
||||
PE_{(pos,2i)} = sin(pos / 10000^{2i/\dmodel}) \\
|
||||
PE_{(pos,2i+1)} = cos(pos / 10000^{2i/\dmodel})
|
||||
\end{align*}
|
||||
|
||||
where $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\pi$ to $10000 \cdot 2\pi$. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$.
|
||||
|
||||
We also experimented with using learned positional embeddings \citep{JonasFaceNet2017} instead, and found that the two versions produced nearly identical results (see Table~\ref{tab:variations} row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.
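
The sinusoidal encoding above can be generated directly; a short illustrative sketch (the function name is ours, and an even $\dmodel$ is assumed):

\begin{verbatim}
import numpy as np

def sinusoidal_encoding(max_len, d_model):
    pos = np.arange(max_len)[:, None]                   # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]                 # dimension index
    angles = pos / np.power(10000.0, (2 * i) / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)                         # even dimensions
    pe[:, 1::2] = np.cos(angles)                         # odd dimensions
    return pe
\end{verbatim}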
|
||||
@@ -1,45 +0,0 @@
|
||||
\pagebreak
|
||||
\section*{Two Feed-Forward Layers = Attention over Parameters}\label{sec:parameter_attention}
|
||||
|
||||
In addition to attention layers, our model contains position-wise feed-forward networks (Section \ref{sec:ffn}), which consist of two linear transformations with a ReLU activation in between. In fact, these networks too can be seen as a form of attention. Compare the formula for such a network with the formula for a simple dot-product attention layer (biases and scaling factors omitted):
|
||||
|
||||
\begin{align*}
|
||||
FFN(x, W_1, W_2) = ReLU(xW_1)W_2 \\
|
||||
A(q, K, V) = Softmax(qK^T)V
|
||||
\end{align*}
|
||||
|
||||
Based on the similarity of these formulae, the two-layer feed-forward network can be seen as a kind of attention, where the keys and values are the rows of the trainable parameter matrices $W_1$ and $W_2$, and where we use ReLU instead of Softmax in the compatibility function.
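
To make the correspondence concrete, a small NumPy sketch of the feed-forward network written in this attention-over-parameters view (names and the exact transposition convention are ours):

\begin{verbatim}
import numpy as np

def ffn_as_attention_over_parameters(x, W1, W2):
    # Each column of W1 plays the role of a key, each row of W2 the role of a value;
    # the compatibility function is ReLU(q . k_i) instead of a softmax.
    compat = np.maximum(0.0, x @ W1)   # (seq_len, d_ff): one weight per key/value pair
    return compat @ W2                 # weighted sum of the "value" rows of W2
\end{verbatim}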
|
||||
|
||||
%the compatablity function is $compat(q, k_i) = ReLU(q \cdot k_i)$ instead of $Softmax(qK_T)_i$.
|
||||
|
||||
Given this similarity, we experimented with replacing the position-wise feed-forward networks with attention layers similar to the ones we use everywhere else in our model. The multi-head-attention-over-parameters sublayer is identical to the multi-head attention described in \ref{sec:multihead}, except that the "keys" and "values" inputs to each attention head are trainable model parameters, as opposed to being linear projections of a previous layer. These parameters are scaled up by a factor of $\sqrt{d_{model}}$ in order to be more similar to activations.
|
||||
|
||||
In our first experiment, we replaced each position-wise feed-forward network with a multi-head-attention-over-parameters sublayer with $h_p=8$ heads, key-dimensionality $d_{pk}=64$, and value-dimensionality $d_{pv}=64$, using $n_p=1536$ key-value pairs for each attention head. The sublayer has a total of $2097152$ parameters, including the parameters in the query projection and the output projection. This matches the number of parameters in the position-wise feed-forward network that we replaced. While the theoretical amount of computation is also the same, in practice, the attention version caused the step times to be about 30\% longer.
|
||||
|
||||
In our second experiment, we used $h_p=8$ heads, and $n_p=512$ key-value pairs for each attention head, again matching the total number of parameters in the base model.
|
||||
|
||||
Results for the first experiment were slightly worse than for the base model, and results for the second experiment were slightly better, see Table~\ref{tab:parameter_attention}.
|
||||
|
||||
\begin{table}[h]
|
||||
\caption{Replacing the position-wise feed-forward networks with multihead-attention-over-parameters produces similar results to the base model. All metrics are on the English-to-German translation development set, newstest2013.}
|
||||
\label{tab:parameter_attention}
|
||||
\begin{center}
|
||||
\vspace{-2mm}
|
||||
%\scalebox{1.0}{
|
||||
\begin{tabular}{c|cccccc|cccc}
|
||||
\hline\rule{0pt}{2.0ex}
|
||||
& \multirow{2}{*}{$\dmodel$} & \multirow{2}{*}{$\dff$} &
|
||||
\multirow{2}{*}{$h_p$} & \multirow{2}{*}{$d_{pk}$} & \multirow{2}{*}{$d_{pv}$} &
|
||||
\multirow{2}{*}{$n_p$} &
|
||||
PPL & BLEU & params & training\\
|
||||
& & & & & & & (dev) & (dev) & $\times10^6$ & time \\
|
||||
\hline\rule{0pt}{2.0ex}
|
||||
base & 512 & 2048 & & & & & 4.92 & 25.8 & 65 & 12 hours\\
|
||||
\hline\rule{0pt}{2.0ex}
|
||||
AOP$_1$ & 512 & & 8 & 64 & 64 & 1536 & 4.92& 25.5 & 65 & 16 hours\\
|
||||
AOP$_2$ & 512 & & 16 & 64 & 64 & 512 & \textbf{4.86} & \textbf{25.9} & 65 & 16 hours \\
|
||||
\hline
|
||||
\end{tabular}
|
||||
%}
|
||||
\end{center}
|
||||
\end{table}
|
||||
@@ -1,8 +0,0 @@
|
||||
The ancestor of ChatGPT: "Attention is all you need"
|
||||
|
||||
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
|
||||
|
||||
The actual abstract is as follows:
|
||||
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
|
||||
|
||||
https://arxiv.org/abs/1706.03762
|
||||
@@ -1,2 +0,0 @@
|
||||
from stable_baselines3.dqn.dqn import DQN
|
||||
from stable_baselines3.dqn.policies import CnnPolicy, MlpPolicy
|
||||
@@ -1,245 +0,0 @@
|
||||
from typing import Any, Dict, List, Optional, Tuple, Type, Union
|
||||
|
||||
import gym
|
||||
import numpy as np
|
||||
import torch as th
|
||||
from torch.nn import functional as F
|
||||
|
||||
from stable_baselines3.common import logger
|
||||
from stable_baselines3.common.off_policy_algorithm import OffPolicyAlgorithm
|
||||
from stable_baselines3.common.preprocessing import maybe_transpose
|
||||
from stable_baselines3.common.type_aliases import GymEnv, MaybeCallback, Schedule
|
||||
from stable_baselines3.common.utils import get_linear_fn, is_vectorized_observation, polyak_update
|
||||
from stable_baselines3.dqn.policies import DQNPolicy
|
||||
|
||||
|
||||
class DQN(OffPolicyAlgorithm):
|
||||
"""
|
||||
Deep Q-Network (DQN)
|
||||
|
||||
Paper: https://arxiv.org/abs/1312.5602, https://www.nature.com/articles/nature14236
|
||||
Default hyperparameters are taken from the nature paper,
|
||||
except for the optimizer and learning rate that were taken from Stable Baselines defaults.
|
||||
|
||||
:param policy: The policy model to use (MlpPolicy, CnnPolicy, ...)
|
||||
:param env: The environment to learn from (if registered in Gym, can be str)
|
||||
:param learning_rate: The learning rate, it can be a function
|
||||
of the current progress remaining (from 1 to 0)
|
||||
:param buffer_size: size of the replay buffer
|
||||
:param learning_starts: how many steps of the model to collect transitions for before learning starts
|
||||
:param batch_size: Minibatch size for each gradient update
|
||||
:param tau: the soft update coefficient ("Polyak update", between 0 and 1) default 1 for hard update
|
||||
:param gamma: the discount factor
|
||||
:param train_freq: Update the model every ``train_freq`` steps. Alternatively pass a tuple of frequency and unit
|
||||
like ``(5, "step")`` or ``(2, "episode")``.
|
||||
:param gradient_steps: How many gradient steps to do after each rollout (see ``train_freq``)
|
||||
Set to ``-1`` means to do as many gradient steps as steps done in the environment
|
||||
during the rollout.
|
||||
:param optimize_memory_usage: Enable a memory efficient variant of the replay buffer
|
||||
at a cost of more complexity.
|
||||
See https://github.com/DLR-RM/stable-baselines3/issues/37#issuecomment-637501195
|
||||
:param target_update_interval: update the target network every ``target_update_interval``
|
||||
environment steps.
|
||||
:param exploration_fraction: fraction of entire training period over which the exploration rate is reduced
|
||||
:param exploration_initial_eps: initial value of random action probability
|
||||
:param exploration_final_eps: final value of random action probability
|
||||
:param max_grad_norm: The maximum value for the gradient clipping
|
||||
:param tensorboard_log: the log location for tensorboard (if None, no logging)
|
||||
:param create_eval_env: Whether to create a second environment that will be
|
||||
used for evaluating the agent periodically. (Only available when passing string for the environment)
|
||||
:param policy_kwargs: additional arguments to be passed to the policy on creation
|
||||
:param verbose: the verbosity level: 0 no output, 1 info, 2 debug
|
||||
:param seed: Seed for the pseudo random generators
|
||||
:param device: Device (cpu, cuda, ...) on which the code should be run.
|
||||
Setting it to auto, the code will be run on the GPU if possible.
|
||||
:param _init_setup_model: Whether or not to build the network at the creation of the instance
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
policy: Union[str, Type[DQNPolicy]],
|
||||
env: Union[GymEnv, str],
|
||||
learning_rate: Union[float, Schedule] = 1e-4,
|
||||
buffer_size: int = 1000000,
|
||||
learning_starts: int = 50000,
|
||||
batch_size: Optional[int] = 32,
|
||||
tau: float = 1.0,
|
||||
gamma: float = 0.99,
|
||||
train_freq: Union[int, Tuple[int, str]] = 4,
|
||||
gradient_steps: int = 1,
|
||||
optimize_memory_usage: bool = False,
|
||||
target_update_interval: int = 10000,
|
||||
exploration_fraction: float = 0.1,
|
||||
exploration_initial_eps: float = 1.0,
|
||||
exploration_final_eps: float = 0.05,
|
||||
max_grad_norm: float = 10,
|
||||
tensorboard_log: Optional[str] = None,
|
||||
create_eval_env: bool = False,
|
||||
policy_kwargs: Optional[Dict[str, Any]] = None,
|
||||
verbose: int = 0,
|
||||
seed: Optional[int] = None,
|
||||
device: Union[th.device, str] = "auto",
|
||||
_init_setup_model: bool = True,
|
||||
):
|
||||
|
||||
super(DQN, self).__init__(
|
||||
policy,
|
||||
env,
|
||||
DQNPolicy,
|
||||
learning_rate,
|
||||
buffer_size,
|
||||
learning_starts,
|
||||
batch_size,
|
||||
tau,
|
||||
gamma,
|
||||
train_freq,
|
||||
gradient_steps,
|
||||
action_noise=None, # No action noise
|
||||
policy_kwargs=policy_kwargs,
|
||||
tensorboard_log=tensorboard_log,
|
||||
verbose=verbose,
|
||||
device=device,
|
||||
create_eval_env=create_eval_env,
|
||||
seed=seed,
|
||||
sde_support=False,
|
||||
optimize_memory_usage=optimize_memory_usage,
|
||||
supported_action_spaces=(gym.spaces.Discrete,),
|
||||
)
|
||||
|
||||
self.exploration_initial_eps = exploration_initial_eps
|
||||
self.exploration_final_eps = exploration_final_eps
|
||||
self.exploration_fraction = exploration_fraction
|
||||
self.target_update_interval = target_update_interval
|
||||
self.max_grad_norm = max_grad_norm
|
||||
# "epsilon" for the epsilon-greedy exploration
|
||||
self.exploration_rate = 0.0
|
||||
# Linear schedule will be defined in `_setup_model()`
|
||||
self.exploration_schedule = None
|
||||
self.q_net, self.q_net_target = None, None
|
||||
|
||||
if _init_setup_model:
|
||||
self._setup_model()
|
||||
|
||||
def _setup_model(self) -> None:
|
||||
super(DQN, self)._setup_model()
|
||||
self._create_aliases()
|
||||
self.exploration_schedule = get_linear_fn(
|
||||
self.exploration_initial_eps, self.exploration_final_eps, self.exploration_fraction
|
||||
)
|
||||
|
||||
def _create_aliases(self) -> None:
|
||||
self.q_net = self.policy.q_net
|
||||
self.q_net_target = self.policy.q_net_target
|
||||
|
||||
def _on_step(self) -> None:
|
||||
"""
|
||||
Update the exploration rate and target network if needed.
|
||||
This method is called in ``collect_rollouts()`` after each step in the environment.
|
||||
"""
|
||||
if self.num_timesteps % self.target_update_interval == 0:
|
||||
polyak_update(self.q_net.parameters(), self.q_net_target.parameters(), self.tau)
|
||||
|
||||
self.exploration_rate = self.exploration_schedule(self._current_progress_remaining)
|
||||
logger.record("rollout/exploration rate", self.exploration_rate)
|
||||
|
||||
def train(self, gradient_steps: int, batch_size: int = 100) -> None:
|
||||
# Update learning rate according to schedule
|
||||
self._update_learning_rate(self.policy.optimizer)
|
||||
|
||||
losses = []
|
||||
for _ in range(gradient_steps):
|
||||
# Sample replay buffer
|
||||
replay_data = self.replay_buffer.sample(batch_size, env=self._vec_normalize_env)
|
||||
|
||||
with th.no_grad():
|
||||
# Compute the next Q-values using the target network
|
||||
next_q_values = self.q_net_target(replay_data.next_observations)
|
||||
# Follow greedy policy: use the one with the highest value
|
||||
next_q_values, _ = next_q_values.max(dim=1)
|
||||
# Avoid potential broadcast issue
|
||||
next_q_values = next_q_values.reshape(-1, 1)
|
||||
# 1-step TD target
|
||||
target_q_values = replay_data.rewards + (1 - replay_data.dones) * self.gamma * next_q_values
|
||||
|
||||
# Get current Q-values estimates
|
||||
current_q_values = self.q_net(replay_data.observations)
|
||||
|
||||
# Retrieve the q-values for the actions from the replay buffer
|
||||
current_q_values = th.gather(current_q_values, dim=1, index=replay_data.actions.long())
|
||||
|
||||
# Compute Huber loss (less sensitive to outliers)
|
||||
loss = F.smooth_l1_loss(current_q_values, target_q_values)
|
||||
losses.append(loss.item())
|
||||
|
||||
# Optimize the policy
|
||||
self.policy.optimizer.zero_grad()
|
||||
loss.backward()
|
||||
# Clip gradient norm
|
||||
th.nn.utils.clip_grad_norm_(self.policy.parameters(), self.max_grad_norm)
|
||||
self.policy.optimizer.step()
|
||||
|
||||
# Increase update counter
|
||||
self._n_updates += gradient_steps
|
||||
|
||||
logger.record("train/n_updates", self._n_updates, exclude="tensorboard")
|
||||
logger.record("train/loss", np.mean(losses))
|
||||
|
||||
def predict(
|
||||
self,
|
||||
observation: np.ndarray,
|
||||
state: Optional[np.ndarray] = None,
|
||||
mask: Optional[np.ndarray] = None,
|
||||
deterministic: bool = False,
|
||||
) -> Tuple[np.ndarray, Optional[np.ndarray]]:
|
||||
"""
|
||||
Overrides the base_class predict function to include epsilon-greedy exploration.
|
||||
|
||||
:param observation: the input observation
|
||||
:param state: The last states (can be None, used in recurrent policies)
|
||||
:param mask: The last masks (can be None, used in recurrent policies)
|
||||
:param deterministic: Whether or not to return deterministic actions.
|
||||
:return: the model's action and the next state
|
||||
(used in recurrent policies)
|
||||
"""
|
||||
if not deterministic and np.random.rand() < self.exploration_rate:
|
||||
if is_vectorized_observation(maybe_transpose(observation, self.observation_space), self.observation_space):
|
||||
n_batch = observation.shape[0]
|
||||
action = np.array([self.action_space.sample() for _ in range(n_batch)])
|
||||
else:
|
||||
action = np.array(self.action_space.sample())
|
||||
else:
|
||||
action, state = self.policy.predict(observation, state, mask, deterministic)
|
||||
return action, state
|
||||
|
||||
def learn(
|
||||
self,
|
||||
total_timesteps: int,
|
||||
callback: MaybeCallback = None,
|
||||
log_interval: int = 4,
|
||||
eval_env: Optional[GymEnv] = None,
|
||||
eval_freq: int = -1,
|
||||
n_eval_episodes: int = 5,
|
||||
tb_log_name: str = "DQN",
|
||||
eval_log_path: Optional[str] = None,
|
||||
reset_num_timesteps: bool = True,
|
||||
) -> OffPolicyAlgorithm:
|
||||
|
||||
return super(DQN, self).learn(
|
||||
total_timesteps=total_timesteps,
|
||||
callback=callback,
|
||||
log_interval=log_interval,
|
||||
eval_env=eval_env,
|
||||
eval_freq=eval_freq,
|
||||
n_eval_episodes=n_eval_episodes,
|
||||
tb_log_name=tb_log_name,
|
||||
eval_log_path=eval_log_path,
|
||||
reset_num_timesteps=reset_num_timesteps,
|
||||
)
|
||||
|
||||
def _excluded_save_params(self) -> List[str]:
|
||||
return super(DQN, self)._excluded_save_params() + ["q_net", "q_net_target"]
|
||||
|
||||
def _get_torch_save_params(self) -> Tuple[List[str], List[str]]:
|
||||
state_dicts = ["policy", "policy.optimizer"]
|
||||
|
||||
return state_dicts, []
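
A minimal usage sketch for the DQN class above; the environment id and hyperparameter values are arbitrary example values, not defaults taken from this module:

import gym
from stable_baselines3 import DQN

if __name__ == "__main__":
    env = gym.make("CartPole-v1")
    model = DQN("MlpPolicy", env, learning_rate=1e-4, buffer_size=100_000, verbose=1)
    model.learn(total_timesteps=50_000)              # epsilon-greedy exploration handled internally
    obs = env.reset()
    action, _ = model.predict(obs, deterministic=True)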
|
||||
@@ -1,237 +0,0 @@
|
||||
from typing import Any, Dict, List, Optional, Type
|
||||
|
||||
import gym
|
||||
import torch as th
|
||||
from torch import nn
|
||||
|
||||
from stable_baselines3.common.policies import BasePolicy, register_policy
|
||||
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor, FlattenExtractor, NatureCNN, create_mlp
|
||||
from stable_baselines3.common.type_aliases import Schedule
|
||||
|
||||
|
||||
class QNetwork(BasePolicy):
|
||||
"""
|
||||
Action-Value (Q-Value) network for DQN
|
||||
|
||||
:param observation_space: Observation space
|
||||
:param action_space: Action space
|
||||
:param net_arch: The specification of the policy and value networks.
|
||||
:param activation_fn: Activation function
|
||||
:param normalize_images: Whether to normalize images or not,
|
||||
dividing by 255.0 (True by default)
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
observation_space: gym.spaces.Space,
|
||||
action_space: gym.spaces.Space,
|
||||
features_extractor: nn.Module,
|
||||
features_dim: int,
|
||||
net_arch: Optional[List[int]] = None,
|
||||
activation_fn: Type[nn.Module] = nn.ReLU,
|
||||
normalize_images: bool = True,
|
||||
):
|
||||
super(QNetwork, self).__init__(
|
||||
observation_space,
|
||||
action_space,
|
||||
features_extractor=features_extractor,
|
||||
normalize_images=normalize_images,
|
||||
)
|
||||
|
||||
if net_arch is None:
|
||||
net_arch = [64, 64]
|
||||
|
||||
self.net_arch = net_arch
|
||||
self.activation_fn = activation_fn
|
||||
self.features_extractor = features_extractor
|
||||
self.features_dim = features_dim
|
||||
self.normalize_images = normalize_images
|
||||
action_dim = self.action_space.n # number of actions
|
||||
q_net = create_mlp(self.features_dim, action_dim, self.net_arch, self.activation_fn)
|
||||
self.q_net = nn.Sequential(*q_net)
|
||||
|
||||
def forward(self, obs: th.Tensor) -> th.Tensor:
|
||||
"""
|
||||
Predict the q-values.
|
||||
|
||||
:param obs: Observation
|
||||
:return: The estimated Q-Value for each action.
|
||||
"""
|
||||
return self.q_net(self.extract_features(obs))
|
||||
|
||||
def _predict(self, observation: th.Tensor, deterministic: bool = True) -> th.Tensor:
|
||||
q_values = self.forward(observation)
|
||||
# Greedy action
|
||||
action = q_values.argmax(dim=1).reshape(-1)
|
||||
return action
|
||||
|
||||
def _get_constructor_parameters(self) -> Dict[str, Any]:
|
||||
data = super()._get_constructor_parameters()
|
||||
|
||||
data.update(
|
||||
dict(
|
||||
net_arch=self.net_arch,
|
||||
features_dim=self.features_dim,
|
||||
activation_fn=self.activation_fn,
|
||||
features_extractor=self.features_extractor,
|
||||
)
|
||||
)
|
||||
return data
|
||||
|
||||
|
||||
class DQNPolicy(BasePolicy):
|
||||
"""
|
||||
Policy class with Q-Value Net and target net for DQN
|
||||
|
||||
:param observation_space: Observation space
|
||||
:param action_space: Action space
|
||||
:param lr_schedule: Learning rate schedule (could be constant)
|
||||
:param net_arch: The specification of the policy and value networks.
|
||||
:param activation_fn: Activation function
|
||||
:param features_extractor_class: Features extractor to use.
|
||||
:param features_extractor_kwargs: Keyword arguments
|
||||
to pass to the features extractor.
|
||||
:param normalize_images: Whether to normalize images or not,
|
||||
dividing by 255.0 (True by default)
|
||||
:param optimizer_class: The optimizer to use,
|
||||
``th.optim.Adam`` by default
|
||||
:param optimizer_kwargs: Additional keyword arguments,
|
||||
excluding the learning rate, to pass to the optimizer
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
observation_space: gym.spaces.Space,
|
||||
action_space: gym.spaces.Space,
|
||||
lr_schedule: Schedule,
|
||||
net_arch: Optional[List[int]] = None,
|
||||
activation_fn: Type[nn.Module] = nn.ReLU,
|
||||
features_extractor_class: Type[BaseFeaturesExtractor] = FlattenExtractor,
|
||||
features_extractor_kwargs: Optional[Dict[str, Any]] = None,
|
||||
normalize_images: bool = True,
|
||||
optimizer_class: Type[th.optim.Optimizer] = th.optim.Adam,
|
||||
optimizer_kwargs: Optional[Dict[str, Any]] = None,
|
||||
):
|
||||
super(DQNPolicy, self).__init__(
|
||||
observation_space,
|
||||
action_space,
|
||||
features_extractor_class,
|
||||
features_extractor_kwargs,
|
||||
optimizer_class=optimizer_class,
|
||||
optimizer_kwargs=optimizer_kwargs,
|
||||
)
|
||||
|
||||
if net_arch is None:
|
||||
if features_extractor_class == FlattenExtractor:
|
||||
net_arch = [64, 64]
|
||||
else:
|
||||
net_arch = []
|
||||
|
||||
self.net_arch = net_arch
|
||||
self.activation_fn = activation_fn
|
||||
self.normalize_images = normalize_images
|
||||
|
||||
self.net_args = {
|
||||
"observation_space": self.observation_space,
|
||||
"action_space": self.action_space,
|
||||
"net_arch": self.net_arch,
|
||||
"activation_fn": self.activation_fn,
|
||||
"normalize_images": normalize_images,
|
||||
}
|
||||
|
||||
self.q_net, self.q_net_target = None, None
|
||||
self._build(lr_schedule)
|
||||
|
||||
def _build(self, lr_schedule: Schedule) -> None:
|
||||
"""
|
||||
Create the network and the optimizer.
|
||||
|
||||
:param lr_schedule: Learning rate schedule
|
||||
lr_schedule(1) is the initial learning rate
|
||||
"""
|
||||
|
||||
self.q_net = self.make_q_net()
|
||||
self.q_net_target = self.make_q_net()
|
||||
self.q_net_target.load_state_dict(self.q_net.state_dict())
|
||||
|
||||
# Setup optimizer with initial learning rate
|
||||
self.optimizer = self.optimizer_class(self.parameters(), lr=lr_schedule(1), **self.optimizer_kwargs)
|
||||
|
||||
def make_q_net(self) -> QNetwork:
|
||||
# Make sure we always have separate networks for feature extractors, etc.
|
||||
net_args = self._update_features_extractor(self.net_args, features_extractor=None)
|
||||
return QNetwork(**net_args).to(self.device)
|
||||
|
||||
def forward(self, obs: th.Tensor, deterministic: bool = True) -> th.Tensor:
|
||||
return self._predict(obs, deterministic=deterministic)
|
||||
|
||||
def _predict(self, obs: th.Tensor, deterministic: bool = True) -> th.Tensor:
|
||||
return self.q_net._predict(obs, deterministic=deterministic)
|
||||
|
||||
def _get_constructor_parameters(self) -> Dict[str, Any]:
|
||||
data = super()._get_constructor_parameters()
|
||||
|
||||
data.update(
|
||||
dict(
|
||||
net_arch=self.net_args["net_arch"],
|
||||
activation_fn=self.net_args["activation_fn"],
|
||||
lr_schedule=self._dummy_schedule, # dummy lr schedule, not needed for loading policy alone
|
||||
optimizer_class=self.optimizer_class,
|
||||
optimizer_kwargs=self.optimizer_kwargs,
|
||||
features_extractor_class=self.features_extractor_class,
|
||||
features_extractor_kwargs=self.features_extractor_kwargs,
|
||||
)
|
||||
)
|
||||
return data
|
||||
|
||||
|
||||
MlpPolicy = DQNPolicy
|
||||
|
||||
|
||||
class CnnPolicy(DQNPolicy):
|
||||
"""
|
||||
Policy class for DQN when using images as input.
|
||||
|
||||
:param observation_space: Observation space
|
||||
:param action_space: Action space
|
||||
:param lr_schedule: Learning rate schedule (could be constant)
|
||||
:param net_arch: The specification of the policy and value networks.
|
||||
:param activation_fn: Activation function
|
||||
:param features_extractor_class: Features extractor to use.
|
||||
:param normalize_images: Whether to normalize images or not,
|
||||
dividing by 255.0 (True by default)
|
||||
:param optimizer_class: The optimizer to use,
|
||||
``th.optim.Adam`` by default
|
||||
:param optimizer_kwargs: Additional keyword arguments,
|
||||
excluding the learning rate, to pass to the optimizer
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
observation_space: gym.spaces.Space,
|
||||
action_space: gym.spaces.Space,
|
||||
lr_schedule: Schedule,
|
||||
net_arch: Optional[List[int]] = None,
|
||||
activation_fn: Type[nn.Module] = nn.ReLU,
|
||||
features_extractor_class: Type[BaseFeaturesExtractor] = NatureCNN,
|
||||
features_extractor_kwargs: Optional[Dict[str, Any]] = None,
|
||||
normalize_images: bool = True,
|
||||
optimizer_class: Type[th.optim.Optimizer] = th.optim.Adam,
|
||||
optimizer_kwargs: Optional[Dict[str, Any]] = None,
|
||||
):
|
||||
super(CnnPolicy, self).__init__(
|
||||
observation_space,
|
||||
action_space,
|
||||
lr_schedule,
|
||||
net_arch,
|
||||
activation_fn,
|
||||
features_extractor_class,
|
||||
features_extractor_kwargs,
|
||||
normalize_images,
|
||||
optimizer_class,
|
||||
optimizer_kwargs,
|
||||
)
|
||||
|
||||
|
||||
register_policy("MlpPolicy", MlpPolicy)
|
||||
register_policy("CnnPolicy", CnnPolicy)
|
||||
@@ -1,2 +0,0 @@
|
||||
github stablebaseline3
|
||||
https://github.com/DLR-RM/stable-baselines3
|
||||
@@ -1,27 +0,0 @@
|
||||
"In practice, we found that a high-entropy initial state is more likely to increase the speed of training.
|
||||
The entropy is calculated by:
|
||||
$$H=-\sum_{k= 1}^{n_k} p(k) \cdot \log p(k), p(k)=\frac{|A_k|}{|\mathcal{A}|}$$
|
||||
where $H$ is the entropy, $|A_k|$ is the number of agent nodes in $k$-th cluster, $|\mathcal{A}|$ is the total number of agents.
|
||||
To ensure the Cooperation Graph initialization has higher entropy,
|
||||
we will randomly generate multiple initial states,
|
||||
rank by their entropy and then pick the one with maximum $H$."
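For concreteness, here is a minimal sketch of this max-entropy initialization: it generates several random agent-to-cluster assignments, scores each with the entropy $H$ defined above, and keeps the highest-entropy candidate. The names `cluster_entropy`, `sample_high_entropy_init`, `n_agents`, `n_clusters`, and `n_candidates` are illustrative assumptions, not identifiers from the quoted work.

```
import numpy as np

def cluster_entropy(assignment, n_clusters):
    # p(k) = |A_k| / |A|;  H = -sum_k p(k) * log p(k)
    counts = np.bincount(assignment, minlength=n_clusters)
    p = counts / counts.sum()
    p = p[p > 0]  # drop empty clusters so log(0) is never evaluated
    return -np.sum(p * np.log(p))

def sample_high_entropy_init(n_agents, n_clusters, n_candidates=32, rng=None):
    # Randomly generate multiple initial agent-to-cluster assignments,
    # rank them by entropy, and keep the one with maximum H.
    rng = np.random.default_rng() if rng is None else rng
    candidates = [rng.integers(0, n_clusters, size=n_agents) for _ in range(n_candidates)]
    return max(candidates, key=lambda a: cluster_entropy(a, n_clusters))
```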
|
||||
|
||||
```
|
||||
FROM ubuntu:latest
|
||||
|
||||
RUN apt-get update && \
|
||||
apt-get install -y python3 python3-pip && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
|
||||
RUN echo '[global]' > /etc/pip.conf && \
|
||||
echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \
|
||||
echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf
|
||||
|
||||
RUN pip3 install gradio requests[socks] mdtex2html
|
||||
|
||||
COPY . /gpt
|
||||
WORKDIR /gpt
|
||||
|
||||
|
||||
CMD ["python3", "main.py"]
|
||||
```
|
||||
114 crazy_functions/vt_fns/vt_call_plugin.py Normal file
@@ -0,0 +1,114 @@
|
||||
from pydantic import BaseModel, Field
|
||||
from typing import List
|
||||
from toolbox import update_ui_lastest_msg, disable_auto_promotion
|
||||
from request_llm.bridge_all import predict_no_ui_long_connection
|
||||
from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
|
||||
import copy, json, pickle, os, sys, time
|
||||
|
||||
|
||||
def read_avail_plugin_enum():
|
||||
from crazy_functional import get_crazy_functions
|
||||
plugin_arr = get_crazy_functions()
|
||||
# remove plugins without an explanation
|
||||
plugin_arr = {k:v for k, v in plugin_arr.items() if 'Info' in v}
|
||||
plugin_arr_info = {"F_{:04d}".format(i):v["Info"] for i, v in enumerate(plugin_arr.values(), start=1)}
|
||||
plugin_arr_dict = {"F_{:04d}".format(i):v for i, v in enumerate(plugin_arr.values(), start=1)}
|
||||
plugin_arr_dict_parse = {"F_{:04d}".format(i):v for i, v in enumerate(plugin_arr.values(), start=1)}
|
||||
plugin_arr_dict_parse.update({f"F_{i}":v for i, v in enumerate(plugin_arr.values(), start=1)})
|
||||
prompt = json.dumps(plugin_arr_info, ensure_ascii=False, indent=2)
|
||||
prompt = "\n\nThe defination of PluginEnum:\nPluginEnum=" + prompt
|
||||
return prompt, plugin_arr_dict, plugin_arr_dict_parse
|
||||
|
||||
def wrap_code(txt):
|
||||
txt = txt.replace('```','')
|
||||
return f"\n```\n{txt}\n```\n"
|
||||
|
||||
def have_any_recent_upload_files(chatbot):
|
||||
_5min = 5 * 60
|
||||
if not chatbot: return False # chatbot is None
|
||||
most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
|
||||
if not most_recent_uploaded: return False # most_recent_uploaded is None
|
||||
if time.time() - most_recent_uploaded["time"] < _5min: return True # most_recent_uploaded is new
|
||||
else: return False # most_recent_uploaded is too old
|
||||
|
||||
def get_recent_file_prompt_support(chatbot):
|
||||
most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
|
||||
path = most_recent_uploaded['path']
|
||||
prompt = "\nAdditional Information:\n"
|
||||
prompt = "In case that this plugin requires a path or a file as argument,"
|
||||
prompt += f"it is important for you to know that the user has recently uploaded a file, located at: `{path}`"
|
||||
prompt += f"Only use it when necessary, otherwise, you can ignore this file."
|
||||
return prompt
|
||||
|
||||
def get_inputs_show_user(inputs, plugin_arr_enum_prompt):
|
||||
# remove plugin_arr_enum_prompt from inputs string
|
||||
inputs_show_user = inputs.replace(plugin_arr_enum_prompt, "")
|
||||
inputs_show_user += plugin_arr_enum_prompt[:200] + '...'
|
||||
inputs_show_user += '\n...\n'
|
||||
inputs_show_user += '...\n'
|
||||
inputs_show_user += '...}'
|
||||
return inputs_show_user
|
||||
|
||||
def execute_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
|
||||
plugin_arr_enum_prompt, plugin_arr_dict, plugin_arr_dict_parse = read_avail_plugin_enum()
|
||||
class Plugin(BaseModel):
|
||||
plugin_selection: str = Field(description="The most related plugin from one of the PluginEnum.", default="F_0000")
|
||||
reason_of_selection: str = Field(description="The reason why you should select this plugin.", default="This plugin satisfies the user requirement best")
|
||||
# ⭐ ⭐ ⭐ 选择插件
|
||||
yield from update_ui_lastest_msg(lastmsg=f"正在执行任务: {txt}\n\n查找可用插件中...", chatbot=chatbot, history=history, delay=0)
|
||||
gpt_json_io = GptJsonIO(Plugin)
|
||||
gpt_json_io.format_instructions = "The format of your output should be a json that can be parsed by json.loads.\n"
|
||||
gpt_json_io.format_instructions += """Output example: {"plugin_selection":"F_1234", "reason_of_selection":"F_1234 plugin satisfy user requirement most"}\n"""
|
||||
gpt_json_io.format_instructions += "The plugins you are authorized to use are listed below:\n"
|
||||
gpt_json_io.format_instructions += plugin_arr_enum_prompt
|
||||
inputs = "Choose the correct plugin according to user requirements, the user requirement is: \n\n" + \
|
||||
">> " + txt.rstrip('\n').replace('\n','\n>> ') + '\n\n' + gpt_json_io.format_instructions
|
||||
|
||||
run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
|
||||
inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
|
||||
try:
|
||||
gpt_reply = run_gpt_fn(inputs, "")
|
||||
plugin_sel = gpt_json_io.generate_output_auto_repair(gpt_reply, run_gpt_fn)
|
||||
except JsonStringError:
|
||||
msg = f"抱歉, {llm_kwargs['llm_model']}无法理解您的需求。"
|
||||
msg += "请求的Prompt为:\n" + wrap_code(get_inputs_show_user(inputs, plugin_arr_enum_prompt))
|
||||
msg += "语言模型回复为:\n" + wrap_code(gpt_reply)
|
||||
msg += "\n但您可以尝试再试一次\n"
|
||||
yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
|
||||
return
|
||||
if plugin_sel.plugin_selection not in plugin_arr_dict_parse:
|
||||
msg = f"抱歉, 找不到合适插件执行该任务, 或者{llm_kwargs['llm_model']}无法理解您的需求。"
|
||||
msg += f"语言模型{llm_kwargs['llm_model']}选择了不存在的插件:\n" + wrap_code(gpt_reply)
|
||||
msg += "\n但您可以尝试再试一次\n"
|
||||
yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
|
||||
return
|
||||
|
||||
# ⭐ ⭐ ⭐ 确认插件参数
|
||||
if not have_any_recent_upload_files(chatbot):
|
||||
appendix_info = ""
|
||||
else:
|
||||
appendix_info = get_recent_file_prompt_support(chatbot)
|
||||
|
||||
plugin = plugin_arr_dict_parse[plugin_sel.plugin_selection]
|
||||
yield from update_ui_lastest_msg(lastmsg=f"正在执行任务: {txt}\n\n提取插件参数...", chatbot=chatbot, history=history, delay=0)
|
||||
class PluginExplicit(BaseModel):
|
||||
plugin_selection: str = plugin_sel.plugin_selection
|
||||
plugin_arg: str = Field(description="The argument of the plugin.", default="")
|
||||
gpt_json_io = GptJsonIO(PluginExplicit)
|
||||
gpt_json_io.format_instructions += "The information about this plugin is:" + plugin["Info"]
|
||||
inputs = f"A plugin named {plugin_sel.plugin_selection} is selected, " + \
|
||||
"you should extract plugin_arg from the user requirement, the user requirement is: \n\n" + \
|
||||
">> " + (txt + appendix_info).rstrip('\n').replace('\n','\n>> ') + '\n\n' + \
|
||||
gpt_json_io.format_instructions
|
||||
run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
|
||||
inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
|
||||
plugin_sel = gpt_json_io.generate_output_auto_repair(run_gpt_fn(inputs, ""), run_gpt_fn)
|
||||
|
||||
|
||||
# ⭐ ⭐ ⭐ 执行插件
|
||||
fn = plugin['Function']
|
||||
fn_name = fn.__name__
|
||||
msg = f'{llm_kwargs["llm_model"]}为您选择了插件: `{fn_name}`\n\n插件说明:{plugin["Info"]}\n\n插件参数:{plugin_sel.plugin_arg}\n\n假如偏离了您的要求,按停止键终止。'
|
||||
yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
|
||||
yield from fn(plugin_sel.plugin_arg, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, -1)
|
||||
return
|
||||
81 crazy_functions/vt_fns/vt_modify_config.py Normal file
@@ -0,0 +1,81 @@
|
||||
from pydantic import BaseModel, Field
|
||||
from typing import List
|
||||
from toolbox import update_ui_lastest_msg, get_conf
|
||||
from request_llm.bridge_all import predict_no_ui_long_connection
|
||||
from crazy_functions.json_fns.pydantic_io import GptJsonIO
|
||||
import copy, json, pickle, os, sys
|
||||
|
||||
|
||||
def modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
|
||||
ALLOW_RESET_CONFIG, = get_conf('ALLOW_RESET_CONFIG')
|
||||
if not ALLOW_RESET_CONFIG:
|
||||
yield from update_ui_lastest_msg(
|
||||
lastmsg=f"当前配置不允许被修改!如需激活本功能,请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件。",
|
||||
chatbot=chatbot, history=history, delay=2
|
||||
)
|
||||
return
|
||||
|
||||
# ⭐ ⭐ ⭐ 读取可配置项目条目
|
||||
names = {}
|
||||
from enum import Enum
|
||||
import config
|
||||
for k, v in config.__dict__.items():
|
||||
if k.startswith('__'): continue
|
||||
names.update({k:k})
|
||||
# if len(names) > 20: break # 限制最多前10个配置项,如果太多了会导致gpt无法理解
|
||||
|
||||
ConfigOptions = Enum('ConfigOptions', names)
|
||||
class ModifyConfigurationIntention(BaseModel):
|
||||
which_config_to_modify: ConfigOptions = Field(description="the name of the configuration to modify, you must choose from one of the ConfigOptions enum.", default=None)
|
||||
new_option_value: str = Field(description="the new value of the option", default=None)
|
||||
|
||||
# ⭐ ⭐ ⭐ 分析用户意图
|
||||
yield from update_ui_lastest_msg(lastmsg=f"正在执行任务: {txt}\n\n读取新配置中", chatbot=chatbot, history=history, delay=0)
|
||||
gpt_json_io = GptJsonIO(ModifyConfigurationIntention)
|
||||
inputs = "Analyze how to change configuration according to following user input, answer me with json: \n\n" + \
|
||||
">> " + txt.rstrip('\n').replace('\n','\n>> ') + '\n\n' + \
|
||||
gpt_json_io.format_instructions
|
||||
|
||||
run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
|
||||
inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
|
||||
user_intention = gpt_json_io.generate_output_auto_repair(run_gpt_fn(inputs, ""), run_gpt_fn)
|
||||
|
||||
explicit_conf = user_intention.which_config_to_modify.value
|
||||
|
||||
ok = (explicit_conf in txt)
|
||||
if ok:
|
||||
yield from update_ui_lastest_msg(
|
||||
lastmsg=f"正在执行任务: {txt}\n\n新配置{explicit_conf}={user_intention.new_option_value}",
|
||||
chatbot=chatbot, history=history, delay=1
|
||||
)
|
||||
yield from update_ui_lastest_msg(
|
||||
lastmsg=f"正在执行任务: {txt}\n\n新配置{explicit_conf}={user_intention.new_option_value}\n\n正在修改配置中",
|
||||
chatbot=chatbot, history=history, delay=2
|
||||
)
|
||||
|
||||
# ⭐ ⭐ ⭐ 立即应用配置
|
||||
from toolbox import set_conf
|
||||
set_conf(explicit_conf, user_intention.new_option_value)
|
||||
|
||||
yield from update_ui_lastest_msg(
|
||||
lastmsg=f"正在执行任务: {txt}\n\n配置修改完成,重新页面即可生效。", chatbot=chatbot, history=history, delay=1
|
||||
)
|
||||
else:
|
||||
yield from update_ui_lastest_msg(
|
||||
lastmsg=f"失败,如果需要配置{explicit_conf},您需要明确说明并在指令中提到它。", chatbot=chatbot, history=history, delay=5
|
||||
)
|
||||
|
||||
def modify_configuration_reboot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
|
||||
ALLOW_RESET_CONFIG, = get_conf('ALLOW_RESET_CONFIG')
|
||||
if not ALLOW_RESET_CONFIG:
|
||||
yield from update_ui_lastest_msg(
|
||||
lastmsg=f"当前配置不允许被修改!如需激活本功能,请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件。",
|
||||
chatbot=chatbot, history=history, delay=2
|
||||
)
|
||||
return
|
||||
|
||||
yield from modify_configuration_hot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention)
|
||||
yield from update_ui_lastest_msg(
|
||||
lastmsg=f"正在执行任务: {txt}\n\n配置修改完成,五秒后即将重启!若出现报错请无视即可。", chatbot=chatbot, history=history, delay=5
|
||||
)
|
||||
os.execl(sys.executable, sys.executable, *sys.argv)
|
||||
28 crazy_functions/vt_fns/vt_state.py Normal file
@@ -0,0 +1,28 @@
|
||||
import pickle
|
||||
|
||||
class VoidTerminalState():
|
||||
def __init__(self):
|
||||
self.reset_state()
|
||||
|
||||
def reset_state(self):
|
||||
self.has_provided_explaination = False
|
||||
|
||||
def lock_plugin(self, chatbot):
|
||||
chatbot._cookies['lock_plugin'] = 'crazy_functions.虚空终端->虚空终端'
|
||||
chatbot._cookies['plugin_state'] = pickle.dumps(self)
|
||||
|
||||
def unlock_plugin(self, chatbot):
|
||||
self.reset_state()
|
||||
chatbot._cookies['lock_plugin'] = None
|
||||
chatbot._cookies['plugin_state'] = pickle.dumps(self)
|
||||
|
||||
def set_state(self, chatbot, key, value):
|
||||
setattr(self, key, value)
|
||||
chatbot._cookies['plugin_state'] = pickle.dumps(self)
|
||||
|
||||
def get_state(chatbot):
|
||||
state = chatbot._cookies.get('plugin_state', None)
|
||||
if state is not None: state = pickle.loads(state)
|
||||
else: state = VoidTerminalState()
|
||||
state.chatbot = chatbot
|
||||
return state
|
||||
@@ -144,11 +144,11 @@ def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, hi
|
||||
|
||||
# 尝试导入依赖,如果缺少依赖,则给出安装建议
|
||||
try:
|
||||
import pdfminer, bs4
|
||||
import bs4
|
||||
except:
|
||||
report_execption(chatbot, history,
|
||||
a = f"解析项目: {txt}",
|
||||
b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。")
|
||||
b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade beautifulsoup4```。")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
|
||||
63 crazy_functions/交互功能函数模板.py Normal file
@@ -0,0 +1,63 @@
|
||||
from toolbox import CatchException, update_ui
|
||||
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
|
||||
|
||||
|
||||
@CatchException
|
||||
def 交互功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
"""
|
||||
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
|
||||
llm_kwargs gpt模型参数, 如温度和top_p等, 一般原样传递下去就行
|
||||
plugin_kwargs 插件模型的参数, 如温度和top_p等, 一般原样传递下去就行
|
||||
chatbot 聊天显示框的句柄,用于显示给用户
|
||||
history 聊天历史,前情提要
|
||||
system_prompt 给gpt的静默提醒
|
||||
web_port 当前软件运行的端口号
|
||||
"""
|
||||
history = [] # 清空历史,以免输入溢出
|
||||
chatbot.append(("这是什么功能?", "交互功能函数模板。在执行完成之后, 可以将自身的状态存储到cookie中, 等待用户的再次调用。"))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
state = chatbot._cookies.get('plugin_state_0001', None) # 初始化插件状态
|
||||
|
||||
if state is None:
|
||||
chatbot._cookies['lock_plugin'] = 'crazy_functions.交互功能函数模板->交互功能模板函数' # 赋予插件锁定 锁定插件回调路径,当下一次用户提交时,会直接转到该函数
|
||||
chatbot._cookies['plugin_state_0001'] = 'wait_user_keyword' # 赋予插件状态
|
||||
|
||||
chatbot.append(("第一次调用:", "请输入关键词, 我将为您查找相关壁纸, 建议使用英文单词, 插件锁定中,请直接提交即可。"))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
if state == 'wait_user_keyword':
|
||||
chatbot._cookies['lock_plugin'] = None # 解除插件锁定,避免遗忘导致死锁
|
||||
chatbot._cookies['plugin_state_0001'] = None # 解除插件状态,避免遗忘导致死锁
|
||||
|
||||
# 解除插件锁定
|
||||
chatbot.append((f"获取关键词:{txt}", ""))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
page_return = get_image_page_by_keyword(txt)
|
||||
inputs=inputs_show_user=f"Extract all image urls in this html page, pick the first 5 images and show them with markdown format: \n\n {page_return}"
|
||||
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
||||
inputs=inputs, inputs_show_user=inputs_show_user,
|
||||
llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
|
||||
sys_prompt="When you want to show an image, use markdown format. e.g. . If there are no image url provided, answer 'no image url provided'"
|
||||
)
|
||||
chatbot[-1] = [chatbot[-1][0], gpt_say]
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
|
||||
|
||||
# ---------------------------------------------------------------------------------
|
||||
|
||||
def get_image_page_by_keyword(keyword):
|
||||
import requests
|
||||
from bs4 import BeautifulSoup
|
||||
response = requests.get(f'https://wallhaven.cc/search?q={keyword}', timeout=2)
|
||||
res = "image urls: \n"
|
||||
for image_element in BeautifulSoup(response.content, 'html.parser').findAll("img"):
|
||||
try:
|
||||
res += image_element["data-src"]
|
||||
res += "\n"
|
||||
except:
|
||||
pass
|
||||
return res
|
||||
31 crazy_functions/命令行助手.py Normal file
@@ -0,0 +1,31 @@
|
||||
from toolbox import CatchException, update_ui, gen_time_str
|
||||
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
|
||||
from .crazy_utils import input_clipping
|
||||
import copy, json
|
||||
|
||||
@CatchException
|
||||
def 命令行助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
"""
|
||||
txt 输入栏用户输入的文本, 例如需要翻译的一段话, 再例如一个包含了待处理文件的路径
|
||||
llm_kwargs gpt模型参数, 如温度和top_p等, 一般原样传递下去就行
|
||||
plugin_kwargs 插件模型的参数, 暂时没有用武之地
|
||||
chatbot 聊天显示框的句柄, 用于显示给用户
|
||||
history 聊天历史, 前情提要
|
||||
system_prompt 给gpt的静默提醒
|
||||
web_port 当前软件运行的端口号
|
||||
"""
|
||||
# 清空历史, 以免输入溢出
|
||||
history = []
|
||||
|
||||
# 输入
|
||||
i_say = "请写bash命令实现以下功能:" + txt
|
||||
# 开始
|
||||
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
||||
inputs=i_say, inputs_show_user=txt,
|
||||
llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
|
||||
sys_prompt="你是一个Linux大师级用户。注意,当我要求你写bash命令时,尽可能地仅用一行命令解决我的要求。"
|
||||
)
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
|
||||
|
||||
|
||||
|
||||
69 crazy_functions/图片生成.py Normal file
@@ -0,0 +1,69 @@
|
||||
from toolbox import CatchException, update_ui, get_conf, select_api_key
|
||||
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
|
||||
import datetime
|
||||
|
||||
|
||||
def gen_image(llm_kwargs, prompt, resolution="256x256"):
|
||||
import requests, json, time, os
|
||||
from request_llm.bridge_all import model_info
|
||||
|
||||
proxies, = get_conf('proxies')
|
||||
# Set up OpenAI API key and model
|
||||
api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model'])
|
||||
chat_endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
|
||||
# 'https://api.openai.com/v1/chat/completions'
|
||||
img_endpoint = chat_endpoint.replace('chat/completions','images/generations')
|
||||
# # Generate the image
|
||||
url = img_endpoint
|
||||
headers = {
|
||||
'Authorization': f"Bearer {api_key}",
|
||||
'Content-Type': 'application/json'
|
||||
}
|
||||
data = {
|
||||
'prompt': prompt,
|
||||
'n': 1,
|
||||
'size': resolution,
|
||||
'response_format': 'url'
|
||||
}
|
||||
response = requests.post(url, headers=headers, json=data, proxies=proxies)
|
||||
print(response.content)
|
||||
try:
|
||||
image_url = json.loads(response.content.decode('utf8'))['data'][0]['url']
|
||||
except:
|
||||
raise RuntimeError(response.content.decode())
|
||||
# 文件保存到本地
|
||||
r = requests.get(image_url, proxies=proxies)
|
||||
file_path = 'gpt_log/image_gen/'
|
||||
os.makedirs(file_path, exist_ok=True)
|
||||
file_name = 'Image' + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.png'
|
||||
with open(file_path+file_name, 'wb+') as f: f.write(r.content)
|
||||
|
||||
|
||||
return image_url, file_path+file_name
|
||||
|
||||
|
||||
|
||||
@CatchException
|
||||
def 图片生成(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
"""
|
||||
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
|
||||
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
|
||||
plugin_kwargs 插件模型的参数,暂时没有用武之地
|
||||
chatbot 聊天显示框的句柄,用于显示给用户
|
||||
history 聊天历史,前情提要
|
||||
system_prompt 给gpt的静默提醒
|
||||
web_port 当前软件运行的端口号
|
||||
"""
|
||||
history = [] # 清空历史,以免输入溢出
|
||||
chatbot.append(("这是什么功能?", "[Local Message] 生成图像, 请先把模型切换至gpt-*或者api2d-*。如果中文效果不理想, 请尝试英文Prompt。正在处理中 ....."))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
|
||||
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
|
||||
resolution = plugin_kwargs.get("advanced_arg", '256x256')
|
||||
image_url, image_path = gen_image(llm_kwargs, prompt, resolution)
|
||||
chatbot.append([prompt,
|
||||
f'图像中转网址: <br/>`{image_url}`<br/>'+
|
||||
f'中转网址预览: <br/><div align="center"><img src="{image_url}"></div>'
|
||||
f'本地文件地址: <br/>`{image_path}`<br/>'+
|
||||
f'本地文件预览: <br/><div align="center"><img src="file={image_path}"></div>'
|
||||
])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
|
||||
142 crazy_functions/对话历史存档.py Normal file
@@ -0,0 +1,142 @@
|
||||
from toolbox import CatchException, update_ui, promote_file_to_downloadzone
|
||||
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
|
||||
import re
|
||||
|
||||
def write_chat_to_file(chatbot, history=None, file_name=None):
|
||||
"""
|
||||
将对话记录history以Markdown格式写入文件中。如果没有指定文件名,则使用当前时间生成文件名。
|
||||
"""
|
||||
import os
|
||||
import time
|
||||
if file_name is None:
|
||||
file_name = 'chatGPT对话历史' + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.html'
|
||||
os.makedirs('./gpt_log/', exist_ok=True)
|
||||
with open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f:
|
||||
from themes.theme import advanced_css
|
||||
f.write(f'<!DOCTYPE html><head><meta charset="utf-8"><title>对话历史</title><style>{advanced_css}</style></head>')
|
||||
for i, contents in enumerate(chatbot):
|
||||
for j, content in enumerate(contents):
|
||||
try: # 这个bug没找到触发条件,暂时先这样顶一下
|
||||
if type(content) != str: content = str(content)
|
||||
except:
|
||||
continue
|
||||
f.write(content)
|
||||
if j == 0:
|
||||
f.write('<hr style="border-top: dotted 3px #ccc;">')
|
||||
f.write('<hr color="red"> \n\n')
|
||||
f.write('<hr color="blue"> \n\n raw chat context:\n')
|
||||
f.write('<code>')
|
||||
for h in history:
|
||||
f.write("\n>>>" + h)
|
||||
f.write('</code>')
|
||||
promote_file_to_downloadzone(f'./gpt_log/{file_name}', rename_file=file_name, chatbot=chatbot)
|
||||
return '对话历史写入:' + os.path.abspath(f'./gpt_log/{file_name}')
|
||||
|
||||
def gen_file_preview(file_name):
|
||||
try:
|
||||
with open(file_name, 'r', encoding='utf8') as f:
|
||||
file_content = f.read()
|
||||
# pattern to match the text between <head> and </head>
|
||||
pattern = re.compile(r'<head>.*?</head>', flags=re.DOTALL)
|
||||
file_content = re.sub(pattern, '', file_content)
|
||||
html, history = file_content.split('<hr color="blue"> \n\n raw chat context:\n')
|
||||
history = history.strip('<code>')
|
||||
history = history.strip('</code>')
|
||||
history = history.split("\n>>>")
|
||||
return list(filter(lambda x:x!="", history))[0][:100]
|
||||
except:
|
||||
return ""
|
||||
|
||||
def read_file_to_chat(chatbot, history, file_name):
|
||||
with open(file_name, 'r', encoding='utf8') as f:
|
||||
file_content = f.read()
|
||||
# pattern to match the text between <head> and </head>
|
||||
pattern = re.compile(r'<head>.*?</head>', flags=re.DOTALL)
|
||||
file_content = re.sub(pattern, '', file_content)
|
||||
html, history = file_content.split('<hr color="blue"> \n\n raw chat context:\n')
|
||||
history = history.strip('<code>')
|
||||
history = history.strip('</code>')
|
||||
history = history.split("\n>>>")
|
||||
history = list(filter(lambda x:x!="", history))
|
||||
html = html.split('<hr color="red"> \n\n')
|
||||
html = list(filter(lambda x:x!="", html))
|
||||
chatbot.clear()
|
||||
for i, h in enumerate(html):
|
||||
i_say, gpt_say = h.split('<hr style="border-top: dotted 3px #ccc;">')
|
||||
chatbot.append([i_say, gpt_say])
|
||||
chatbot.append([f"存档文件详情?", f"[Local Message] 载入对话{len(html)}条,上下文{len(history)}条。"])
|
||||
return chatbot, history
|
||||
|
||||
@CatchException
|
||||
def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
"""
|
||||
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
|
||||
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
|
||||
plugin_kwargs 插件模型的参数,暂时没有用武之地
|
||||
chatbot 聊天显示框的句柄,用于显示给用户
|
||||
history 聊天历史,前情提要
|
||||
system_prompt 给gpt的静默提醒
|
||||
web_port 当前软件运行的端口号
|
||||
"""
|
||||
|
||||
chatbot.append(("保存当前对话",
|
||||
f"[Local Message] {write_chat_to_file(chatbot, history)},您可以调用“载入对话历史存档”还原当下的对话。\n警告!被保存的对话历史可以被使用该系统的任何人查阅。"))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
|
||||
|
||||
def hide_cwd(str):
|
||||
import os
|
||||
current_path = os.getcwd()
|
||||
replace_path = "."
|
||||
return str.replace(current_path, replace_path)
|
||||
|
||||
@CatchException
|
||||
def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
"""
|
||||
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
|
||||
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
|
||||
plugin_kwargs 插件模型的参数,暂时没有用武之地
|
||||
chatbot 聊天显示框的句柄,用于显示给用户
|
||||
history 聊天历史,前情提要
|
||||
system_prompt 给gpt的静默提醒
|
||||
web_port 当前软件运行的端口号
|
||||
"""
|
||||
from .crazy_utils import get_files_from_everything
|
||||
success, file_manifest, _ = get_files_from_everything(txt, type='.html')
|
||||
|
||||
if not success:
|
||||
if txt == "": txt = '空空如也的输入栏'
|
||||
import glob
|
||||
local_history = "<br/>".join(["`"+hide_cwd(f)+f" ({gen_file_preview(f)})"+"`" for f in glob.glob(f'gpt_log/**/chatGPT对话历史*.html', recursive=True)])
|
||||
chatbot.append([f"正在查找对话历史文件(html格式): {txt}", f"找不到任何html文件: {txt}。但本地存储了以下历史文件,您可以将任意一个文件路径粘贴到输入区,然后重试:<br/>{local_history}"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
try:
|
||||
chatbot, history = read_file_to_chat(chatbot, history, file_manifest[0])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
except:
|
||||
chatbot.append([f"载入对话历史文件", f"对话历史文件损坏!"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
@CatchException
|
||||
def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
"""
|
||||
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
|
||||
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
|
||||
plugin_kwargs 插件模型的参数,暂时没有用武之地
|
||||
chatbot 聊天显示框的句柄,用于显示给用户
|
||||
history 聊天历史,前情提要
|
||||
system_prompt 给gpt的静默提醒
|
||||
web_port 当前软件运行的端口号
|
||||
"""
|
||||
|
||||
import glob, os
|
||||
local_history = "<br/>".join(["`"+hide_cwd(f)+"`" for f in glob.glob(f'gpt_log/**/chatGPT对话历史*.html', recursive=True)])
|
||||
for f in glob.glob(f'gpt_log/**/chatGPT对话历史*.html', recursive=True):
|
||||
os.remove(f)
|
||||
chatbot.append([f"删除所有历史对话文件", f"已删除<br/>{local_history}"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
|
||||
@@ -14,17 +14,19 @@ def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot
|
||||
doc = Document(fp)
|
||||
file_content = "\n".join([para.text for para in doc.paragraphs])
|
||||
else:
|
||||
import win32com.client
|
||||
word = win32com.client.Dispatch("Word.Application")
|
||||
word.visible = False
|
||||
# 打开文件
|
||||
print('fp', os.getcwd())
|
||||
doc = word.Documents.Open(os.getcwd() + '/' + fp)
|
||||
# file_content = doc.Content.Text
|
||||
doc = word.ActiveDocument
|
||||
file_content = doc.Range().Text
|
||||
doc.Close()
|
||||
word.Quit()
|
||||
try:
|
||||
import win32com.client
|
||||
word = win32com.client.Dispatch("Word.Application")
|
||||
word.visible = False
|
||||
# 打开文件
|
||||
doc = word.Documents.Open(os.getcwd() + '/' + fp)
|
||||
# file_content = doc.Content.Text
|
||||
doc = word.ActiveDocument
|
||||
file_content = doc.Range().Text
|
||||
doc.Close()
|
||||
word.Quit()
|
||||
except:
|
||||
raise RuntimeError('请先将.doc文档转换为.docx文档。')
|
||||
|
||||
print(file_content)
|
||||
# private_upload里面的文件名在解压zip后容易出现乱码(rar和7z格式正常),故可以只分析文章内容,不输入文件名
|
||||
@@ -85,7 +87,7 @@ def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_pr
|
||||
# 基本信息:功能、贡献者
|
||||
chatbot.append([
|
||||
"函数插件功能?",
|
||||
"批量总结Word文档。函数插件贡献者: JasonGuo1"])
|
||||
"批量总结Word文档。函数插件贡献者: JasonGuo1。注意, 如果是.doc文件, 请先转化为.docx格式。"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
# 尝试导入依赖,如果缺少依赖,则给出安装建议
|
||||
|
||||
184 crazy_functions/总结音视频.py Normal file
@@ -0,0 +1,184 @@
|
||||
from toolbox import CatchException, report_execption, select_api_key, update_ui, write_results_to_file, get_conf
|
||||
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
|
||||
|
||||
def split_audio_file(filename, split_duration=1000):
|
||||
"""
|
||||
根据给定的切割时长将音频文件切割成多个片段。
|
||||
|
||||
Args:
|
||||
filename (str): 需要被切割的音频文件名。
|
||||
split_duration (int, optional): 每个切割音频片段的时长(以秒为单位)。默认值为1000。
|
||||
|
||||
Returns:
|
||||
filelist (list): 一个包含所有切割音频片段文件路径的列表。
|
||||
|
||||
"""
|
||||
from moviepy.editor import AudioFileClip
|
||||
import os
|
||||
os.makedirs('gpt_log/mp3/cut/', exist_ok=True) # 创建存储切割音频的文件夹
|
||||
|
||||
# 读取音频文件
|
||||
audio = AudioFileClip(filename)
|
||||
|
||||
# 计算文件总时长和切割点
|
||||
total_duration = audio.duration
|
||||
split_points = list(range(0, int(total_duration), split_duration))
|
||||
split_points.append(int(total_duration))
|
||||
filelist = []
|
||||
|
||||
# 切割音频文件
|
||||
for i in range(len(split_points) - 1):
|
||||
start_time = split_points[i]
|
||||
end_time = split_points[i + 1]
|
||||
split_audio = audio.subclip(start_time, end_time)
|
||||
split_audio.write_audiofile(f"gpt_log/mp3/cut/{filename[0]}_{i}.mp3")
|
||||
filelist.append(f"gpt_log/mp3/cut/{filename[0]}_{i}.mp3")
|
||||
|
||||
audio.close()
|
||||
return filelist
|
||||
|
||||
def AnalyAudio(parse_prompt, file_manifest, llm_kwargs, chatbot, history):
|
||||
import os, requests
|
||||
from moviepy.editor import AudioFileClip
|
||||
from request_llm.bridge_all import model_info
|
||||
|
||||
# 设置OpenAI密钥和模型
|
||||
api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model'])
|
||||
chat_endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
|
||||
|
||||
whisper_endpoint = chat_endpoint.replace('chat/completions', 'audio/transcriptions')
|
||||
url = whisper_endpoint
|
||||
headers = {
|
||||
'Authorization': f"Bearer {api_key}"
|
||||
}
|
||||
|
||||
os.makedirs('gpt_log/mp3/', exist_ok=True)
|
||||
for index, fp in enumerate(file_manifest):
|
||||
audio_history = []
|
||||
# 提取文件扩展名
|
||||
ext = os.path.splitext(fp)[1]
|
||||
# 提取视频中的音频
|
||||
if ext not in [".mp3", ".wav", ".m4a", ".mpga"]:
|
||||
audio_clip = AudioFileClip(fp)
|
||||
audio_clip.write_audiofile(f'gpt_log/mp3/output{index}.mp3')
|
||||
fp = f'gpt_log/mp3/output{index}.mp3'
|
||||
# 调用whisper模型音频转文字
|
||||
voice = split_audio_file(fp)
|
||||
for j, i in enumerate(voice):
|
||||
with open(i, 'rb') as f:
|
||||
file_content = f.read() # 读取文件内容到内存
|
||||
files = {
|
||||
'file': (os.path.basename(i), file_content),
|
||||
}
|
||||
data = {
|
||||
"model": "whisper-1",
|
||||
"prompt": parse_prompt,
|
||||
'response_format': "text"
|
||||
}
|
||||
|
||||
chatbot.append([f"将 {i} 发送到openai音频解析终端 (whisper),当前参数:{parse_prompt}", "正在处理 ..."])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
proxies, = get_conf('proxies')
|
||||
response = requests.post(url, headers=headers, files=files, data=data, proxies=proxies).text
|
||||
|
||||
chatbot.append(["音频解析结果", response])
|
||||
history.extend(["音频解析结果", response])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
i_say = f'请对下面的音频片段做概述,音频内容是 ```{response}```'
|
||||
i_say_show_user = f'第{index + 1}段音频的第{j + 1} / {len(voice)}片段。'
|
||||
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
||||
inputs=i_say,
|
||||
inputs_show_user=i_say_show_user,
|
||||
llm_kwargs=llm_kwargs,
|
||||
chatbot=chatbot,
|
||||
history=[],
|
||||
sys_prompt=f"总结音频。音频文件名{fp}"
|
||||
)
|
||||
|
||||
chatbot[-1] = (i_say_show_user, gpt_say)
|
||||
history.extend([i_say_show_user, gpt_say])
|
||||
audio_history.extend([i_say_show_user, gpt_say])
|
||||
|
||||
# 已经对该文章的所有片段总结完毕,如果文章被切分了
|
||||
result = "".join(audio_history)
|
||||
if len(audio_history) > 1:
|
||||
i_say = f"根据以上的对话,使用中文总结音频“{result}”的主要内容。"
|
||||
i_say_show_user = f'第{index + 1}段音频的主要内容:'
|
||||
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
||||
inputs=i_say,
|
||||
inputs_show_user=i_say_show_user,
|
||||
llm_kwargs=llm_kwargs,
|
||||
chatbot=chatbot,
|
||||
history=audio_history,
|
||||
sys_prompt="总结文章。"
|
||||
)
|
||||
|
||||
history.extend([i_say, gpt_say])
|
||||
audio_history.extend([i_say, gpt_say])
|
||||
|
||||
res = write_results_to_file(history)
|
||||
chatbot.append((f"第{index + 1}段音频完成了吗?", res))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
# 删除中间文件夹
|
||||
import shutil
|
||||
shutil.rmtree('gpt_log/mp3')
|
||||
res = write_results_to_file(history)
|
||||
chatbot.append(("所有音频都总结完成了吗?", res))
|
||||
yield from update_ui(chatbot=chatbot, history=history)
|
||||
|
||||
|
||||
@CatchException
|
||||
def 总结音视频(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, WEB_PORT):
|
||||
import glob, os
|
||||
|
||||
# 基本信息:功能、贡献者
|
||||
chatbot.append([
|
||||
"函数插件功能?",
|
||||
"总结音视频内容,函数插件贡献者: dalvqw & BinaryHusky"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
try:
|
||||
from moviepy.editor import AudioFileClip
|
||||
except:
|
||||
report_execption(chatbot, history,
|
||||
a=f"解析项目: {txt}",
|
||||
b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade moviepy```。")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
# 清空历史,以免输入溢出
|
||||
history = []
|
||||
|
||||
# 检测输入参数,如没有给定输入参数,直接退出
|
||||
if os.path.exists(txt):
|
||||
project_folder = txt
|
||||
else:
|
||||
if txt == "": txt = '空空如也的输入栏'
|
||||
report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
# 搜索需要处理的文件清单
|
||||
extensions = ['.mp4', '.m4a', '.wav', '.mpga', '.mpeg', '.mp3', '.avi', '.mkv', '.flac', '.aac']
|
||||
|
||||
if txt.endswith(tuple(extensions)):
|
||||
file_manifest = [txt]
|
||||
else:
|
||||
file_manifest = []
|
||||
for extension in extensions:
|
||||
file_manifest.extend(glob.glob(f'{project_folder}/**/*{extension}', recursive=True))
|
||||
|
||||
# 如果没找到任何文件
|
||||
if len(file_manifest) == 0:
|
||||
report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何音频或视频文件: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
# 开始正式执行任务
|
||||
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
|
||||
parse_prompt = plugin_kwargs.get("advanced_arg", '将音频解析为简体中文')
|
||||
yield from AnalyAudio(parse_prompt, file_manifest, llm_kwargs, chatbot, history)
|
||||
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
@@ -1,5 +1,7 @@
|
||||
from toolbox import update_ui
|
||||
from toolbox import CatchException, report_execption, write_results_to_file
|
||||
import glob, time, os, re
|
||||
from toolbox import update_ui, trimmed_format_exc, gen_time_str, disable_auto_promotion
|
||||
from toolbox import CatchException, report_execption, write_history_to_file
|
||||
from toolbox import promote_file_to_downloadzone, get_log_folder
|
||||
fast_debug = False
|
||||
|
||||
class PaperFileGroup():
|
||||
@@ -32,11 +34,23 @@ class PaperFileGroup():
|
||||
self.sp_file_contents.append(segment)
|
||||
self.sp_file_index.append(index)
|
||||
self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.md")
|
||||
|
||||
print('Segmentation: done')
|
||||
|
||||
def merge_result(self):
|
||||
self.file_result = ["" for _ in range(len(self.file_paths))]
|
||||
for r, k in zip(self.sp_file_result, self.sp_file_index):
|
||||
self.file_result[k] += r
|
||||
|
||||
def write_result(self, language):
|
||||
manifest = []
|
||||
for path, res in zip(self.file_paths, self.file_result):
|
||||
dst_file = os.path.join(get_log_folder(), f'{gen_time_str()}.md')
|
||||
with open(dst_file, 'w', encoding='utf8') as f:
|
||||
manifest.append(dst_file)
|
||||
f.write(res)
|
||||
return manifest
|
||||
|
||||
def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'):
|
||||
import time, os, re
|
||||
from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
|
||||
|
||||
# <-------- 读取Markdown文件,删除其中的所有注释 ---------->
|
||||
@@ -53,7 +67,7 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
|
||||
pfg.run_file_split(max_token_limit=1500)
|
||||
n_split = len(pfg.sp_file_contents)
|
||||
|
||||
# <-------- 多线程润色开始 ---------->
|
||||
# <-------- 多线程翻译开始 ---------->
|
||||
if language == 'en->zh':
|
||||
inputs_array = ["This is a Markdown file, translate it into Chinese, do not modify any existing Markdown commands:" +
|
||||
f"\n\n{frag}" for frag in pfg.sp_file_contents]
|
||||
@@ -64,6 +78,11 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
|
||||
f"\n\n{frag}" for frag in pfg.sp_file_contents]
|
||||
inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
|
||||
sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)]
|
||||
else:
|
||||
inputs_array = [f"This is a Markdown file, translate it into {language}, do not modify any existing Markdown commands, only answer me with translated results:" +
|
||||
f"\n\n{frag}" for frag in pfg.sp_file_contents]
|
||||
inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
|
||||
sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)]
|
||||
|
||||
gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
|
||||
inputs_array=inputs_array,
|
||||
@@ -75,16 +94,62 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
|
||||
# max_workers=5, # OpenAI所允许的最大并行过载
|
||||
scroller_max_len = 80
|
||||
)
|
||||
try:
|
||||
pfg.sp_file_result = []
|
||||
for i_say, gpt_say in zip(gpt_response_collection[0::2], gpt_response_collection[1::2]):
|
||||
pfg.sp_file_result.append(gpt_say)
|
||||
pfg.merge_result()
|
||||
pfg.write_result(language)
|
||||
except:
|
||||
print(trimmed_format_exc())
|
||||
|
||||
# <-------- 整理结果,退出 ---------->
|
||||
create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md"
|
||||
res = write_results_to_file(gpt_response_collection, file_name=create_report_file_name)
|
||||
create_report_file_name = gen_time_str() + f"-chatgpt.md"
|
||||
res = write_history_to_file(gpt_response_collection, file_basename=create_report_file_name)
|
||||
promote_file_to_downloadzone(res, chatbot=chatbot)
|
||||
history = gpt_response_collection
|
||||
chatbot.append((f"{fp}完成了吗?", res))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
|
||||
def get_files_from_everything(txt, preference=''):
|
||||
if txt == "": return False, None, None
|
||||
success = True
|
||||
if txt.startswith('http'):
|
||||
import requests
|
||||
from toolbox import get_conf
|
||||
proxies, = get_conf('proxies')
|
||||
# 网络的远程文件
|
||||
if preference == 'Github':
|
||||
print('正在从github下载资源 ...')
|
||||
if not txt.endswith('.md'):
|
||||
# Make a request to the GitHub API to retrieve the repository information
|
||||
url = txt.replace("https://github.com/", "https://api.github.com/repos/") + '/readme'
|
||||
response = requests.get(url, proxies=proxies)
|
||||
txt = response.json()['download_url']
|
||||
else:
|
||||
txt = txt.replace("https://github.com/", "https://raw.githubusercontent.com/")
|
||||
txt = txt.replace("/blob/", "/")
|
||||
|
||||
r = requests.get(txt, proxies=proxies)
|
||||
download_local = f'{get_log_folder(plugin_name="批量Markdown翻译")}/raw-readme-{gen_time_str()}.md'
|
||||
project_folder = f'{get_log_folder(plugin_name="批量Markdown翻译")}'
|
||||
with open(download_local, 'wb+') as f: f.write(r.content)
|
||||
file_manifest = [download_local]
|
||||
elif txt.endswith('.md'):
|
||||
# 直接给定文件
|
||||
file_manifest = [txt]
|
||||
project_folder = os.path.dirname(txt)
|
||||
elif os.path.exists(txt):
|
||||
# 本地路径,递归搜索
|
||||
project_folder = txt
|
||||
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)]
|
||||
else:
|
||||
project_folder = None
|
||||
file_manifest = []
|
||||
success = False
|
||||
|
||||
return success, file_manifest, project_folder
|
||||
|
||||
|
||||
@CatchException
|
||||
@@ -94,6 +159,7 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
|
||||
"函数插件功能?",
|
||||
"对整个Markdown项目进行翻译。函数插件贡献者: Binary-Husky"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
disable_auto_promotion(chatbot)
|
||||
|
||||
# 尝试导入依赖,如果缺少依赖,则给出安装建议
|
||||
try:
|
||||
@@ -105,19 +171,21 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
history = [] # 清空历史,以免输入溢出
|
||||
import glob, os
|
||||
if os.path.exists(txt):
|
||||
project_folder = txt
|
||||
else:
|
||||
|
||||
success, file_manifest, project_folder = get_files_from_everything(txt, preference="Github")
|
||||
|
||||
if not success:
|
||||
# 什么都没有
|
||||
if txt == "": txt = '空空如也的输入栏'
|
||||
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)]
|
||||
|
||||
if len(file_manifest) == 0:
|
||||
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en->zh')
|
||||
|
||||
|
||||
@@ -131,6 +199,7 @@ def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
|
||||
"函数插件功能?",
|
||||
"对整个Markdown项目进行翻译。函数插件贡献者: Binary-Husky"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
disable_auto_promotion(chatbot)
|
||||
|
||||
# 尝试导入依赖,如果缺少依赖,则给出安装建议
|
||||
try:
|
||||
@@ -142,20 +211,51 @@ def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
history = [] # 清空历史,以免输入溢出
|
||||
import glob, os
|
||||
if os.path.exists(txt):
|
||||
project_folder = txt
|
||||
else:
|
||||
success, file_manifest, project_folder = get_files_from_everything(txt)
|
||||
if not success:
|
||||
# 什么都没有
|
||||
if txt == "": txt = '空空如也的输入栏'
|
||||
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
if txt.endswith('.md'):
|
||||
file_manifest = [txt]
|
||||
else:
|
||||
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)]
|
||||
if len(file_manifest) == 0:
|
||||
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='zh->en')
|
||||
|
||||
|
||||
@CatchException
|
||||
def Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
# 基本信息:功能、贡献者
|
||||
chatbot.append([
|
||||
"函数插件功能?",
|
||||
"对整个Markdown项目进行翻译。函数插件贡献者: Binary-Husky"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
disable_auto_promotion(chatbot)
|
||||
|
||||
# 尝试导入依赖,如果缺少依赖,则给出安装建议
|
||||
try:
|
||||
import tiktoken
|
||||
except:
|
||||
report_execption(chatbot, history,
|
||||
a=f"解析项目: {txt}",
|
||||
b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
history = [] # 清空历史,以免输入溢出
|
||||
success, file_manifest, project_folder = get_files_from_everything(txt)
|
||||
if not success:
|
||||
# 什么都没有
|
||||
if txt == "": txt = '空空如也的输入栏'
|
||||
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
if len(file_manifest) == 0:
|
||||
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
|
||||
language = plugin_kwargs.get("advanced_arg", 'Chinese')
|
||||
yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language=language)
|
||||
@@ -1,121 +1,107 @@
|
||||
from toolbox import update_ui
|
||||
from toolbox import update_ui, promote_file_to_downloadzone, gen_time_str
|
||||
from toolbox import CatchException, report_execption, write_results_to_file
|
||||
import re
|
||||
import unicodedata
|
||||
fast_debug = False
|
||||
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
|
||||
from .crazy_utils import read_and_clean_pdf_text
|
||||
from .crazy_utils import input_clipping
|
||||
|
||||
def is_paragraph_break(match):
|
||||
"""
|
||||
根据给定的匹配结果来判断换行符是否表示段落分隔。
|
||||
如果换行符前为句子结束标志(句号,感叹号,问号),且下一个字符为大写字母,则换行符更有可能表示段落分隔。
|
||||
也可以根据之前的内容长度来判断段落是否已经足够长。
|
||||
"""
|
||||
prev_char, next_char = match.groups()
|
||||
|
||||
# 句子结束标志
|
||||
sentence_endings = ".!?"
|
||||
|
||||
# 设定一个最小段落长度阈值
|
||||
min_paragraph_length = 140
|
||||
|
||||
if prev_char in sentence_endings and next_char.isupper() and len(match.string[:match.start(1)]) > min_paragraph_length:
|
||||
return "\n\n"
|
||||
else:
|
||||
return " "
|
||||
|
||||
def normalize_text(text):
|
||||
"""
|
||||
通过把连字(ligatures)等文本特殊符号转换为其基本形式来对文本进行归一化处理。
|
||||
例如,将连字 "fi" 转换为 "f" 和 "i"。
|
||||
"""
|
||||
# 对文本进行归一化处理,分解连字
|
||||
normalized_text = unicodedata.normalize("NFKD", text)
|
||||
|
||||
# 替换其他特殊字符
|
||||
cleaned_text = re.sub(r'[^\x00-\x7F]+', '', normalized_text)
|
||||
|
||||
return cleaned_text
|
||||
|
||||
def clean_text(raw_text):
|
||||
"""
|
||||
对从 PDF 提取出的原始文本进行清洗和格式化处理。
|
||||
1. 对原始文本进行归一化处理。
|
||||
2. 替换跨行的连词,例如 “Espe-\ncially” 转换为 “Especially”。
|
||||
3. 根据 heuristic 规则判断换行符是否是段落分隔,并相应地进行替换。
|
||||
"""
|
||||
# 对文本进行归一化处理
|
||||
normalized_text = normalize_text(raw_text)
|
||||
|
||||
# 替换跨行的连词
|
||||
text = re.sub(r'(\w+-\n\w+)', lambda m: m.group(1).replace('-\n', ''), normalized_text)
|
||||
|
||||
# 根据前后相邻字符的特点,找到原文本中的换行符
|
||||
newlines = re.compile(r'(\S)\n(\S)')
|
||||
|
||||
# 根据 heuristic 规则,用空格或段落分隔符替换原换行符
|
||||
final_text = re.sub(newlines, lambda m: m.group(1) + is_paragraph_break(m) + m.group(2), text)
|
||||
|
||||
return final_text.strip()
|
||||
|
||||
def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
|
||||
import time, glob, os, fitz
|
||||
print('begin analysis on:', file_manifest)
|
||||
for index, fp in enumerate(file_manifest):
|
||||
with fitz.open(fp) as doc:
|
||||
file_content = ""
|
||||
for page in doc:
|
||||
file_content += page.get_text()
|
||||
file_content = clean_text(file_content)
|
||||
print(file_content)
|
||||
file_write_buffer = []
|
||||
for file_name in file_manifest:
|
||||
print('begin analysis on:', file_name)
|
||||
############################## <第 0 步,切割PDF> ##################################
|
||||
# 递归地切割PDF文件,每一块(尽量是完整的一个section,比如introduction,experiment等,必要时再进行切割)
|
||||
# 的长度必须小于 2500 个 Token
|
||||
file_content, page_one = read_and_clean_pdf_text(file_name) # (尝试)按照章节切割PDF
|
||||
file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
|
||||
page_one = str(page_one).encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
|
||||
|
||||
prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else ""
|
||||
i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```'
|
||||
i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}'
|
||||
chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
TOKEN_LIMIT_PER_FRAGMENT = 2500
|
||||
|
||||
if not fast_debug:
|
||||
msg = '正常'
|
||||
# ** gpt request **
|
||||
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
||||
inputs=i_say,
|
||||
inputs_show_user=i_say_show_user,
|
||||
llm_kwargs=llm_kwargs,
|
||||
chatbot=chatbot,
|
||||
history=[],
|
||||
sys_prompt="总结文章。"
|
||||
) # 带超时倒计时
|
||||
from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
|
||||
from request_llm.bridge_all import model_info
|
||||
enc = model_info["gpt-3.5-turbo"]['tokenizer']
|
||||
def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
|
||||
paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
|
||||
txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT)
|
||||
page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
|
||||
txt=str(page_one), get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
|
||||
# 为了更好的效果,我们剥离Introduction之后的部分(如果有)
|
||||
paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
|
||||
|
||||
############################## <第 1 步,从摘要中提取高价值信息,放到history中> ##################################
|
||||
final_results = []
|
||||
final_results.append(paper_meta)
|
||||
|
||||
chatbot[-1] = (i_say_show_user, gpt_say)
|
||||
history.append(i_say_show_user); history.append(gpt_say)
|
||||
yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
|
||||
if not fast_debug: time.sleep(2)
|
||||
############################## <第 2 步,迭代地历遍整个文章,提取精炼信息> ##################################
|
||||
i_say_show_user = f'首先你在中文语境下通读整篇论文。'; gpt_say = "[Local Message] 收到。" # 用户提示
|
||||
chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=[]) # 更新UI
|
||||
|
||||
all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)])
|
||||
i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。'
|
||||
chatbot.append((i_say, "[Local Message] waiting gpt response."))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
iteration_results = []
|
||||
last_iteration_result = paper_meta # 初始值是摘要
|
||||
MAX_WORD_TOTAL = 4096 * 0.7
|
||||
n_fragment = len(paper_fragments)
|
||||
if n_fragment >= 20: print('文章极长,不能达到预期效果')
|
||||
for i in range(n_fragment):
|
||||
NUM_OF_WORD = MAX_WORD_TOTAL // n_fragment
|
||||
i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} Chinese characters: {paper_fragments[i]}"
|
||||
i_say_show_user = f"[{i+1}/{n_fragment}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} Chinese characters: {paper_fragments[i][:200]}"
|
||||
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问
|
||||
llm_kwargs, chatbot,
|
||||
history=["The main idea of the previous section is?", last_iteration_result], # 迭代上一次的结果
|
||||
sys_prompt="Extract the main idea of this section with Chinese." # 提示
|
||||
)
|
||||
iteration_results.append(gpt_say)
|
||||
last_iteration_result = gpt_say
|
||||
|
||||
if not fast_debug:
|
||||
msg = '正常'
|
||||
# ** gpt request **
|
||||
############################## <第 3 步,整理history,提取总结> ##################################
|
||||
final_results.extend(iteration_results)
|
||||
final_results.append('Please conclude this paper discussed above.')
|
||||
# This prompt is from https://github.com/kaixindelele/ChatPaper/blob/main/chat_paper.py
|
||||
NUM_OF_WORD = 1000
|
||||
i_say = """
|
||||
1. Mark the title of the paper (with Chinese translation)
|
||||
2. list all the authors' names (use English)
|
||||
3. mark the first author's affiliation (output Chinese translation only)
|
||||
4. mark the keywords of this article (use English)
|
||||
5. link to the paper, Github code link (if available, fill in Github:None if not)
|
||||
6. summarize according to the following four points. Be sure to use Chinese answers (proper nouns need to be marked in English)
|
||||
- (1):What is the research background of this article?
|
||||
- (2):What are the past methods? What are the problems with them? Is the approach well motivated?
|
||||
- (3):What is the research methodology proposed in this paper?
|
||||
- (4):On what task and what performance is achieved by the methods in this paper? Can the performance support their goals?
|
||||
Follow the format of the output that follows:
|
||||
1. Title: xxx\n\n
|
||||
2. Authors: xxx\n\n
|
||||
3. Affiliation: xxx\n\n
|
||||
4. Keywords: xxx\n\n
|
||||
5. Urls: xxx or xxx , xxx \n\n
|
||||
6. Summary: \n\n
|
||||
- (1):xxx;\n
|
||||
- (2):xxx;\n
|
||||
- (3):xxx;\n
|
||||
- (4):xxx.\n\n
|
||||
Be sure to use Chinese answers (proper nouns need to be marked in English), statements as concise and academic as possible,
|
||||
do not have too much repetitive information, numerical values using the original numbers.
|
||||
"""
|
||||
# This prompt is from https://github.com/kaixindelele/ChatPaper/blob/main/chat_paper.py
|
||||
file_write_buffer.extend(final_results)
|
||||
i_say, final_results = input_clipping(i_say, final_results, max_token_limit=2000)
|
||||
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
||||
inputs=i_say,
|
||||
inputs_show_user=i_say,
|
||||
llm_kwargs=llm_kwargs,
|
||||
chatbot=chatbot,
|
||||
history=history,
|
||||
sys_prompt="总结文章。"
|
||||
) # 带超时倒计时
|
||||
inputs=i_say, inputs_show_user='开始最终总结',
|
||||
llm_kwargs=llm_kwargs, chatbot=chatbot, history=final_results,
|
||||
sys_prompt= f"Extract the main idea of this paper with less than {NUM_OF_WORD} Chinese characters"
|
||||
)
|
||||
final_results.append(gpt_say)
|
||||
file_write_buffer.extend([i_say, gpt_say])
|
||||
############################## <第 4 步,设置一个token上限> ##################################
|
||||
_, final_results = input_clipping("", final_results, max_token_limit=3200)
|
||||
yield from update_ui(chatbot=chatbot, history=final_results) # 注意这里的历史记录被替代了
|
||||
|
||||
chatbot[-1] = (i_say, gpt_say)
|
||||
history.append(i_say); history.append(gpt_say)
|
||||
yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
|
||||
res = write_results_to_file(history)
|
||||
chatbot.append(("完成了吗?", res))
|
||||
yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
|
||||
res = write_results_to_file(file_write_buffer, file_name=gen_time_str())
|
||||
promote_file_to_downloadzone(res.split('\t')[-1], chatbot=chatbot)
|
||||
yield from update_ui(chatbot=chatbot, history=final_results) # 刷新界面
|
||||
|
||||
|
||||
@CatchException
|
||||
@@ -151,10 +137,7 @@ def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
|
||||
return
|
||||
|
||||
# 搜索需要处理的文件清单
|
||||
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] # + \
|
||||
# [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \
|
||||
# [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
|
||||
# [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
|
||||
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)]
|
||||
|
||||
# 如果没找到任何文件
|
||||
if len(file_manifest) == 0:
|
||||
|
||||
@@ -1,15 +1,19 @@
|
||||
from toolbox import CatchException, report_execption, write_results_to_file
|
||||
from toolbox import update_ui
|
||||
from toolbox import update_ui, promote_file_to_downloadzone, update_ui_lastest_msg, disable_auto_promotion
|
||||
from toolbox import write_history_to_file, get_log_folder
|
||||
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
|
||||
from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
|
||||
from .crazy_utils import read_and_clean_pdf_text
|
||||
from .pdf_fns.parse_pdf import parse_pdf, get_avail_grobid_url
|
||||
from colorful import *
|
||||
import glob
|
||||
import os
|
||||
import math
|
||||
|
||||
@CatchException
|
||||
def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt, web_port):
|
||||
import glob
|
||||
import os
|
||||
def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
|
||||
disable_auto_promotion(chatbot)
|
||||
# 基本信息:功能、贡献者
|
||||
chatbot.append([
|
||||
"函数插件功能?",
|
||||
@@ -30,20 +34,11 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys_
|
||||
# 清空历史,以免输入溢出
|
||||
history = []
|
||||
|
||||
from .crazy_utils import get_files_from_everything
|
||||
success, file_manifest, project_folder = get_files_from_everything(txt, type='.pdf')
|
||||
# 检测输入参数,如没有给定输入参数,直接退出
|
||||
if os.path.exists(txt):
|
||||
project_folder = txt
|
||||
else:
|
||||
if txt == "":
|
||||
txt = '空空如也的输入栏'
|
||||
report_execption(chatbot, history,
|
||||
a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
# 搜索需要处理的文件清单
|
||||
file_manifest = [f for f in glob.glob(
|
||||
f'{project_folder}/**/*.pdf', recursive=True)]
|
||||
if not success:
|
||||
if txt == "": txt = '空空如也的输入栏'
|
||||
|
||||
# 如果没找到任何文件
|
||||
if len(file_manifest) == 0:
|
||||
@@ -53,18 +48,129 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys_
|
||||
return
|
||||
|
||||
# 开始正式执行任务
|
||||
yield from 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt)
|
||||
grobid_url = get_avail_grobid_url()
|
||||
if grobid_url is not None:
|
||||
yield from 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url)
|
||||
else:
|
||||
yield from update_ui_lastest_msg("GROBID服务不可用,请检查config中的GROBID_URL。作为替代,现在将执行效果稍差的旧版代码。", chatbot, history, delay=3)
|
||||
yield from 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
|
||||
|
||||
|
||||
def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt):
|
||||
import os
|
||||
def 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url):
|
||||
import copy
|
||||
import tiktoken
|
||||
TOKEN_LIMIT_PER_FRAGMENT = 1280
|
||||
generated_conclusion_files = []
|
||||
generated_html_files = []
|
||||
DST_LANG = "中文"
|
||||
for index, fp in enumerate(file_manifest):
|
||||
chatbot.append(["当前进度:", f"正在连接GROBID服务,请稍候: {grobid_url}\n如果等待时间过长,请修改config中的GROBID_URL,可修改成本地GROBID服务。"]); yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
article_dict = parse_pdf(fp, grobid_url)
|
||||
print(article_dict)
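# Shape of article_dict as consumed below (keys inferred from this function; GROBID may return more fields):
# {
#   'title':    'An Example Paper',
#   'authors':  'A. Author, B. Author',
#   'abstract': '...',
#   'sections': [{'heading': '1 Introduction', 'text': '...'}, ...],
# }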
|
||||
prompt = "以下是一篇学术论文的基本信息:\n"
|
||||
# title
|
||||
title = article_dict.get('title', '无法获取 title'); prompt += f'title:{title}\n\n'
|
||||
# authors
|
||||
authors = article_dict.get('authors', '无法获取 authors'); prompt += f'authors:{authors}\n\n'
|
||||
# abstract
|
||||
abstract = article_dict.get('abstract', '无法获取 abstract'); prompt += f'abstract:{abstract}\n\n'
|
||||
# command
|
||||
prompt += f"请将题目和摘要翻译为{DST_LANG}。"
|
||||
meta = [f'# Title:\n\n', title, f'# Abstract:\n\n', abstract ]
|
||||
|
||||
# 单线,获取文章meta信息
|
||||
paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
||||
inputs=prompt,
|
||||
inputs_show_user=prompt,
|
||||
llm_kwargs=llm_kwargs,
|
||||
chatbot=chatbot, history=[],
|
||||
sys_prompt="You are an academic paper reader。",
|
||||
)
|
||||
|
||||
# 多线,翻译
|
||||
inputs_array = []
|
||||
inputs_show_user_array = []
|
||||
|
||||
# get_token_num
|
||||
from request_llm.bridge_all import model_info
|
||||
enc = model_info[llm_kwargs['llm_model']]['tokenizer']
|
||||
def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
|
||||
from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
|
||||
|
||||
def break_down(txt):
|
||||
raw_token_num = get_token_num(txt)
|
||||
if raw_token_num <= TOKEN_LIMIT_PER_FRAGMENT:
|
||||
return [txt]
|
||||
else:
|
||||
# raw_token_num > TOKEN_LIMIT_PER_FRAGMENT
|
||||
# find a smooth token limit to achieve even separation
|
||||
count = int(math.ceil(raw_token_num / TOKEN_LIMIT_PER_FRAGMENT))
|
||||
token_limit_smooth = raw_token_num // count + count
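# Illustrative arithmetic with hypothetical numbers: a 3000-token section and
# TOKEN_LIMIT_PER_FRAGMENT = 1280 give count = ceil(3000/1280) = 3 and
# token_limit_smooth = 3000 // 3 + 3 = 1003, so the section is cut into three
# roughly equal ~1000-token fragments instead of 1280 + 1280 + 440.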
|
||||
return breakdown_txt_to_satisfy_token_limit_for_pdf(txt, get_token_fn=get_token_num, limit=token_limit_smooth)
|
||||
|
||||
for section in article_dict.get('sections'):
|
||||
if len(section['text']) == 0: continue
|
||||
section_frags = break_down(section['text'])
|
||||
for i, fragment in enumerate(section_frags):
|
||||
heading = section['heading']
|
||||
if len(section_frags) > 1: heading += f' Part-{i+1}'
|
||||
inputs_array.append(
|
||||
f"你需要翻译{heading}章节,内容如下: \n\n{fragment}"
|
||||
)
|
||||
inputs_show_user_array.append(
|
||||
f"# {heading}\n\n{fragment}"
|
||||
)
|
||||
|
||||
gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
|
||||
inputs_array=inputs_array,
|
||||
inputs_show_user_array=inputs_show_user_array,
|
||||
llm_kwargs=llm_kwargs,
|
||||
chatbot=chatbot,
|
||||
history_array=[meta for _ in inputs_array],
|
||||
sys_prompt_array=[
|
||||
"请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" for _ in inputs_array],
|
||||
)
|
||||
res_path = write_history_to_file(meta + ["# Meta Translation" , paper_meta_info] + gpt_response_collection, file_basename=None, file_fullname=None)
|
||||
promote_file_to_downloadzone(res_path, rename_file=os.path.basename(fp)+'.md', chatbot=chatbot)
|
||||
generated_conclusion_files.append(res_path)
|
||||
|
||||
ch = construct_html()
|
||||
orig = ""
|
||||
trans = ""
|
||||
gpt_response_collection_html = copy.deepcopy(gpt_response_collection)
|
||||
for i,k in enumerate(gpt_response_collection_html):
|
||||
if i%2==0:
|
||||
gpt_response_collection_html[i] = inputs_show_user_array[i//2]
|
||||
else:
|
||||
gpt_response_collection_html[i] = gpt_response_collection_html[i]
|
||||
|
||||
final = ["", "", "一、论文概况", "", "Abstract", paper_meta_info, "二、论文翻译", ""]
|
||||
final.extend(gpt_response_collection_html)
|
||||
for i, k in enumerate(final):
|
||||
if i%2==0:
|
||||
orig = k
|
||||
if i%2==1:
|
||||
trans = k
|
||||
ch.add_row(a=orig, b=trans)
|
||||
create_report_file_name = f"{os.path.basename(fp)}.trans.html"
|
||||
html_file = ch.save_file(create_report_file_name)
|
||||
generated_html_files.append(html_file)
|
||||
promote_file_to_downloadzone(html_file, rename_file=os.path.basename(html_file), chatbot=chatbot)
|
||||
|
||||
chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files)))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
|
||||
def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
|
||||
import copy
|
||||
TOKEN_LIMIT_PER_FRAGMENT = 1280
|
||||
generated_conclusion_files = []
|
||||
generated_html_files = []
|
||||
for index, fp in enumerate(file_manifest):
|
||||
# 读取PDF文件
|
||||
file_content, page_one = read_and_clean_pdf_text(fp)
|
||||
file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
|
||||
page_one = str(page_one).encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
|
||||
|
||||
# 递归地切割PDF文件
|
||||
from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
|
||||
@@ -74,7 +180,7 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
|
||||
paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
|
||||
txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT)
|
||||
page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
|
||||
txt=str(page_one), get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
|
||||
txt=page_one, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
|
||||
|
||||
# 为了更好的效果,我们剥离Introduction之后的部分(如果有)
|
||||
paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
|
||||
@@ -100,15 +206,15 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
|
||||
"请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" for _ in paper_fragments],
|
||||
# max_workers=5 # OpenAI所允许的最大并行过载
|
||||
)
|
||||
|
||||
gpt_response_collection_md = copy.deepcopy(gpt_response_collection)
|
||||
# 整理报告的格式
|
||||
for i,k in enumerate(gpt_response_collection):
|
||||
for i,k in enumerate(gpt_response_collection_md):
|
||||
if i%2==0:
|
||||
gpt_response_collection[i] = f"\n\n---\n\n ## 原文[{i//2}/{len(gpt_response_collection)//2}]: \n\n {paper_fragments[i//2].replace('#', '')} \n\n---\n\n ## 翻译[{i//2}/{len(gpt_response_collection)//2}]:\n "
|
||||
gpt_response_collection_md[i] = f"\n\n---\n\n ## 原文[{i//2}/{len(gpt_response_collection_md)//2}]: \n\n {paper_fragments[i//2].replace('#', '')} \n\n---\n\n ## 翻译[{i//2}/{len(gpt_response_collection_md)//2}]:\n "
|
||||
else:
|
||||
gpt_response_collection[i] = gpt_response_collection[i]
|
||||
gpt_response_collection_md[i] = gpt_response_collection_md[i]
|
||||
final = ["一、论文概况\n\n---\n\n", paper_meta_info.replace('# ', '### ') + '\n\n---\n\n', "二、论文翻译", ""]
|
||||
final.extend(gpt_response_collection)
|
||||
final.extend(gpt_response_collection_md)
|
||||
create_report_file_name = f"{os.path.basename(fp)}.trans.md"
|
||||
res = write_results_to_file(final, file_name=create_report_file_name)
|
||||
|
||||
@@ -117,15 +223,87 @@ def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot,
|
||||
chatbot.append((f"{fp}完成了吗?", res))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
# write html
|
||||
try:
|
||||
ch = construct_html()
|
||||
orig = ""
|
||||
trans = ""
|
||||
gpt_response_collection_html = copy.deepcopy(gpt_response_collection)
|
||||
for i,k in enumerate(gpt_response_collection_html):
|
||||
if i%2==0:
|
||||
gpt_response_collection_html[i] = paper_fragments[i//2].replace('#', '')
|
||||
else:
|
||||
gpt_response_collection_html[i] = gpt_response_collection_html[i]
|
||||
final = ["论文概况", paper_meta_info.replace('# ', '### '), "二、论文翻译", ""]
|
||||
final.extend(gpt_response_collection_html)
|
||||
for i, k in enumerate(final):
|
||||
if i%2==0:
|
||||
orig = k
|
||||
if i%2==1:
|
||||
trans = k
|
||||
ch.add_row(a=orig, b=trans)
|
||||
create_report_file_name = f"{os.path.basename(fp)}.trans.html"
|
||||
generated_html_files.append(ch.save_file(create_report_file_name))
|
||||
except:
|
||||
from toolbox import trimmed_format_exc
|
||||
print('writing html result failed:', trimmed_format_exc())
|
||||
|
||||
# 准备文件的下载
|
||||
import shutil
|
||||
for pdf_path in generated_conclusion_files:
|
||||
# 重命名文件
|
||||
rename_file = f'./gpt_log/总结论文-{os.path.basename(pdf_path)}'
|
||||
if os.path.exists(rename_file):
|
||||
os.remove(rename_file)
|
||||
shutil.copyfile(pdf_path, rename_file)
|
||||
if os.path.exists(pdf_path):
|
||||
os.remove(pdf_path)
|
||||
chatbot.append(("给出输出文件清单", str(generated_conclusion_files)))
|
||||
rename_file = f'翻译-{os.path.basename(pdf_path)}'
|
||||
promote_file_to_downloadzone(pdf_path, rename_file=rename_file, chatbot=chatbot)
|
||||
for html_path in generated_html_files:
|
||||
# 重命名文件
|
||||
rename_file = f'翻译-{os.path.basename(html_path)}'
|
||||
promote_file_to_downloadzone(html_path, rename_file=rename_file, chatbot=chatbot)
|
||||
chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files)))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
|
||||
class construct_html():
|
||||
def __init__(self) -> None:
|
||||
self.css = """
|
||||
.row {
|
||||
display: flex;
|
||||
flex-wrap: wrap;
|
||||
}
|
||||
|
||||
.column {
|
||||
flex: 1;
|
||||
padding: 10px;
|
||||
}
|
||||
|
||||
.table-header {
|
||||
font-weight: bold;
|
||||
border-bottom: 1px solid black;
|
||||
}
|
||||
|
||||
.table-row {
|
||||
border-bottom: 1px solid lightgray;
|
||||
}
|
||||
|
||||
.table-cell {
|
||||
padding: 5px;
|
||||
}
|
||||
"""
|
||||
self.html_string = f'<!DOCTYPE html><head><meta charset="utf-8"><title>翻译结果</title><style>{self.css}</style></head>'
|
||||
|
||||
|
||||
def add_row(self, a, b):
|
||||
tmp = """
|
||||
<div class="row table-row">
|
||||
<div class="column table-cell">REPLACE_A</div>
|
||||
<div class="column table-cell">REPLACE_B</div>
|
||||
</div>
|
||||
"""
|
||||
from toolbox import markdown_convertion
|
||||
tmp = tmp.replace('REPLACE_A', markdown_convertion(a))
|
||||
tmp = tmp.replace('REPLACE_B', markdown_convertion(b))
|
||||
self.html_string += tmp
|
||||
|
||||
|
||||
def save_file(self, file_name):
|
||||
with open(os.path.join(get_log_folder(), file_name), 'w', encoding='utf8') as f:
|
||||
f.write(self.html_string.encode('utf-8', 'ignore').decode())
|
||||
return os.path.join(get_log_folder(), file_name)
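# Minimal usage sketch of construct_html (the file name and row contents are illustrative):
#   ch = construct_html()
#   ch.add_row(a="# Abstract\n\nWe study ...", b="# 摘要\n\n我们研究……")   # left: original, right: translation
#   saved_path = ch.save_file("demo.trans.html")                           # written into the log folder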
|
||||
|
||||
187 crazy_functions/数学动画生成manim.py Normal file
@@ -0,0 +1,187 @@
|
||||
from toolbox import CatchException, update_ui, gen_time_str
|
||||
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
|
||||
from .crazy_utils import input_clipping
|
||||
|
||||
def inspect_dependency(chatbot, history):
|
||||
# 尝试导入依赖,如果缺少依赖,则给出安装建议
|
||||
try:
|
||||
import manim
|
||||
return True
|
||||
except:
|
||||
chatbot.append(["导入依赖失败", "使用该模块需要额外依赖,安装方法:```pip install manim manimgl```"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return False
|
||||
|
||||
def eval_manim(code):
|
||||
import subprocess, sys, os, shutil
|
||||
|
||||
with open('gpt_log/MyAnimation.py', 'w', encoding='utf8') as f:
|
||||
f.write(code)
|
||||
|
||||
def get_class_name(class_string):
|
||||
import re
|
||||
# Use regex to extract the class name
|
||||
class_name = re.search(r'class (\w+)\(', class_string).group(1)
|
||||
return class_name
|
||||
|
||||
class_name = get_class_name(code)
|
||||
|
||||
try:
|
||||
subprocess.check_output([sys.executable, '-c', f"from gpt_log.MyAnimation import {class_name}; {class_name}().render()"])
|
||||
dst = f'gpt_log/{class_name}-{gen_time_str()}.mp4'
shutil.move(f'media/videos/1080p60/{class_name}.mp4', dst)  # move the rendered video out of manim's default output folder
return dst
|
||||
except subprocess.CalledProcessError as e:
|
||||
output = e.output.decode()
|
||||
print(f"Command returned non-zero exit status {e.returncode}: {output}.")
|
||||
return f"Evaluating python script failed: {output}."
|
||||
except:
|
||||
print('generating mp4 failed')
|
||||
return "Generating mp4 failed."
|
||||
|
||||
|
||||
def get_code_block(reply):
|
||||
import re
|
||||
pattern = r"```([\s\S]*?)```" # regex pattern to match code blocks
|
||||
matches = re.findall(pattern, reply) # find all code blocks in text
|
||||
if len(matches) != 1:
|
||||
raise RuntimeError("GPT is not generating proper code.")
|
||||
code = matches[0]
if code.startswith('python'): code = code[len('python'):]  # drop the language tag rather than stripping characters
return code.strip() # code block
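# Quick illustration (the reply text is made up): for a reply containing exactly one fenced block,
#   get_code_block("Sure:\n```python\nfrom manim import *\nclass Demo(Scene): pass\n```")
# returns "from manim import *\nclass Demo(Scene): pass".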
|
||||
|
||||
@CatchException
|
||||
def 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
"""
|
||||
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
|
||||
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
|
||||
plugin_kwargs 插件模型的参数,暂时没有用武之地
|
||||
chatbot 聊天显示框的句柄,用于显示给用户
|
||||
history 聊天历史,前情提要
|
||||
system_prompt 给gpt的静默提醒
|
||||
web_port 当前软件运行的端口号
|
||||
"""
|
||||
# 清空历史,以免输入溢出
|
||||
history = []
|
||||
|
||||
# 基本信息:功能、贡献者
|
||||
chatbot.append([
|
||||
"函数插件功能?",
|
||||
"生成数学动画, 此插件处于开发阶段, 建议暂时不要使用, 作者: binary-husky, 插件初始化中 ..."
|
||||
])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
# 尝试导入依赖, 如果缺少依赖, 则给出安装建议
|
||||
dep_ok = yield from inspect_dependency(chatbot=chatbot, history=history) # 刷新界面
|
||||
if not dep_ok: return
|
||||
|
||||
# 输入
|
||||
i_say = f'Generate an animation to show: ' + txt
|
||||
demo = ["Here is some examples of manim", examples_of_manim()]
|
||||
_, demo = input_clipping(inputs="", history=demo, max_token_limit=2560)
|
||||
# 开始
|
||||
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
||||
inputs=i_say, inputs_show_user=i_say,
|
||||
llm_kwargs=llm_kwargs, chatbot=chatbot, history=demo,
|
||||
sys_prompt=
|
||||
r"Write a animation script with 3blue1brown's manim. "+
|
||||
r"Please begin with `from manim import *`. " +
|
||||
r"Answer me with a code block wrapped by ```."
|
||||
)
|
||||
chatbot.append(["开始生成动画", "..."])
|
||||
history.extend([i_say, gpt_say])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
|
||||
|
||||
# 将代码转为动画
|
||||
code = get_code_block(gpt_say)
|
||||
res = eval_manim(code)
|
||||
|
||||
chatbot.append(("生成的视频文件路径", res))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
|
||||
|
||||
# 在这里放一些网上搜集的demo,辅助gpt生成代码
|
||||
def examples_of_manim():
|
||||
return r"""
|
||||
|
||||
|
||||
```
|
||||
|
||||
class MovingGroupToDestination(Scene):
|
||||
def construct(self):
|
||||
group = VGroup(Dot(LEFT), Dot(ORIGIN), Dot(RIGHT, color=RED), Dot(2 * RIGHT)).scale(1.4)
|
||||
dest = Dot([4, 3, 0], color=YELLOW)
|
||||
self.add(group, dest)
|
||||
self.play(group.animate.shift(dest.get_center() - group[2].get_center()))
|
||||
self.wait(0.5)
|
||||
|
||||
```
|
||||
|
||||
|
||||
```
|
||||
|
||||
class LatexWithMovingFramebox(Scene):
|
||||
def construct(self):
|
||||
text=MathTex(
|
||||
"\\frac{d}{dx}f(x)g(x)=","f(x)\\frac{d}{dx}g(x)","+",
|
||||
"g(x)\\frac{d}{dx}f(x)"
|
||||
)
|
||||
self.play(Write(text))
|
||||
framebox1 = SurroundingRectangle(text[1], buff = .1)
|
||||
framebox2 = SurroundingRectangle(text[3], buff = .1)
|
||||
self.play(
|
||||
Create(framebox1),
|
||||
)
|
||||
self.wait()
|
||||
self.play(
|
||||
ReplacementTransform(framebox1,framebox2),
|
||||
)
|
||||
self.wait()
|
||||
|
||||
```
|
||||
|
||||
|
||||
|
||||
```
|
||||
|
||||
class PointWithTrace(Scene):
|
||||
def construct(self):
|
||||
path = VMobject()
|
||||
dot = Dot()
|
||||
path.set_points_as_corners([dot.get_center(), dot.get_center()])
|
||||
def update_path(path):
|
||||
previous_path = path.copy()
|
||||
previous_path.add_points_as_corners([dot.get_center()])
|
||||
path.become(previous_path)
|
||||
path.add_updater(update_path)
|
||||
self.add(path, dot)
|
||||
self.play(Rotating(dot, radians=PI, about_point=RIGHT, run_time=2))
|
||||
self.wait()
|
||||
self.play(dot.animate.shift(UP))
|
||||
self.play(dot.animate.shift(LEFT))
|
||||
self.wait()
|
||||
|
||||
```
|
||||
|
||||
```
|
||||
|
||||
# do not use get_graph, this function is deprecated
|
||||
|
||||
class ExampleFunctionGraph(Scene):
|
||||
def construct(self):
|
||||
cos_func = FunctionGraph(
|
||||
lambda t: np.cos(t) + 0.5 * np.cos(7 * t) + (1 / 7) * np.cos(14 * t),
|
||||
color=RED,
|
||||
)
|
||||
|
||||
sin_func_1 = FunctionGraph(
|
||||
lambda t: np.sin(t) + 0.5 * np.sin(7 * t) + (1 / 7) * np.sin(14 * t),
|
||||
color=BLUE,
|
||||
)
|
||||
|
||||
sin_func_2 = FunctionGraph(
|
||||
lambda t: np.sin(t) + 0.5 * np.sin(7 * t) + (1 / 7) * np.sin(14 * t),
|
||||
x_range=[-4, 4],
|
||||
color=GREEN,
|
||||
).move_to([0, 1, 0])
|
||||
|
||||
self.add(cos_func, sin_func_1, sin_func_2)
|
||||
|
||||
```
|
||||
"""
|
||||
@@ -13,6 +13,8 @@ def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_pro
|
||||
# 递归地切割PDF文件,每一块(尽量是完整的一个section,比如introduction,experiment等,必要时再进行切割)
|
||||
# 的长度必须小于 2500 个 Token
|
||||
file_content, page_one = read_and_clean_pdf_text(file_name) # (尝试)按照章节切割PDF
|
||||
file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
|
||||
page_one = str(page_one).encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
|
||||
|
||||
TOKEN_LIMIT_PER_FRAGMENT = 2500
|
||||
|
||||
|
||||
@@ -75,7 +75,11 @@ def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
|
||||
proxies, = get_conf('proxies')
|
||||
urls = google(txt, proxies)
|
||||
history = []
|
||||
|
||||
if len(urls) == 0:
|
||||
chatbot.append((f"结论:{txt}",
|
||||
"[Local Message] 受到google限制,无法从google获取信息!"))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
|
||||
return
|
||||
# ------------- < 第2步:依次访问网页 > -------------
|
||||
max_search_result = 5 # 最多收纳多少个网页的结果
|
||||
for index, url in enumerate(urls[:max_search_result]):
|
||||
|
||||
106 crazy_functions/联网的ChatGPT_bing版.py Normal file
@@ -0,0 +1,106 @@
|
||||
from toolbox import CatchException, update_ui
|
||||
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, input_clipping
|
||||
import requests
|
||||
from bs4 import BeautifulSoup
|
||||
from request_llm.bridge_all import model_info
|
||||
|
||||
|
||||
def bing_search(query, proxies=None):
|
||||
query = query
|
||||
url = f"https://cn.bing.com/search?q={query}"
|
||||
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36'}
|
||||
response = requests.get(url, headers=headers, proxies=proxies)
|
||||
soup = BeautifulSoup(response.content, 'html.parser')
|
||||
results = []
|
||||
for g in soup.find_all('li', class_='b_algo'):
|
||||
anchors = g.find_all('a')
|
||||
if anchors:
|
||||
link = anchors[0]['href']
|
||||
if not link.startswith('http'):
|
||||
continue
|
||||
title = g.find('h2').text
|
||||
item = {'title': title, 'link': link}
|
||||
results.append(item)
|
||||
|
||||
for r in results:
|
||||
print(r['link'])
|
||||
return results
|
||||
|
||||
|
||||
def scrape_text(url, proxies) -> str:
|
||||
"""Scrape text from a webpage
|
||||
|
||||
Args:
|
||||
url (str): The URL to scrape text from
|
||||
|
||||
Returns:
|
||||
str: The scraped text
|
||||
"""
|
||||
headers = {
|
||||
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36',
|
||||
'Content-Type': 'text/plain',
|
||||
}
|
||||
try:
|
||||
response = requests.get(url, headers=headers, proxies=proxies, timeout=8)
|
||||
if response.encoding == "ISO-8859-1": response.encoding = response.apparent_encoding
|
||||
except:
|
||||
return "无法连接到该网页"
|
||||
soup = BeautifulSoup(response.text, "html.parser")
|
||||
for script in soup(["script", "style"]):
|
||||
script.extract()
|
||||
text = soup.get_text()
|
||||
lines = (line.strip() for line in text.splitlines())
|
||||
chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
|
||||
text = "\n".join(chunk for chunk in chunks if chunk)
|
||||
return text
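# Minimal usage sketch (query and proxy settings are illustrative):
#   results = bing_search("transformer positional encoding", proxies=None)
#   page_text = scrape_text(results[0]['link'], proxies=None) if results else ""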
|
||||
|
||||
@CatchException
|
||||
def 连接bing搜索回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
"""
|
||||
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
|
||||
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
|
||||
plugin_kwargs 插件模型的参数,暂时没有用武之地
|
||||
chatbot 聊天显示框的句柄,用于显示给用户
|
||||
history 聊天历史,前情提要
|
||||
system_prompt 给gpt的静默提醒
|
||||
web_port 当前软件运行的端口号
|
||||
"""
|
||||
history = [] # 清空历史,以免输入溢出
|
||||
chatbot.append((f"请结合互联网信息回答以下问题:{txt}",
|
||||
"[Local Message] 请注意,您正在调用一个[函数插件]的模板,该模板可以实现ChatGPT联网信息综合。该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板。您若希望分享新的功能模组,请不吝PR!"))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
|
||||
|
||||
# ------------- < 第1步:爬取搜索引擎的结果 > -------------
|
||||
from toolbox import get_conf
|
||||
proxies, = get_conf('proxies')
|
||||
urls = bing_search(txt, proxies)
|
||||
history = []
|
||||
if len(urls) == 0:
|
||||
chatbot.append((f"结论:{txt}",
|
||||
"[Local Message] 受到bing限制,无法从bing获取信息!"))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
|
||||
return
|
||||
# ------------- < 第2步:依次访问网页 > -------------
|
||||
max_search_result = 8 # 最多收纳多少个网页的结果
|
||||
for index, url in enumerate(urls[:max_search_result]):
|
||||
res = scrape_text(url['link'], proxies)
|
||||
history.extend([f"第{index}份搜索结果:", res])
|
||||
chatbot.append([f"第{index}份搜索结果:", res[:500]+"......"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
|
||||
|
||||
# ------------- < 第3步:ChatGPT综合 > -------------
|
||||
i_say = f"从以上搜索结果中抽取信息,然后回答问题:{txt}"
|
||||
i_say, history = input_clipping( # 裁剪输入,从最长的条目开始裁剪,防止爆token
|
||||
inputs=i_say,
|
||||
history=history,
|
||||
max_token_limit=model_info[llm_kwargs['llm_model']]['max_token']*3//4
|
||||
)
|
||||
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
||||
inputs=i_say, inputs_show_user=i_say,
|
||||
llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
|
||||
sys_prompt="请从给定的若干条搜索结果中抽取信息,对最相关的两个搜索结果进行总结,然后回答问题。"
|
||||
)
|
||||
chatbot[-1] = (i_say, gpt_say)
|
||||
history.append(i_say);history.append(gpt_say)
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
|
||||
|
||||
179 crazy_functions/虚空终端.py Normal file
@@ -0,0 +1,179 @@
|
||||
"""
|
||||
Explanation of the Void Terminal Plugin:
|
||||
|
||||
Please describe in natural language what you want to do.
|
||||
|
||||
1. You can open the plugin's dropdown menu to explore various capabilities of this project, and then describe your needs in natural language, for example:
|
||||
- "Please call the plugin to translate a PDF paper for me. I just uploaded the paper to the upload area."
|
||||
- "Please use the plugin to translate a PDF paper, with the address being https://www.nature.com/articles/s41586-019-1724-z.pdf."
|
||||
- "Generate an image with blooming flowers and lush green grass using the plugin."
|
||||
- "Translate the README using the plugin. The GitHub URL is https://github.com/facebookresearch/co-tracker."
|
||||
- "Translate an Arxiv paper for me. The Arxiv ID is 1812.10695. Remember to use the plugin and don't do it manually!"
|
||||
- "I don't like the current interface color. Modify the configuration and change the theme to THEME="High-Contrast"."
|
||||
- "Could you please explain the structure of the Transformer network?"
|
||||
|
||||
2. If you use keywords like "call the plugin xxx", "modify the configuration xxx", "please", etc., your intention can be recognized more accurately.
|
||||
|
||||
3. Your intention can be recognized more accurately when using powerful models like GPT4. This plugin is relatively new, so please feel free to provide feedback on GitHub.
|
||||
|
||||
4. Now, if you need to process a file, please upload the file (drag the file to the file upload area) or describe the path to the file.
|
||||
|
||||
5. If you don't need to upload a file, you can simply repeat your command again.
|
||||
"""
|
||||
explain_msg = """
|
||||
## 虚空终端插件说明:
|
||||
|
||||
1. 请用**自然语言**描述您需要做什么。例如:
|
||||
- 「请调用插件,为我翻译PDF论文,论文我刚刚放到上传区了。」
|
||||
- 「请调用插件翻译PDF论文,地址为https://www.nature.com/articles/s41586-019-1724-z.pdf」
|
||||
- 「生成一张图片,图中鲜花怒放,绿草如茵,用插件实现。」
|
||||
- 「用插件翻译README,Github网址是https://github.com/facebookresearch/co-tracker」
|
||||
- 「给爷翻译Arxiv论文,arxiv论文的ID是1812.10695,记得用插件,不要自己瞎搞!」
|
||||
- 「我不喜欢当前的界面颜色,修改配置,把主题THEME更换为THEME="High-Contrast"。」
|
||||
- 「请问Transformer网络的结构是怎样的?」
|
||||
|
||||
2. 您可以打开插件下拉菜单以了解本项目的各种能力。
|
||||
|
||||
3. 如果您使用「调用插件xxx」、「修改配置xxx」、「请问」等关键词,您的意图可以被识别的更准确。
|
||||
|
||||
4. 建议使用 GPT3.5 或更强的模型,弱模型可能无法理解您的想法。该插件诞生时间不长,欢迎您前往Github反馈问题。
|
||||
|
||||
5. 现在,如果需要处理文件,请您上传文件(将文件拖动到文件上传区),或者描述文件所在的路径。
|
||||
|
||||
6. 如果不需要上传文件,现在您只需要再次重复一次您的指令即可。
|
||||
"""
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
from typing import List
|
||||
from toolbox import CatchException, update_ui, gen_time_str
|
||||
from toolbox import update_ui_lastest_msg, disable_auto_promotion
|
||||
from request_llm.bridge_all import predict_no_ui_long_connection
|
||||
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
|
||||
from crazy_functions.crazy_utils import input_clipping
|
||||
from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
|
||||
from crazy_functions.vt_fns.vt_state import VoidTerminalState
|
||||
from crazy_functions.vt_fns.vt_modify_config import modify_configuration_hot
|
||||
from crazy_functions.vt_fns.vt_modify_config import modify_configuration_reboot
|
||||
from crazy_functions.vt_fns.vt_call_plugin import execute_plugin
|
||||
|
||||
class UserIntention(BaseModel):
|
||||
user_prompt: str = Field(description="the content of user input", default="")
|
||||
intention_type: str = Field(description="the type of user intention, choose from ['ModifyConfiguration', 'ExecutePlugin', 'Chat']", default="ExecutePlugin")
|
||||
user_provide_file: bool = Field(description="whether the user provides a path to a file", default=False)
|
||||
user_provide_url: bool = Field(description="whether the user provides a url", default=False)
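# Example of a parsed intention (field values are illustrative):
#   UserIntention(user_prompt="请调用插件翻译PDF论文", intention_type="ExecutePlugin",
#                 user_provide_file=False, user_provide_url=False)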
|
||||
|
||||
|
||||
def chat(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
|
||||
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
||||
inputs=txt, inputs_show_user=txt,
|
||||
llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
|
||||
sys_prompt=system_prompt
|
||||
)
|
||||
chatbot[-1] = [txt, gpt_say]
|
||||
history.extend([txt, gpt_say])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
pass
|
||||
|
||||
|
||||
explain_intention_to_user = {
|
||||
'Chat': "聊天对话",
|
||||
'ExecutePlugin': "调用插件",
|
||||
'ModifyConfiguration': "修改配置",
|
||||
}
|
||||
|
||||
|
||||
def analyze_intention_with_simple_rules(txt):
|
||||
user_intention = UserIntention()
|
||||
user_intention.user_prompt = txt
|
||||
is_certain = False
|
||||
|
||||
if '请问' in txt:
|
||||
is_certain = True
|
||||
user_intention.intention_type = 'Chat'
|
||||
|
||||
if '用插件' in txt:
|
||||
is_certain = True
|
||||
user_intention.intention_type = 'ExecutePlugin'
|
||||
|
||||
if '修改配置' in txt:
|
||||
is_certain = True
|
||||
user_intention.intention_type = 'ModifyConfiguration'
|
||||
|
||||
return is_certain, user_intention
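# Illustrative call (the utterance is made up): "请调用插件翻译PDF论文" contains the keyword "用插件",
# so the rule-based check is certain and the request is routed to plugin execution:
#   is_certain, intent = analyze_intention_with_simple_rules("请调用插件翻译PDF论文")
#   # is_certain -> True, intent.intention_type -> 'ExecutePlugin'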
|
||||
|
||||
|
||||
@CatchException
|
||||
def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
disable_auto_promotion(chatbot=chatbot)
|
||||
# 获取当前虚空终端状态
|
||||
state = VoidTerminalState.get_state(chatbot)
|
||||
appendix_msg = ""
|
||||
|
||||
# 用简单的关键词检测用户意图
|
||||
is_certain, _ = analyze_intention_with_simple_rules(txt)
|
||||
if txt.startswith('private_upload/') and len(txt) == 34:
|
||||
state.set_state(chatbot=chatbot, key='has_provided_explaination', value=False)
|
||||
appendix_msg = "\n\n**很好,您已经上传了文件**,现在请您描述您的需求。"
|
||||
|
||||
if is_certain or (state.has_provided_explaination):
|
||||
# 如果意图明确,跳过提示环节
|
||||
state.set_state(chatbot=chatbot, key='has_provided_explaination', value=True)
|
||||
state.unlock_plugin(chatbot=chatbot)
|
||||
yield from update_ui(chatbot=chatbot, history=history)
|
||||
yield from 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
|
||||
return
|
||||
else:
|
||||
# 如果意图模糊,提示
|
||||
state.set_state(chatbot=chatbot, key='has_provided_explaination', value=True)
|
||||
state.lock_plugin(chatbot=chatbot)
|
||||
chatbot.append(("虚空终端状态:", explain_msg+appendix_msg))
|
||||
yield from update_ui(chatbot=chatbot, history=history)
|
||||
return
|
||||
|
||||
|
||||
|
||||
def 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
history = []
|
||||
chatbot.append(("虚空终端状态: ", f"正在执行任务: {txt}"))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
# ⭐ ⭐ ⭐ 分析用户意图
|
||||
is_certain, user_intention = analyze_intention_with_simple_rules(txt)
|
||||
if not is_certain:
|
||||
yield from update_ui_lastest_msg(
|
||||
lastmsg=f"正在执行任务: {txt}\n\n分析用户意图中", chatbot=chatbot, history=history, delay=0)
|
||||
gpt_json_io = GptJsonIO(UserIntention)
|
||||
rf_req = "\nchoose from ['ModifyConfiguration', 'ExecutePlugin', 'Chat']"
|
||||
inputs = "Analyze the intention of the user according to following user input: \n\n" + \
|
||||
">> " + (txt+rf_req).rstrip('\n').replace('\n','\n>> ') + '\n\n' + gpt_json_io.format_instructions
|
||||
run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
|
||||
inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
|
||||
analyze_res = run_gpt_fn(inputs, "")
|
||||
try:
|
||||
user_intention = gpt_json_io.generate_output_auto_repair(analyze_res, run_gpt_fn)
|
||||
lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 意图={explain_intention_to_user[user_intention.intention_type]}",
|
||||
except JsonStringError as e:
|
||||
yield from update_ui_lastest_msg(
|
||||
lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 失败 当前语言模型({llm_kwargs['llm_model']})不能理解您的意图", chatbot=chatbot, history=history, delay=0)
|
||||
return
|
||||
else:
|
||||
pass
|
||||
|
||||
yield from update_ui_lastest_msg(
|
||||
lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 意图={explain_intention_to_user[user_intention.intention_type]}",
|
||||
chatbot=chatbot, history=history, delay=0)
|
||||
|
||||
# 用户意图: 修改本项目的配置
|
||||
if user_intention.intention_type == 'ModifyConfiguration':
|
||||
yield from modify_configuration_reboot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention)
|
||||
|
||||
# 用户意图: 调度插件
|
||||
if user_intention.intention_type == 'ExecutePlugin':
|
||||
yield from execute_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention)
|
||||
|
||||
# 用户意图: 聊天
|
||||
if user_intention.intention_type == 'Chat':
|
||||
yield from chat(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention)
|
||||
|
||||
return
|
||||
|
||||
146 crazy_functions/解析JupyterNotebook.py Normal file
@@ -0,0 +1,146 @@
|
||||
from toolbox import update_ui
|
||||
from toolbox import CatchException, report_execption, write_results_to_file
|
||||
fast_debug = True
|
||||
|
||||
|
||||
class PaperFileGroup():
|
||||
def __init__(self):
|
||||
self.file_paths = []
|
||||
self.file_contents = []
|
||||
self.sp_file_contents = []
|
||||
self.sp_file_index = []
|
||||
self.sp_file_tag = []
|
||||
|
||||
# count_token
|
||||
from request_llm.bridge_all import model_info
|
||||
enc = model_info["gpt-3.5-turbo"]['tokenizer']
|
||||
def get_token_num(txt): return len(
|
||||
enc.encode(txt, disallowed_special=()))
|
||||
self.get_token_num = get_token_num
|
||||
|
||||
def run_file_split(self, max_token_limit=1900):
|
||||
"""
|
||||
将长文本分离开来
|
||||
"""
|
||||
for index, file_content in enumerate(self.file_contents):
|
||||
if self.get_token_num(file_content) < max_token_limit:
|
||||
self.sp_file_contents.append(file_content)
|
||||
self.sp_file_index.append(index)
|
||||
self.sp_file_tag.append(self.file_paths[index])
|
||||
else:
|
||||
from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
|
||||
segments = breakdown_txt_to_satisfy_token_limit_for_pdf(
|
||||
file_content, self.get_token_num, max_token_limit)
|
||||
for j, segment in enumerate(segments):
|
||||
self.sp_file_contents.append(segment)
|
||||
self.sp_file_index.append(index)
|
||||
self.sp_file_tag.append(
|
||||
self.file_paths[index] + f".part-{j}.txt")
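# Usage sketch (the limit is the one used by ipynb解释 below): after file_paths/file_contents are filled,
#   pfg.run_file_split(max_token_limit=1024)
# populates sp_file_contents with fragments, sp_file_index with each fragment's source-file index,
# and sp_file_tag with a per-fragment label such as "path/to/file.ipynb.part-0.txt".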
|
||||
|
||||
|
||||
|
||||
def parseNotebook(filename, enable_markdown=1):
|
||||
import json
|
||||
|
||||
CodeBlocks = []
|
||||
with open(filename, 'r', encoding='utf-8', errors='replace') as f:
|
||||
notebook = json.load(f)
|
||||
for cell in notebook['cells']:
|
||||
if cell['cell_type'] == 'code' and cell['source']:
|
||||
# remove blank lines
|
||||
cell['source'] = [line for line in cell['source'] if line.strip()
|
||||
!= '']
|
||||
CodeBlocks.append("".join(cell['source']))
|
||||
elif enable_markdown and cell['cell_type'] == 'markdown' and cell['source']:
|
||||
cell['source'] = [line for line in cell['source'] if line.strip()
|
||||
!= '']
|
||||
CodeBlocks.append("Markdown:"+"".join(cell['source']))
|
||||
|
||||
Code = ""
|
||||
for idx, code in enumerate(CodeBlocks):
|
||||
Code += f"This is code block {idx+1}: \n"
|
||||
Code += code+"\n"
|
||||
|
||||
return Code
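# For reference, the minimal .ipynb JSON shape this parser relies on (standard notebook format, values illustrative):
# {
#   "cells": [
#     {"cell_type": "markdown", "source": ["# Load data\n"]},
#     {"cell_type": "code",     "source": ["import pandas as pd\n", "df = pd.read_csv('data.csv')\n"]}
#   ]
# }
# Markdown cells are prefixed with "Markdown:" when enable_markdown is truthy, and every kept cell
# becomes one numbered block in the returned string.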
|
||||
|
||||
|
||||
def ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
|
||||
from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
|
||||
|
||||
if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
|
||||
enable_markdown = plugin_kwargs.get("advanced_arg", "1")
|
||||
try:
|
||||
enable_markdown = int(enable_markdown)
|
||||
except ValueError:
|
||||
enable_markdown = 1
|
||||
|
||||
pfg = PaperFileGroup()
|
||||
|
||||
for fp in file_manifest:
|
||||
file_content = parseNotebook(fp, enable_markdown=enable_markdown)
|
||||
pfg.file_paths.append(fp)
|
||||
pfg.file_contents.append(file_content)
|
||||
|
||||
# <-------- 拆分过长的IPynb文件 ---------->
|
||||
pfg.run_file_split(max_token_limit=1024)
|
||||
n_split = len(pfg.sp_file_contents)
|
||||
|
||||
inputs_array = [r"This is a Jupyter Notebook file, tell me about Each Block in Chinese. Focus Just On Code." +
|
||||
r"If a block starts with `Markdown`, it is a markdown block in the ipynb file. " +
|
||||
r"Start each block's analysis on a new line and number the blocks in Chinese." +
|
||||
f"\n\n{frag}" for frag in pfg.sp_file_contents]
|
||||
inputs_show_user_array = [f"{f}的分析如下" for f in pfg.sp_file_tag]
|
||||
sys_prompt_array = ["You are a professional programmer."] * n_split
|
||||
|
||||
gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
|
||||
inputs_array=inputs_array,
|
||||
inputs_show_user_array=inputs_show_user_array,
|
||||
llm_kwargs=llm_kwargs,
|
||||
chatbot=chatbot,
|
||||
history_array=[[""] for _ in range(n_split)],
|
||||
sys_prompt_array=sys_prompt_array,
|
||||
# max_workers=5, # OpenAI所允许的最大并行过载
|
||||
scroller_max_len=80
|
||||
)
|
||||
|
||||
# <-------- 整理结果,退出 ---------->
|
||||
block_result = " \n".join(gpt_response_collection)
|
||||
chatbot.append(("解析的结果如下", block_result))
|
||||
history.extend(["解析的结果如下", block_result])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
# <-------- 写入文件,退出 ---------->
|
||||
res = write_results_to_file(history)
|
||||
chatbot.append(("完成了吗?", res))
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
@CatchException
|
||||
def 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
chatbot.append([
|
||||
"函数插件功能?",
|
||||
"对IPynb文件进行解析。Contributor: codycjy."])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
history = [] # 清空历史
|
||||
import glob
|
||||
import os
|
||||
if os.path.exists(txt):
|
||||
project_folder = txt
|
||||
else:
|
||||
if txt == "":
|
||||
txt = '空空如也的输入栏'
|
||||
report_execption(chatbot, history,
|
||||
a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
if txt.endswith('.ipynb'):
|
||||
file_manifest = [txt]
|
||||
else:
|
||||
file_manifest = [f for f in glob.glob(
|
||||
f'{project_folder}/**/*.ipynb', recursive=True)]
|
||||
if len(file_manifest) == 0:
|
||||
report_execption(chatbot, history,
|
||||
a=f"解析项目: {txt}", b=f"找不到任何.ipynb文件: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
yield from ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, )
|
||||
@@ -1,11 +1,13 @@
|
||||
from toolbox import update_ui
|
||||
from toolbox import CatchException, report_execption, write_results_to_file
|
||||
from .crazy_utils import input_clipping
|
||||
|
||||
def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
|
||||
import os, copy
|
||||
from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
|
||||
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
|
||||
msg = '正常'
|
||||
summary_batch_isolation = True
|
||||
inputs_array = []
|
||||
inputs_show_user_array = []
|
||||
history_array = []
|
||||
@@ -58,20 +60,38 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
|
||||
# 把“请对下面的程序文件做一个概述” 替换成 精简的 "文件名:{all_file[index]}"
|
||||
for index, content in enumerate(this_iteration_gpt_response_collection):
|
||||
if index%2==0: this_iteration_gpt_response_collection[index] = f"{file_rel_path[index//2]}" # 只保留文件名节省token
|
||||
previous_iteration_files.extend([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)])
|
||||
this_iteration_files = [os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)]
|
||||
previous_iteration_files.extend(this_iteration_files)
|
||||
previous_iteration_files_string = ', '.join(previous_iteration_files)
|
||||
current_iteration_focus = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)])
|
||||
i_say = f'根据以上分析,对程序的整体功能和构架重新做出概括。然后用一张markdown表格整理每个文件的功能(包括{previous_iteration_files_string})。'
|
||||
current_iteration_focus = ', '.join(this_iteration_files)
|
||||
if summary_batch_isolation: focus = current_iteration_focus
|
||||
else: focus = previous_iteration_files_string
|
||||
i_say = f'用一张Markdown表格简要描述以下文件的功能:{focus}。根据以上分析,用一句话概括程序的整体功能。'
|
||||
if last_iteration_result != "":
|
||||
sys_prompt_additional = "已知某些代码的局部作用是:" + last_iteration_result + "\n请继续分析其他源代码,从而更全面地理解项目的整体功能。"
|
||||
else:
|
||||
sys_prompt_additional = ""
|
||||
inputs_show_user = f'根据以上分析,对程序的整体功能和构架重新做出概括,由于输入长度限制,可能需要分组处理,本组文件为 {current_iteration_focus} + 已经汇总的文件组。'
|
||||
this_iteration_history = copy.deepcopy(this_iteration_gpt_response_collection)
|
||||
this_iteration_history.append(last_iteration_result)
|
||||
# 裁剪input
|
||||
inputs, this_iteration_history_feed = input_clipping(inputs=i_say, history=this_iteration_history, max_token_limit=2560)
|
||||
result = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
||||
inputs=i_say, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
|
||||
history=this_iteration_history, # 迭代之前的分析
|
||||
sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。")
|
||||
report_part_2.extend([i_say, result])
|
||||
last_iteration_result = result
|
||||
inputs=inputs, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
|
||||
history=this_iteration_history_feed, # 迭代之前的分析
|
||||
sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。" + sys_prompt_additional)
|
||||
|
||||
summary = "请用一句话概括这些文件的整体功能"
|
||||
summary_result = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
||||
inputs=summary,
|
||||
inputs_show_user=summary,
|
||||
llm_kwargs=llm_kwargs,
|
||||
chatbot=chatbot,
|
||||
history=[i_say, result], # 迭代之前的分析
|
||||
sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。" + sys_prompt_additional)
|
||||
|
||||
report_part_2.extend([i_say, result])
|
||||
last_iteration_result = summary_result
|
||||
file_manifest = file_manifest[batchsize:]
|
||||
gpt_response_collection = gpt_response_collection[batchsize*2:]
|
||||
|
||||
@@ -180,7 +200,7 @@ def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys
|
||||
|
||||
|
||||
@CatchException
|
||||
def 解析一个Rect项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
history = [] # 清空历史,以免输入溢出
|
||||
import glob, os
|
||||
if os.path.exists(txt):
|
||||
@@ -194,9 +214,15 @@ def 解析一个Rect项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys
|
||||
[f for f in glob.glob(f'{project_folder}/**/*.tsx', recursive=True)] + \
|
||||
[f for f in glob.glob(f'{project_folder}/**/*.json', recursive=True)] + \
|
||||
[f for f in glob.glob(f'{project_folder}/**/*.js', recursive=True)] + \
|
||||
[f for f in glob.glob(f'{project_folder}/**/*.vue', recursive=True)] + \
|
||||
[f for f in glob.glob(f'{project_folder}/**/*.less', recursive=True)] + \
|
||||
[f for f in glob.glob(f'{project_folder}/**/*.sass', recursive=True)] + \
|
||||
[f for f in glob.glob(f'{project_folder}/**/*.wxml', recursive=True)] + \
|
||||
[f for f in glob.glob(f'{project_folder}/**/*.wxss', recursive=True)] + \
|
||||
[f for f in glob.glob(f'{project_folder}/**/*.css', recursive=True)] + \
|
||||
[f for f in glob.glob(f'{project_folder}/**/*.jsx', recursive=True)]
|
||||
if len(file_manifest) == 0:
|
||||
report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何Rect文件: {txt}")
|
||||
report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何前端相关文件: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
|
||||
@@ -223,6 +249,25 @@ def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
|
||||
return
|
||||
yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
|
||||
|
||||
@CatchException
|
||||
def 解析一个Rust项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
history = [] # 清空历史,以免输入溢出
|
||||
import glob, os
|
||||
if os.path.exists(txt):
|
||||
project_folder = txt
|
||||
else:
|
||||
if txt == "": txt = '空空如也的输入栏'
|
||||
report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.rs', recursive=True)] + \
|
||||
[f for f in glob.glob(f'{project_folder}/**/*.toml', recursive=True)] + \
|
||||
[f for f in glob.glob(f'{project_folder}/**/*.lock', recursive=True)]
|
||||
if len(file_manifest) == 0:
|
||||
report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何Rust文件: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
|
||||
|
||||
@CatchException
|
||||
def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
@@ -264,3 +309,44 @@ def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
|
||||
|
||||
|
||||
@CatchException
|
||||
def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
txt_pattern = plugin_kwargs.get("advanced_arg")
|
||||
txt_pattern = txt_pattern.replace(",", ",")
|
||||
# 将要匹配的模式(例如: *.c, *.cpp, *.py, config.toml)
|
||||
pattern_include = [_.lstrip(" ,").rstrip(" ,") for _ in txt_pattern.split(",") if _ != "" and not _.strip().startswith("^")]
|
||||
if not pattern_include: pattern_include = ["*"] # 不输入即全部匹配
|
||||
# 将要忽略匹配的文件后缀(例如: ^*.c, ^*.cpp, ^*.py)
|
||||
pattern_except_suffix = [_.lstrip(" ^*.,").rstrip(" ,") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^*.")]
|
||||
pattern_except_suffix += ['zip', 'rar', '7z', 'tar', 'gz'] # 避免解析压缩文件
|
||||
# 将要忽略匹配的文件名(例如: ^README.md)
|
||||
pattern_except_name = [_.lstrip(" ^*,").rstrip(" ,").replace(".", "\.") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^") and not _.strip().startswith("^*.")]
|
||||
# 生成正则表达式
|
||||
pattern_except = '/[^/]+\.(' + "|".join(pattern_except_suffix) + ')$'
|
||||
pattern_except += '|/(' + "|".join(pattern_except_name) + ')$' if pattern_except_name != [] else ''
|
||||
|
||||
history.clear()
|
||||
import glob, os, re
|
||||
if os.path.exists(txt):
|
||||
project_folder = txt
|
||||
else:
|
||||
if txt == "": txt = '空空如也的输入栏'
|
||||
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
# 若上传压缩文件, 先寻找到解压的文件夹路径, 从而避免解析压缩文件
|
||||
maybe_dir = [f for f in glob.glob(f'{project_folder}/*') if os.path.isdir(f)]
|
||||
if len(maybe_dir)>0 and maybe_dir[0].endswith('.extract'):
|
||||
extract_folder_path = maybe_dir[0]
|
||||
else:
|
||||
extract_folder_path = project_folder
|
||||
# 按输入的匹配模式寻找上传的非压缩文件和已解压的文件
|
||||
file_manifest = [f for pattern in pattern_include for f in glob.glob(f'{extract_folder_path}/**/{pattern}', recursive=True) if "" != extract_folder_path and \
|
||||
os.path.isfile(f) and (not re.search(pattern_except, f) or pattern.endswith('.' + re.search(pattern_except, f).group().split('.')[-1]))]
|
||||
if len(file_manifest) == 0:
|
||||
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何文件: {txt}")
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
|
||||
@@ -6,7 +6,7 @@ def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
|
||||
"""
|
||||
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
|
||||
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
|
||||
plugin_kwargs 插件模型的参数,如温度和top_p等,一般原样传递下去就行
|
||||
plugin_kwargs 插件模型的参数,用于灵活调整复杂功能的各种参数
|
||||
chatbot 聊天显示框的句柄,用于显示给用户
|
||||
history 聊天历史,前情提要
|
||||
system_prompt 给gpt的静默提醒
|
||||
@@ -28,3 +28,35 @@ def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
|
||||
history.append(txt)
|
||||
history.append(gpt_say)
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
|
||||
|
||||
|
||||
@CatchException
def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    """
    txt             user input from the text box, e.g. a passage to translate, or a path containing files to process
    llm_kwargs      GPT model parameters such as temperature and top_p; usually passed through unchanged
    plugin_kwargs   plugin parameters, used to flexibly tune the behaviour of complex plugins
    chatbot         handle of the chat display box, used to show output to the user
    history         chat history, i.e. the preceding context
    system_prompt   the silent system prompt for GPT
    web_port        the port this software is currently running on
    """
    history = []    # clear history to avoid overflowing the input

    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    # llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # any number of LLM backends, joined with '&'
    llm_kwargs['llm_model'] = plugin_kwargs.get("advanced_arg", 'chatglm&gpt-3.5-turbo')  # any number of LLM backends, joined with '&'

    chatbot.append((txt, f"正在同时咨询{llm_kwargs['llm_model']}"))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI first, since the GPT request takes a while

    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=txt, inputs_show_user=txt,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
        sys_prompt=system_prompt,
        retry_times_at_unknown_error=0
    )

    history.append(txt)
    history.append(gpt_say)
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
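To make the `advanced_arg` format concrete, a small sketch (the model list is illustrative; per the comment above, the downstream model bridge splits the '&'-joined string and queries each named backend):

```python
# Hypothetical plugin_kwargs for 同时问询_指定模型: query three backends at once
plugin_kwargs = {"advanced_arg": "chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo"}
llm_model = plugin_kwargs.get("advanced_arg", 'chatglm&gpt-3.5-turbo')
print(llm_model.split('&'))   # ['chatglm', 'gpt-3.5-turbo', 'api2d-gpt-3.5-turbo']
```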
|
||||
195
crazy_functions/语音助手.py
Normal file
@@ -0,0 +1,195 @@
|
||||
from toolbox import update_ui
|
||||
from toolbox import CatchException, get_conf, markdown_convertion
|
||||
from crazy_functions.crazy_utils import input_clipping
|
||||
from request_llm.bridge_all import predict_no_ui_long_connection
|
||||
import threading, time
|
||||
import numpy as np
|
||||
from .live_audio.aliyunASR import AliyunASR
|
||||
import json
|
||||
|
||||
class WatchDog():
|
||||
def __init__(self, timeout, bark_fn, interval=3, msg="") -> None:
|
||||
self.last_feed = None
|
||||
self.timeout = timeout
|
||||
self.bark_fn = bark_fn
|
||||
self.interval = interval
|
||||
self.msg = msg
|
||||
self.kill_dog = False
|
||||
|
||||
def watch(self):
|
||||
while True:
|
||||
if self.kill_dog: break
|
||||
if time.time() - self.last_feed > self.timeout:
|
||||
if len(self.msg) > 0: print(self.msg)
|
||||
self.bark_fn()
|
||||
break
|
||||
time.sleep(self.interval)
|
||||
|
||||
def begin_watch(self):
|
||||
self.last_feed = time.time()
|
||||
th = threading.Thread(target=self.watch)
|
||||
th.daemon = True
|
||||
th.start()
|
||||
|
||||
def feed(self):
|
||||
self.last_feed = time.time()
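A minimal usage sketch of the WatchDog class defined above (timeout, interval, and the feeding loop are illustrative values, not taken from the plugin):

```python
import time

def on_timeout():
    print("watchdog fired: no feed() within the timeout")

wd = WatchDog(timeout=2.0, bark_fn=on_timeout, interval=0.2)
wd.begin_watch()        # start the daemon watcher thread and record the first feed time
for _ in range(3):
    time.sleep(0.5)
    wd.feed()           # regular feeding keeps the watchdog quiet
time.sleep(3.0)         # stop feeding: bark_fn runs once and the watcher thread exits
```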
|
||||
|
||||
def chatbot2history(chatbot):
|
||||
history = []
|
||||
for c in chatbot:
|
||||
for q in c:
|
||||
if q not in ["[请讲话]", "[等待GPT响应]", "[正在等您说完问题]"]:
|
||||
history.append(q.strip('<div class="markdown-body">').strip('</div>').strip('<p>').strip('</p>'))
|
||||
return history
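An illustration of what chatbot2history produces (the chatbot contents are hypothetical): placeholder pairs are dropped and the markdown wrapper tags are peeled off each remaining entry:

```python
demo_chatbot = [
    ("[请讲话]", "[正在等您说完问题]"),   # placeholder pair -> skipped entirely
    ("什么是注意力机制?", '<div class="markdown-body"><p>一种加权求和的机制</p></div>'),
]
print(chatbot2history(demo_chatbot))
# -> ['什么是注意力机制?', '一种加权求和的机制']
```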
|
||||
|
||||
class AsyncGptTask():
|
||||
def __init__(self) -> None:
|
||||
self.observe_future = []
|
||||
self.observe_future_chatbot_index = []
|
||||
|
||||
def gpt_thread_worker(self, i_say, llm_kwargs, history, sys_prompt, observe_window, index):
|
||||
try:
|
||||
MAX_TOKEN_ALLO = 2560
|
||||
i_say, history = input_clipping(i_say, history, max_token_limit=MAX_TOKEN_ALLO)
|
||||
gpt_say_partial = predict_no_ui_long_connection(inputs=i_say, llm_kwargs=llm_kwargs, history=history, sys_prompt=sys_prompt,
|
||||
observe_window=observe_window[index], console_slience=True)
|
||||
except ConnectionAbortedError as token_exceed_err:
|
||||
print('至少一个线程任务Token溢出而失败', token_exceed_err)
|
||||
except Exception as e:
|
||||
print('至少一个线程任务意外失败', e)
|
||||
|
||||
def add_async_gpt_task(self, i_say, chatbot_index, llm_kwargs, history, system_prompt):
|
||||
self.observe_future.append([""])
|
||||
self.observe_future_chatbot_index.append(chatbot_index)
|
||||
cur_index = len(self.observe_future)-1
|
||||
th_new = threading.Thread(target=self.gpt_thread_worker, args=(i_say, llm_kwargs, history, system_prompt, self.observe_future, cur_index))
|
||||
th_new.daemon = True
|
||||
th_new.start()
|
||||
|
||||
def update_chatbot(self, chatbot):
|
||||
for of, ofci in zip(self.observe_future, self.observe_future_chatbot_index):
|
||||
try:
|
||||
chatbot[ofci] = list(chatbot[ofci])
|
||||
chatbot[ofci][1] = markdown_convertion(of[0])
|
||||
except:
|
||||
self.observe_future = []
|
||||
self.observe_future_chatbot_index = []
|
||||
return chatbot
|
||||
|
||||
class InterviewAssistant(AliyunASR):
|
||||
def __init__(self):
|
||||
self.capture_interval = 0.5 # second
|
||||
self.stop = False
|
||||
self.parsed_text = ""
|
||||
self.parsed_sentence = ""
|
||||
self.buffered_sentence = ""
|
||||
self.event_on_result_chg = threading.Event()
|
||||
self.event_on_entence_end = threading.Event()
|
||||
self.event_on_commit_question = threading.Event()
|
||||
|
||||
def __del__(self):
|
||||
self.stop = True
|
||||
self.stop_msg = ""
|
||||
self.commit_wd.kill_dog = True
|
||||
self.plugin_wd.kill_dog = True
|
||||
|
||||
def init(self, chatbot):
|
||||
# 初始化音频采集线程
|
||||
self.captured_audio = np.array([])
|
||||
self.keep_latest_n_second = 10
|
||||
self.commit_after_pause_n_second = 2.0
|
||||
self.ready_audio_flagment = None
|
||||
self.stop = False
|
||||
self.plugin_wd = WatchDog(timeout=5, bark_fn=self.__del__, msg="程序终止")
|
||||
self.aut = threading.Thread(target=self.audio_convertion_thread, args=(chatbot._cookies['uuid'],))
|
||||
self.aut.daemon = True
|
||||
self.aut.start()
|
||||
# th2 = threading.Thread(target=self.audio2txt_thread, args=(chatbot._cookies['uuid'],))
|
||||
# th2.daemon = True
|
||||
# th2.start()
|
||||
|
||||
def no_audio_for_a_while(self):
|
||||
if len(self.buffered_sentence) < 7: # 如果一句话小于7个字,暂不提交
|
||||
self.commit_wd.begin_watch()
|
||||
else:
|
||||
self.event_on_commit_question.set()
|
||||
|
||||
def begin(self, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
|
||||
# main plugin function
|
||||
self.init(chatbot)
|
||||
chatbot.append(["[请讲话]", "[正在等您说完问题]"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
self.plugin_wd.begin_watch()
|
||||
self.agt = AsyncGptTask()
|
||||
self.commit_wd = WatchDog(timeout=self.commit_after_pause_n_second, bark_fn=self.no_audio_for_a_while, interval=0.2)
|
||||
self.commit_wd.begin_watch()
|
||||
|
||||
while not self.stop:
|
||||
self.event_on_result_chg.wait(timeout=0.25) # run once every 0.25 second
|
||||
chatbot = self.agt.update_chatbot(chatbot) # 将子线程的gpt结果写入chatbot
|
||||
history = chatbot2history(chatbot)
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
self.plugin_wd.feed()
|
||||
|
||||
if self.event_on_result_chg.is_set():
|
||||
# update audio decode result
|
||||
self.event_on_result_chg.clear()
|
||||
chatbot[-1] = list(chatbot[-1])
|
||||
chatbot[-1][0] = self.buffered_sentence + self.parsed_text
|
||||
history = chatbot2history(chatbot)
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
self.commit_wd.feed()
|
||||
|
||||
if self.event_on_entence_end.is_set():
|
||||
# called when a sentence has ended
|
||||
self.event_on_entence_end.clear()
|
||||
self.parsed_text = self.parsed_sentence
|
||||
self.buffered_sentence += self.parsed_sentence
|
||||
|
||||
if self.event_on_commit_question.is_set():
|
||||
# called when a question should be commited
|
||||
self.event_on_commit_question.clear()
|
||||
if len(self.buffered_sentence) == 0: raise RuntimeError
|
||||
|
||||
self.commit_wd.begin_watch()
|
||||
chatbot[-1] = list(chatbot[-1])
|
||||
chatbot[-1] = [self.buffered_sentence, "[等待GPT响应]"]
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
# add gpt task 创建子线程请求gpt,避免线程阻塞
|
||||
history = chatbot2history(chatbot)
|
||||
self.agt.add_async_gpt_task(self.buffered_sentence, len(chatbot)-1, llm_kwargs, history, system_prompt)
|
||||
|
||||
self.buffered_sentence = ""
|
||||
chatbot.append(["[请讲话]", "[正在等您说完问题]"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
if len(self.stop_msg) != 0:
|
||||
raise RuntimeError(self.stop_msg)
|
||||
|
||||
|
||||
|
||||
@CatchException
|
||||
def 语音助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
|
||||
# pip install -U openai-whisper
|
||||
chatbot.append(["对话助手函数插件:使用时,双手离开鼠标键盘吧", "音频助手, 正在听您讲话(点击“停止”键可终止程序)..."])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
# 尝试导入依赖,如果缺少依赖,则给出安装建议
|
||||
try:
|
||||
import nls
|
||||
from scipy import io
|
||||
except:
|
||||
chatbot.append(["导入依赖失败", "使用该模块需要额外依赖, 安装方法:```pip install --upgrade aliyun-python-sdk-core==2.13.3 pyOpenSSL scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git```"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
APPKEY = get_conf('ALIYUN_APPKEY')
|
||||
if APPKEY == "":
|
||||
chatbot.append(["导入依赖失败", "没有阿里云语音识别APPKEY和TOKEN, 详情见https://help.aliyun.com/document_detail/450255.html"])
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
return
|
||||
|
||||
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
|
||||
ia = InterviewAssistant()
|
||||
yield from ia.begin(llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
|
||||
|
||||
@@ -36,14 +36,18 @@ def get_meta_information(url, chatbot, history):
|
||||
max_results = 1,
|
||||
sort_by = arxiv.SortCriterion.Relevance,
|
||||
)
|
||||
paper = next(search.results())
|
||||
if string_similar(title, paper.title) > 0.90: # same paper
|
||||
abstract = paper.summary.replace('\n', ' ')
|
||||
is_paper_in_arxiv = True
|
||||
else: # different paper
|
||||
try:
|
||||
paper = next(search.results())
|
||||
if string_similar(title, paper.title) > 0.90: # same paper
|
||||
abstract = paper.summary.replace('\n', ' ')
|
||||
is_paper_in_arxiv = True
|
||||
else: # different paper
|
||||
abstract = abstract
|
||||
is_paper_in_arxiv = False
|
||||
paper = next(search.results())
|
||||
except:
|
||||
abstract = abstract
|
||||
is_paper_in_arxiv = False
|
||||
paper = next(search.results())
|
||||
print(title)
|
||||
print(author)
|
||||
print(citation)
|
||||
@@ -70,6 +74,7 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
|
||||
# 尝试导入依赖,如果缺少依赖,则给出安装建议
|
||||
try:
|
||||
import arxiv
|
||||
import math
|
||||
from bs4 import BeautifulSoup
|
||||
except:
|
||||
report_execption(chatbot, history,
|
||||
@@ -80,25 +85,26 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
|
||||
|
||||
# 清空历史,以免输入溢出
|
||||
history = []
|
||||
|
||||
meta_paper_info_list = yield from get_meta_information(txt, chatbot, history)
|
||||
batchsize = 5
|
||||
for batch in range(math.ceil(len(meta_paper_info_list)/batchsize)):
|
||||
if len(meta_paper_info_list[:batchsize]) > 0:
|
||||
i_say = "下面是一些学术文献的数据,提取出以下内容:" + \
|
||||
"1、英文题目;2、中文题目翻译;3、作者;4、arxiv公开(is_paper_in_arxiv);4、引用数量(cite);5、中文摘要翻译。" + \
|
||||
f"以下是信息源:{str(meta_paper_info_list[:batchsize])}"
|
||||
|
||||
if len(meta_paper_info_list[:10]) > 0:
|
||||
i_say = "下面是一些学术文献的数据,请从中提取出以下内容。" + \
|
||||
"1、英文题目;2、中文题目翻译;3、作者;4、arxiv公开(is_paper_in_arxiv);4、引用数量(cite);5、中文摘要翻译。" + \
|
||||
f"以下是信息源:{str(meta_paper_info_list[:10])}"
|
||||
inputs_show_user = f"请分析此页面中出现的所有文章:{txt},这是第{batch+1}批"
|
||||
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
||||
inputs=i_say, inputs_show_user=inputs_show_user,
|
||||
llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
|
||||
sys_prompt="你是一个学术翻译,请从数据中提取信息。你必须使用Markdown表格。你必须逐个文献进行处理。"
|
||||
)
|
||||
|
||||
inputs_show_user = f"请分析此页面中出现的所有文章:{txt}"
|
||||
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
|
||||
inputs=i_say, inputs_show_user=inputs_show_user,
|
||||
llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
|
||||
sys_prompt="你是一个学术翻译,请从数据中提取信息。你必须使用Markdown格式。你必须逐个文献进行处理。"
|
||||
)
|
||||
history.extend([ f"第{batch+1}批", gpt_say ])
|
||||
meta_paper_info_list = meta_paper_info_list[batchsize:]
|
||||
|
||||
history.extend([ "第一批", gpt_say ])
|
||||
meta_paper_info_list = meta_paper_info_list[10:]
|
||||
|
||||
chatbot.append(["状态?", "已经全部完成"])
|
||||
chatbot.append(["状态?",
|
||||
"已经全部完成,您可以试试让AI写一个Related Works,例如您可以继续输入Write a \"Related Works\" section about \"你搜索的研究领域\" for me."])
|
||||
msg = '正常'
|
||||
yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
|
||||
res = write_results_to_file(history)
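The batching logic introduced above walks the paper list in fixed-size slices. A self-contained sketch of that pattern with stand-in data (the list contents are hypothetical):

```python
import math

meta_paper_info_list = [f"paper-{i}" for i in range(12)]   # stand-in for the scraped paper metadata
batchsize = 5
for batch in range(math.ceil(len(meta_paper_info_list) / batchsize)):
    current_batch = meta_paper_info_list[:batchsize]
    print(f"第{batch + 1}批: {current_batch}")              # batches of 5, 5 and 2 papers
    meta_paper_info_list = meta_paper_info_list[batchsize:]
```

Note that the range is computed once from the original list length, so shrinking the list inside the loop does not change the number of iterations; this matches the plugin code above.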
|
||||
|
||||
43
crazy_functions/辅助功能.py
Normal file
@@ -0,0 +1,43 @@
|
||||
# encoding: utf-8
# @Time   : 2023/4/19
# @Author : Spike
# @Descr  :
from toolbox import update_ui
from toolbox import CatchException, report_execption, write_results_to_file, get_log_folder
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive


@CatchException
def 猜你想问(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    if txt:
        show_say = txt
        prompt = txt + '\n回答完问题后,再列出用户可能提出的三个问题。'
    else:
        prompt = history[-1] + "\n分析上述回答,再列出用户可能提出的三个问题。"
        show_say = '分析上述回答,再列出用户可能提出的三个问题。'
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=prompt,
        inputs_show_user=show_say,
        llm_kwargs=llm_kwargs,
        chatbot=chatbot,
        history=history,
        sys_prompt=system_prompt
    )
    chatbot[-1] = (show_say, gpt_say)
    history.extend([show_say, gpt_say])
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI


@CatchException
def 清除缓存(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    chatbot.append(['清除本地缓存数据', '执行中. 删除 gpt_log & private_upload'])
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI

    import shutil, os
    gpt_log_dir = os.path.join(os.path.dirname(__file__), '..', 'gpt_log')
    private_upload_dir = os.path.join(os.path.dirname(__file__), '..', 'private_upload')
    shutil.rmtree(gpt_log_dir, ignore_errors=True)
    shutil.rmtree(private_upload_dir, ignore_errors=True)

    chatbot.append(['清除本地缓存数据', '执行完成'])
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
|
||||
@@ -6,7 +6,7 @@ def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
|
||||
"""
|
||||
txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
|
||||
llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
|
||||
plugin_kwargs 插件模型的参数,暂时没有用武之地
|
||||
plugin_kwargs 插件模型的参数,用于灵活调整复杂功能的各种参数
|
||||
chatbot 聊天显示框的句柄,用于显示给用户
|
||||
history 聊天历史,前情提要
|
||||
system_prompt 给gpt的静默提醒
|
||||
|
||||
155
docker-compose.yml
Normal file
@@ -0,0 +1,155 @@
|
||||
#【请修改完参数后,删除此行】请在以下方案中选择一种,然后删除其他的方案,最后用 docker-compose up 运行 | Please choose ONE of the schemes below, delete the others, then delete this line and run docker-compose up
|
||||
|
||||
## ===================================================
|
||||
## 【方案一】 如果不需要运行本地模型(仅 chatgpt, azure, 星火, 千帆, claude 等在线大模型服务)
|
||||
## ===================================================
|
||||
version: '3'
|
||||
services:
|
||||
gpt_academic_nolocalllms:
|
||||
image: ghcr.io/binary-husky/gpt_academic_nolocal:master # (Auto Built by Dockerfile: docs/GithubAction+NoLocal)
|
||||
environment:
|
||||
# 请查阅 `config.py` 以查看所有的配置信息
|
||||
API_KEY: ' sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx '
|
||||
USE_PROXY: ' True '
|
||||
proxies: ' { "http": "socks5h://localhost:10880", "https": "socks5h://localhost:10880", } '
|
||||
LLM_MODEL: ' gpt-3.5-turbo '
|
||||
AVAIL_LLM_MODELS: ' ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "sparkv2", "qianfan"] '
|
||||
WEB_PORT: ' 22303 '
|
||||
ADD_WAIFU: ' True '
|
||||
# THEME: ' Chuanhu-Small-and-Beautiful '
|
||||
# DEFAULT_WORKER_NUM: ' 10 '
|
||||
# AUTHENTICATION: ' [("username", "passwd"), ("username2", "passwd2")] '
|
||||
|
||||
# 与宿主的网络融合
|
||||
network_mode: "host"
|
||||
|
||||
# 不使用代理网络拉取最新代码
|
||||
command: >
|
||||
bash -c "python3 -u main.py"
|
||||
|
||||
|
||||
### ===================================================
|
||||
### 【方案二】 如果需要运行ChatGLM + Qwen + MOSS等本地模型
|
||||
### ===================================================
|
||||
version: '3'
|
||||
services:
|
||||
gpt_academic_with_chatglm:
|
||||
image: ghcr.io/binary-husky/gpt_academic_chatglm_moss:master # (Auto Built by Dockerfile: docs/Dockerfile+ChatGLM)
|
||||
environment:
|
||||
# 请查阅 `config.py` 以查看所有的配置信息
|
||||
API_KEY: ' sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx '
|
||||
USE_PROXY: ' True '
|
||||
proxies: ' { "http": "socks5h://localhost:10880", "https": "socks5h://localhost:10880", } '
|
||||
LLM_MODEL: ' gpt-3.5-turbo '
|
||||
AVAIL_LLM_MODELS: ' ["chatglm", "qwen", "moss", "gpt-3.5-turbo", "gpt-4", "newbing"] '
|
||||
LOCAL_MODEL_DEVICE: ' cuda '
|
||||
DEFAULT_WORKER_NUM: ' 10 '
|
||||
WEB_PORT: ' 12303 '
|
||||
ADD_WAIFU: ' True '
|
||||
# AUTHENTICATION: ' [("username", "passwd"), ("username2", "passwd2")] '
|
||||
|
||||
# 显卡的使用,nvidia0指第0个GPU
|
||||
runtime: nvidia
|
||||
devices:
|
||||
- /dev/nvidia0:/dev/nvidia0
|
||||
|
||||
# 与宿主的网络融合
|
||||
network_mode: "host"
|
||||
command: >
|
||||
bash -c "python3 -u main.py"
|
||||
|
||||
# P.S. 通过对 command 进行微调,可以便捷地安装额外的依赖
|
||||
# command: >
|
||||
# bash -c "pip install -r request_llm/requirements_qwen.txt && python3 -u main.py"
|
||||
|
||||
### ===================================================
|
||||
### 【方案三】 如果需要运行ChatGPT + LLAMA + 盘古 + RWKV本地模型
|
||||
### ===================================================
|
||||
version: '3'
|
||||
services:
|
||||
gpt_academic_with_rwkv:
|
||||
image: ghcr.io/binary-husky/gpt_academic_jittorllms:master
|
||||
environment:
|
||||
# 请查阅 `config.py` 以查看所有的配置信息
|
||||
API_KEY: ' sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,fkxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx '
|
||||
USE_PROXY: ' True '
|
||||
proxies: ' { "http": "socks5h://localhost:10880", "https": "socks5h://localhost:10880", } '
|
||||
LLM_MODEL: ' gpt-3.5-turbo '
|
||||
AVAIL_LLM_MODELS: ' ["gpt-3.5-turbo", "newbing", "jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"] '
|
||||
LOCAL_MODEL_DEVICE: ' cuda '
|
||||
DEFAULT_WORKER_NUM: ' 10 '
|
||||
WEB_PORT: ' 12305 '
|
||||
ADD_WAIFU: ' True '
|
||||
# AUTHENTICATION: ' [("username", "passwd"), ("username2", "passwd2")] '
|
||||
|
||||
# 显卡的使用,nvidia0指第0个GPU
|
||||
runtime: nvidia
|
||||
devices:
|
||||
- /dev/nvidia0:/dev/nvidia0
|
||||
|
||||
# 与宿主的网络融合
|
||||
network_mode: "host"
|
||||
|
||||
# 不使用代理网络拉取最新代码
|
||||
command: >
|
||||
python3 -u main.py
|
||||
|
||||
|
||||
## ===================================================
|
||||
## 【方案四】 ChatGPT + Latex
|
||||
## ===================================================
|
||||
version: '3'
|
||||
services:
|
||||
gpt_academic_with_latex:
|
||||
image: ghcr.io/binary-husky/gpt_academic_with_latex:master # (Auto Built by Dockerfile: docs/GithubAction+NoLocal+Latex)
|
||||
environment:
|
||||
# 请查阅 `config.py` 以查看所有的配置信息
|
||||
API_KEY: ' sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx '
|
||||
USE_PROXY: ' True '
|
||||
proxies: ' { "http": "socks5h://localhost:10880", "https": "socks5h://localhost:10880", } '
|
||||
LLM_MODEL: ' gpt-3.5-turbo '
|
||||
AVAIL_LLM_MODELS: ' ["gpt-3.5-turbo", "gpt-4"] '
|
||||
LOCAL_MODEL_DEVICE: ' cuda '
|
||||
DEFAULT_WORKER_NUM: ' 10 '
|
||||
WEB_PORT: ' 12303 '
|
||||
|
||||
# 与宿主的网络融合
|
||||
network_mode: "host"
|
||||
|
||||
# 不使用代理网络拉取最新代码
|
||||
command: >
|
||||
bash -c "python3 -u main.py"
|
||||
|
||||
|
||||
## ===================================================
|
||||
## 【方案五】 ChatGPT + 语音助手 (请先阅读 docs/use_audio.md)
|
||||
## ===================================================
|
||||
version: '3'
|
||||
services:
|
||||
gpt_academic_with_audio:
|
||||
image: ghcr.io/binary-husky/gpt_academic_audio_assistant:master
|
||||
environment:
|
||||
# 请查阅 `config.py` 以查看所有的配置信息
|
||||
API_KEY: ' fk195831-IdP0Pb3W6DCMUIbQwVX6MsSiyxwqybyS '
|
||||
USE_PROXY: ' False '
|
||||
proxies: ' None '
|
||||
LLM_MODEL: ' gpt-3.5-turbo '
|
||||
AVAIL_LLM_MODELS: ' ["gpt-3.5-turbo", "gpt-4"] '
|
||||
ENABLE_AUDIO: ' True '
|
||||
LOCAL_MODEL_DEVICE: ' cuda '
|
||||
DEFAULT_WORKER_NUM: ' 20 '
|
||||
WEB_PORT: ' 12343 '
|
||||
ADD_WAIFU: ' True '
|
||||
THEME: ' Chuanhu-Small-and-Beautiful '
|
||||
ALIYUN_APPKEY: ' RoP1ZrM84DnAFkZK '
|
||||
ALIYUN_TOKEN: ' f37f30e0f9934c34a992f6f64f7eba4f '
|
||||
# (无需填写) ALIYUN_ACCESSKEY: ' LTAI5q6BrFUzoRXVGUWnekh1 '
|
||||
# (无需填写) ALIYUN_SECRET: ' eHmI20AVWIaQZ0CiTD2bGQVsaP9i68 '
|
||||
|
||||
# 与宿主的网络融合
|
||||
network_mode: "host"
|
||||
|
||||
# 不使用代理网络拉取最新代码
|
||||
command: >
|
||||
bash -c "python3 -u main.py"
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# How to build | 如何构建: docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM .
|
||||
# How to run | 如何运行 (1) 直接运行(选择0号GPU): docker run --rm -it --net=host --gpus="0" gpt-academic
|
||||
# How to run | 如何运行 (2) 我想运行之前进容器做一些调整: docker run --rm -it --net=host --gpus="0" gpt-academic bash
|
||||
# How to run | (1) 我想直接一键运行(选择0号GPU): docker run --rm -it --net=host --gpus \"device=0\" gpt-academic
|
||||
# How to run | (2) 我想运行之前进容器做一些调整(选择1号GPU): docker run --rm -it --net=host --gpus \"device=1\" gpt-academic bash
|
||||
|
||||
# 从NVIDIA源,从而支持显卡运行(检查宿主的nvidia-smi中的cuda版本必须>=11.3)
|
||||
FROM nvidia/cuda:11.3.1-runtime-ubuntu20.04
|
||||
@@ -14,6 +14,7 @@ RUN apt-get install -y git python python3 python-dev python3-dev --fix-missing
|
||||
RUN $useProxyNetwork curl cip.cc
|
||||
RUN sed -i '$ d' /etc/proxychains.conf
|
||||
RUN sed -i '$ d' /etc/proxychains.conf
|
||||
# 在这里填写主机的代理协议(用于从github拉取代码)
|
||||
RUN echo "socks5 127.0.0.1 10880" >> /etc/proxychains.conf
|
||||
ARG useProxyNetwork=proxychains
|
||||
# # comment out above if you do not need proxy network | 如果不需要翻墙 - 从此行向上删除
|
||||
@@ -21,14 +22,15 @@ ARG useProxyNetwork=proxychains
|
||||
|
||||
# use python3 as the system default python
|
||||
RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
|
||||
|
||||
# 下载pytorch
|
||||
RUN $useProxyNetwork python3 -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
|
||||
# 下载分支
|
||||
WORKDIR /gpt
|
||||
RUN $useProxyNetwork git clone https://github.com/binary-husky/chatgpt_academic.git
|
||||
WORKDIR /gpt/chatgpt_academic
|
||||
RUN $useProxyNetwork git clone https://github.com/binary-husky/gpt_academic.git
|
||||
WORKDIR /gpt/gpt_academic
|
||||
RUN $useProxyNetwork python3 -m pip install -r requirements.txt
|
||||
RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_chatglm.txt
|
||||
RUN $useProxyNetwork python3 -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
|
||||
RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_newbing.txt
|
||||
|
||||
# 预热CHATGLM参数(非必要 可选步骤)
|
||||
RUN echo ' \n\
|
||||
@@ -48,6 +50,7 @@ RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
|
||||
# 可同时填写多个API-KEY,支持openai的key和api2d的key共存,用英文逗号分割,例如API_KEY = "sk-openaikey1,fkxxxx-api2dkey2,........"
|
||||
# LLM_MODEL 是选择初始的模型
|
||||
# LOCAL_MODEL_DEVICE 是选择chatglm等本地模型运行的设备,可选 cpu 和 cuda
|
||||
# [说明: 以下内容与`config.py`一一对应,请查阅config.py来完成以下配置的填写]
|
||||
RUN echo ' \n\
|
||||
API_KEY = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,fkxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \n\
|
||||
USE_PROXY = True \n\
|
||||
|
||||
59
docs/Dockerfile+JittorLLM
Normal file
@@ -0,0 +1,59 @@
|
||||
# How to build | 如何构建: docker build -t gpt-academic-jittor --network=host -f Dockerfile+ChatGLM .
|
||||
# How to run | (1) 我想直接一键运行(选择0号GPU): docker run --rm -it --net=host --gpus \"device=0\" gpt-academic-jittor bash
|
||||
# How to run | (2) 我想运行之前进容器做一些调整(选择1号GPU): docker run --rm -it --net=host --gpus \"device=1\" gpt-academic-jittor bash
|
||||
|
||||
# 从NVIDIA源,从而支持显卡运行(检查宿主的nvidia-smi中的cuda版本必须>=11.3)
|
||||
FROM nvidia/cuda:11.3.1-runtime-ubuntu20.04
|
||||
ARG useProxyNetwork=''
|
||||
RUN apt-get update
|
||||
RUN apt-get install -y curl proxychains curl g++
|
||||
RUN apt-get install -y git python python3 python-dev python3-dev --fix-missing
|
||||
|
||||
# 配置代理网络(构建Docker镜像时使用)
|
||||
# # comment out below if you do not need proxy network | 如果不需要翻墙 - 从此行向下删除
|
||||
RUN $useProxyNetwork curl cip.cc
|
||||
RUN sed -i '$ d' /etc/proxychains.conf
|
||||
RUN sed -i '$ d' /etc/proxychains.conf
|
||||
# 在这里填写主机的代理协议(用于从github拉取代码)
|
||||
RUN echo "socks5 127.0.0.1 10880" >> /etc/proxychains.conf
|
||||
ARG useProxyNetwork=proxychains
|
||||
# # comment out above if you do not need proxy network | 如果不需要翻墙 - 从此行向上删除
|
||||
|
||||
|
||||
# use python3 as the system default python
|
||||
RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
|
||||
# 下载pytorch
|
||||
RUN $useProxyNetwork python3 -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
|
||||
# 下载分支
|
||||
WORKDIR /gpt
|
||||
RUN $useProxyNetwork git clone https://github.com/binary-husky/gpt_academic.git
|
||||
WORKDIR /gpt/gpt_academic
|
||||
RUN $useProxyNetwork python3 -m pip install -r requirements.txt
|
||||
RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_chatglm.txt
|
||||
RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_newbing.txt
|
||||
RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_jittorllms.txt -i https://pypi.jittor.org/simple -I
|
||||
|
||||
# 下载JittorLLMs
|
||||
RUN $useProxyNetwork git clone https://github.com/binary-husky/JittorLLMs.git --depth 1 request_llm/jittorllms
|
||||
|
||||
# 禁用缓存,确保更新代码
|
||||
ADD "https://www.random.org/cgi-bin/randbyte?nbytes=10&format=h" skipcache
|
||||
RUN $useProxyNetwork git pull
|
||||
|
||||
# 预热Tiktoken模块
|
||||
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
|
||||
|
||||
# 为chatgpt-academic配置代理和API-KEY (非必要 可选步骤)
|
||||
# 可同时填写多个API-KEY,支持openai的key和api2d的key共存,用英文逗号分割,例如API_KEY = "sk-openaikey1,fkxxxx-api2dkey2,........"
|
||||
# LLM_MODEL 是选择初始的模型
|
||||
# LOCAL_MODEL_DEVICE 是选择chatglm等本地模型运行的设备,可选 cpu 和 cuda
|
||||
# [说明: 以下内容与`config.py`一一对应,请查阅config.py来完成以下配置的填写]
|
||||
RUN echo ' \n\
|
||||
API_KEY = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,fkxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \n\
|
||||
USE_PROXY = True \n\
|
||||
LLM_MODEL = "chatglm" \n\
|
||||
LOCAL_MODEL_DEVICE = "cuda" \n\
|
||||
proxies = { "http": "socks5h://localhost:10880", "https": "socks5h://localhost:10880", } ' >> config_private.py
|
||||
|
||||
# 启动
|
||||
CMD ["python3", "-u", "main.py"]
|
||||
27
docs/Dockerfile+NoLocal+Latex
Normal file
@@ -0,0 +1,27 @@
|
||||
# 此Dockerfile适用于“无本地模型”的环境构建,如果需要使用chatglm等本地模型,请参考 docs/Dockerfile+ChatGLM
|
||||
# - 1 修改 `config.py`
|
||||
# - 2 构建 docker build -t gpt-academic-nolocal-latex -f docs/Dockerfile+NoLocal+Latex .
|
||||
# - 3 运行 docker run -v /home/fuqingxu/arxiv_cache:/root/arxiv_cache --rm -it --net=host gpt-academic-nolocal-latex
|
||||
|
||||
FROM fuqingxu/python311_texlive_ctex:latest
|
||||
|
||||
# 指定路径
|
||||
WORKDIR /gpt
|
||||
|
||||
ARG useProxyNetwork=''
|
||||
|
||||
RUN $useProxyNetwork pip3 install gradio openai numpy arxiv rich -i https://pypi.douban.com/simple/
|
||||
RUN $useProxyNetwork pip3 install colorama Markdown pygments pymupdf -i https://pypi.douban.com/simple/
|
||||
|
||||
# 装载项目文件
|
||||
COPY . .
|
||||
|
||||
|
||||
# 安装依赖
|
||||
RUN $useProxyNetwork pip3 install -r requirements.txt -i https://pypi.douban.com/simple/
|
||||
|
||||
# 可选步骤,用于预热模块
|
||||
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
|
||||
|
||||
# 启动
|
||||
CMD ["python3", "-u", "main.py"]
|
||||
31
docs/GithubAction+ChatGLM+Moss
Normal file
@@ -0,0 +1,31 @@
|
||||
|
||||
# 从NVIDIA源,从而支持显卡运行(检查宿主的nvidia-smi中的cuda版本必须>=11.3)
|
||||
FROM nvidia/cuda:11.3.1-runtime-ubuntu20.04
|
||||
ARG useProxyNetwork=''
|
||||
RUN apt-get update
|
||||
RUN apt-get install -y curl proxychains curl gcc
|
||||
RUN apt-get install -y git python python3 python-dev python3-dev --fix-missing
|
||||
|
||||
|
||||
# use python3 as the system default python
|
||||
RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
|
||||
# 下载pytorch
|
||||
RUN python3 -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
|
||||
# 下载分支
|
||||
WORKDIR /gpt
|
||||
RUN git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
|
||||
WORKDIR /gpt/gpt_academic
|
||||
RUN git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss
|
||||
RUN python3 -m pip install -r requirements.txt
|
||||
RUN python3 -m pip install -r request_llm/requirements_moss.txt
|
||||
RUN python3 -m pip install -r request_llm/requirements_qwen.txt
|
||||
RUN python3 -m pip install -r request_llm/requirements_chatglm.txt
|
||||
RUN python3 -m pip install -r request_llm/requirements_newbing.txt
|
||||
|
||||
|
||||
|
||||
# 预热Tiktoken模块
|
||||
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
|
||||
|
||||
# 启动
|
||||
CMD ["python3", "-u", "main.py"]
|
||||
34
docs/GithubAction+JittorLLMs
Normal file
@@ -0,0 +1,34 @@
|
||||
# 从NVIDIA源,从而支持显卡运行(检查宿主的nvidia-smi中的cuda版本必须>=11.3)
|
||||
FROM nvidia/cuda:11.3.1-runtime-ubuntu20.04
|
||||
ARG useProxyNetwork=''
|
||||
RUN apt-get update
|
||||
RUN apt-get install -y curl proxychains curl g++
|
||||
RUN apt-get install -y git python python3 python-dev python3-dev --fix-missing
|
||||
|
||||
# use python3 as the system default python
|
||||
RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
|
||||
|
||||
# 下载pytorch
|
||||
RUN python3 -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
|
||||
|
||||
# 下载分支
|
||||
WORKDIR /gpt
|
||||
RUN git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
|
||||
WORKDIR /gpt/gpt_academic
|
||||
RUN python3 -m pip install -r requirements.txt
|
||||
RUN python3 -m pip install -r request_llm/requirements_chatglm.txt
|
||||
RUN python3 -m pip install -r request_llm/requirements_newbing.txt
|
||||
RUN python3 -m pip install -r request_llm/requirements_jittorllms.txt -i https://pypi.jittor.org/simple -I
|
||||
|
||||
# 下载JittorLLMs
|
||||
RUN git clone https://github.com/binary-husky/JittorLLMs.git --depth 1 request_llm/jittorllms
|
||||
|
||||
# 禁用缓存,确保更新代码
|
||||
ADD "https://www.random.org/cgi-bin/randbyte?nbytes=10&format=h" skipcache
|
||||
RUN git pull
|
||||
|
||||
# 预热Tiktoken模块
|
||||
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
|
||||
|
||||
# 启动
|
||||
CMD ["python3", "-u", "main.py"]
|
||||
20
docs/GithubAction+NoLocal
Normal file
@@ -0,0 +1,20 @@
|
||||
# 此Dockerfile适用于“无本地模型”的环境构建,如果需要使用chatglm等本地模型,请参考 docs/Dockerfile+ChatGLM
|
||||
# 如何构建: 先修改 `config.py`, 然后 docker build -t gpt-academic-nolocal -f docs/Dockerfile+NoLocal .
|
||||
# 如何运行: docker run --rm -it --net=host gpt-academic-nolocal
|
||||
FROM python:3.11
|
||||
|
||||
# 指定路径
|
||||
WORKDIR /gpt
|
||||
|
||||
# 装载项目文件
|
||||
COPY . .
|
||||
|
||||
# 安装依赖
|
||||
RUN pip3 install -r requirements.txt
|
||||
|
||||
|
||||
# 可选步骤,用于预热模块
|
||||
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
|
||||
|
||||
# 启动
|
||||
CMD ["python3", "-u", "main.py"]
|
||||
22
docs/GithubAction+NoLocal+AudioAssistant
Normal file
@@ -0,0 +1,22 @@
|
||||
# 此Dockerfile适用于“无本地模型”的环境构建,如果需要使用chatglm等本地模型,请参考 docs/Dockerfile+ChatGLM
|
||||
# 如何构建: 先修改 `config.py`, 然后 docker build -t gpt-academic-nolocal -f docs/Dockerfile+NoLocal .
|
||||
# 如何运行: docker run --rm -it --net=host gpt-academic-nolocal
|
||||
FROM python:3.11
|
||||
|
||||
# 指定路径
|
||||
WORKDIR /gpt
|
||||
|
||||
# 装载项目文件
|
||||
COPY . .
|
||||
|
||||
# 安装依赖
|
||||
RUN pip3 install -r requirements.txt
|
||||
|
||||
# 安装语音插件的额外依赖
|
||||
RUN pip3 install pyOpenSSL scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
|
||||
|
||||
# 可选步骤,用于预热模块
|
||||
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
|
||||
|
||||
# 启动
|
||||
CMD ["python3", "-u", "main.py"]
|
||||
25
docs/GithubAction+NoLocal+Latex
Normal file
@@ -0,0 +1,25 @@
|
||||
# 此Dockerfile适用于“无本地模型”的环境构建,如果需要使用chatglm等本地模型,请参考 docs/Dockerfile+ChatGLM
|
||||
# - 1 修改 `config.py`
|
||||
# - 2 构建 docker build -t gpt-academic-nolocal-latex -f docs/Dockerfile+NoLocal+Latex .
|
||||
# - 3 运行 docker run -v /home/fuqingxu/arxiv_cache:/root/arxiv_cache --rm -it --net=host gpt-academic-nolocal-latex
|
||||
|
||||
FROM fuqingxu/python311_texlive_ctex:latest
|
||||
|
||||
# 指定路径
|
||||
WORKDIR /gpt
|
||||
|
||||
RUN pip3 install gradio openai numpy arxiv rich
|
||||
RUN pip3 install colorama Markdown pygments pymupdf
|
||||
|
||||
# 装载项目文件
|
||||
COPY . .
|
||||
|
||||
|
||||
# 安装依赖
|
||||
RUN pip3 install -r requirements.txt
|
||||
|
||||
# 可选步骤,用于预热模块
|
||||
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
|
||||
|
||||
# 启动
|
||||
CMD ["python3", "-u", "main.py"]
|
||||
307
docs/README.md.German.md
Normal file
@@ -0,0 +1,307 @@
|
||||
> **Hinweis**
|
||||
>
|
||||
> Bei der Installation von Abhängigkeiten sollten nur die in **requirements.txt** **angegebenen Versionen** streng ausgewählt werden.
|
||||
>
|
||||
> `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/`
|
||||
|
||||
# <img src="docs/logo.png" width="40" > GPT Akademisch optimiert (GPT Academic)
|
||||
|
||||
**Wenn Ihnen dieses Projekt gefällt, geben Sie ihm bitte einen Stern; wenn Sie bessere Tastenkombinationen oder Funktions-Plugins entwickelt haben, können Sie gerne einen Pull Request eröffnen.**
|
||||
|
||||
Wenn Sie dieses Projekt mögen, geben Sie ihm bitte einen Stern. Wenn Sie weitere nützliche wissenschaftliche Abkürzungen oder funktionale Plugins entwickelt haben, können Sie gerne ein Problem oder eine Pull-Anforderung öffnen. Wir haben auch ein README in [Englisch|](docs/README_EN.md)[日本語|](docs/README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md), das von diesem Projekt selbst übersetzt wurde.
|
||||
Um dieses Projekt in eine beliebige Sprache mit GPT zu übersetzen, lesen Sie `multi_language.py` (experimentell).
|
||||
|
||||
> **Hinweis**
|
||||
>
|
||||
> 1. Beachten Sie bitte, dass nur Funktionserweiterungen (Schaltflächen) mit **roter Farbe** Dateien lesen können und einige Erweiterungen im **Dropdown-Menü** des Erweiterungsbereichs zu finden sind. Außerdem begrüßen wir jede neue Funktionserweiterung mit **höchster Priorität** und bearbeiten sie.
|
||||
>
|
||||
> 2. Die Funktionalität jeder Datei in diesem Projekt wird in der Selbstanalyse [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) detailliert beschrieben. Mit der Weiterentwicklung der Versionen können Sie jederzeit die zugehörigen Funktions-Erweiterungen aufrufen, um durch Aufruf von GPT einen Selbstanalysebericht des Projekts zu erstellen. Häufig gestellte Fragen finden Sie in der [`Wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installationsanweisungen](#Installation).
|
||||
>
|
||||
> 3. Dieses Projekt ist kompatibel und fördert die Verwendung von inländischen Sprachmodellen wie ChatGLM und RWKV, Pangu, etc. Es unterstützt das Vorhandensein mehrerer api-keys, die in der Konfigurationsdatei wie folgt angegeben werden können: `API_KEY="openai-key1,openai-key2,api2d-key3"`. Wenn ein `API_KEY` temporär geändert werden muss, geben Sie den temporären `API_KEY` im Eingabebereich ein und drücken Sie dann die Eingabetaste, um ihn zu übernehmen.

Funktion | Beschreibung
|
||||
--- | ---
|
||||
Ein-Klick-Polieren | Unterstützt ein-Klick-Polieren und ein-Klick-Suche nach grammatikalischen Fehlern in wissenschaftlichen Arbeiten
|
||||
Ein-Klick Chinesisch-Englisch Übersetzung | Ein-Klick Chinesisch-Englisch Übersetzung
|
||||
Ein-Klick-Code-Erklärung | Zeigt Code, erklärt Code, erzeugt Code und fügt Kommentare zum Code hinzu
|
||||
[Benutzerdefinierte Tastenkombinationen](https://www.bilibili.com/video/BV14s4y1E7jN) | Unterstützt benutzerdefinierte Tastenkombinationen
|
||||
Modulare Gestaltung | Unterstützt leistungsstarke individuelle [Funktions-Plugins](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions). Plugins unterstützen [Hot-Updates](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
|
||||
[Selbstprogramm-Analyse](https://www.bilibili.com/video/BV1cj411A7VW) | [Funktions-Plugin] [Ein-Klick Verstehen](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) der Quellcode dieses Projekts
|
||||
[Programmanalyse](https://www.bilibili.com/video/BV1cj411A7VW) | [Funktions-Plugin] Ein-Klick-Analyse des Projektbaums anderer Python/C/C++/Java/Lua/...-Projekte
|
||||
Lesen von Papieren, [Übersetzen](https://www.bilibili.com/video/BV1KT411x7Wn) von Papieren | [Funktions-Plugin] Ein-Klick Erklärung des gesamten LaTeX/PDF-Artikels und Erstellung einer Zusammenfassung
|
||||
LaTeX-Volltext-Übersetzung und [Polieren](https://www.bilibili.com/video/BV1FT411H7c5/) | [Funktions-Plugin] Ein-Klick-Übersetzung oder-Polieren des LaTeX-Artikels
|
||||
Bulk-Kommentargenerierung | [Funktions-Plugin] Ein-Klick Massenerstellung von Funktionskommentaren
|
||||
Markdown [Chinesisch-Englisch Übersetzung](https://www.bilibili.com/video/BV1yo4y157jV/) | [Funktions-Plugin] Haben Sie die [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) in den oben genannten 5 Sprachen gesehen?
|
||||
Analyse-Berichtserstellung von chat | [Funktions-Plugin] Automatische Zusammenfassung nach der Ausführung
|
||||
[Funktion zur vollständigen Übersetzung von PDF-Artikeln](https://www.bilibili.com/video/BV1KT411x7Wn) | [Funktions-Plugin] Extrahiert Titel und Zusammenfassung der PDF-Artikel und übersetzt den gesamten Text (mehrere Threads)
|
||||
[Arxiv-Assistent](https://www.bilibili.com/video/BV1LM4y1279X) | [Funktions-Plugin] Geben Sie die Arxiv-Artikel-URL ein und klicken Sie auf Eine-Klick-Übersetzung-Zusammenfassung + PDF-Download
|
||||
[Google Scholar Integrations-Assistent](https://www.bilibili.com/video/BV19L411U7ia) | [Funktions-Plugin] Geben Sie eine beliebige Google Scholar Such-URL ein und lassen Sie gpt Ihnen bei der Erstellung von [relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/) helfen
|
||||
Internet-Informationen Aggregation + GPT | [Funktions-Plugin] Lassen Sie GPT eine Frage beantworten, indem es [zuerst Informationen aus dem Internet](https://www.bilibili.com/video/BV1om4y127ck/) sammelt und so die Informationen nie veralten
|
||||
Anzeige von Formeln / Bildern / Tabellen | Zeigt Formeln in beiden Formen, [TeX-Format und gerendeter Form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), unterstützt Formeln und Code-Highlights
|
||||
Unterstützung von PlugIns mit mehreren Threads | Unterstützt den Aufruf mehrerer Threads in Chatgpt, um Text oder Programme [Batch zu verarbeiten](https://www.bilibili.com/video/BV1FT411H7c5/)
|
||||
Starten Sie das dunkle Gradio-[Thema](https://github.com/binary-husky/gpt_academic/issues/173) | Fügen Sie ```/?__theme=dark``` an das Ende der Browser-URL an, um das dunkle Thema zu aktivieren
|
||||
[Unterstützung für mehrere LLM-Modelle](https://www.bilibili.com/video/BV1wT411p7yf), [API2D](https://api2d.com/) Interface-Unterstützung | Das Gefühl, gleichzeitig von GPT3.5, GPT4, [Tshinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), [Fudan MOSS](https://github.com/OpenLMLab/MOSS) bedient zu werden, muss toll sein, oder?
|
||||
Zugriff auf weitere LLM-Modelle, Unterstützung von [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Hinzufügen der Newbing-Schnittstelle (neues Bing), Einführung der Unterstützung von [Jittorllms](https://github.com/Jittor/JittorLLMs) der Tsinghua-Universität, [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) und [Pangu alpha](https://openi.org.cn/pangu/)
|
||||
Weitere neue Funktionen (wie Bildgenerierung) …… | Siehe Ende dieses Dokuments ……
|
||||
|
||||
- Neue Oberfläche (Ändern Sie die LAYOUT-Option in `config.py`, um zwischen "Seitenlayout" und "Oben-unten-Layout" zu wechseln)
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
|
||||
</div>

- All buttons are dynamically generated by reading `functional.py`, and custom functions can be easily added, freeing up the clipboard.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
|
||||
</div>
|
||||
|
||||
- Proofreading/Correcting
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
|
||||
</div>
|
||||
|
||||
- If the output contains formulas, they will be displayed in both tex format and rendered format for easy copying and reading.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
|
||||
</div>
|
||||
|
||||
- Don't feel like reading the project code? Show off the entire project to chatgpt.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
|
||||
</div>
|
||||
|
||||
- Multiple large language models are mixed and called together (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4).
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
|
||||
</div>
|
||||
|
||||
---
|
||||
# Installation
|
||||
## Installation-Method 1: Run directly (Windows, Linux or MacOS)
|
||||
|
||||
1. Download the project
|
||||
```sh
|
||||
git clone https://github.com/binary-husky/gpt_academic.git
|
||||
cd gpt_academic
|
||||
```
|
||||
|
||||
2. Configure API_KEY
|
||||
|
||||
Configure API KEY and other settings in `config.py`. [Special Network Environment Settings](https://github.com/binary-husky/gpt_academic/issues/1).
|
||||
|
||||
(P.S. When the program is running, it will first check whether there is a "config_private.py" private configuration file, and use the configuration defined in it to override the configuration of "config.py". Therefore, if you understand our configuration reading logic, we strongly recommend that you create a new configuration file named "config_private.py" next to "config.py" and transfer (copy) the configurations in "config.py" to "config_private.py". "config_private.py" is not controlled by git, which can make your privacy information more secure. P.S. The project also supports configuring most options through `environment variables`, and the writing format of environment variables refers to the `docker-compose` file. Reading priority: `environment variable` > `config_private.py` >`config.py`)
|
||||
|
||||
|
||||
3. Install dependencies
|
||||
```sh
|
||||
# (Option I: If familar with Python) (Python version 3.9 or above, the newer the better), Note: Use the official pip source or Ali pip source, temporary switching method: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
|
||||
python -m pip install -r requirements.txt
|
||||
|
||||
# (Option II: If not familiar with Python) Use anaconda with similar steps (https://www.bilibili.com/video/BV1rc411W7Dr):
|
||||
conda create -n gptac_venv python=3.11 # Create an anaconda environment
|
||||
conda activate gptac_venv # Activate the anaconda environment
|
||||
python -m pip install -r requirements.txt # Same step as pip installation
|
||||
```
|
||||
|
||||
<details><summary>Click to expand if supporting Tsinghua ChatGLM/Fudan MOSS as backend</summary>
|
||||
<p>
|
||||
|
||||
[Optional Step] If supporting Tsinghua ChatGLM/Fudan MOSS as backend, additional dependencies need to be installed (Prerequisites: Familiar with Python + Used Pytorch + Sufficient computer configuration):
|
||||
```sh
|
||||
# [Optional Step I] Support Tsinghua ChatGLM. Remark: If encountering "Call ChatGLM fail Cannot load ChatGLM parameters", please refer to the following: 1: The above default installation is torch+cpu version. To use cuda, uninstall torch and reinstall torch+cuda; 2: If the model cannot be loaded due to insufficient machine configuration, you can modify the model precision in `request_llm/bridge_chatglm.py`, and modify all AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
|
||||
python -m pip install -r request_llm/requirements_chatglm.txt
|
||||
|
||||
# [Optional Step II] Support Fudan MOSS
|
||||
python -m pip install -r request_llm/requirements_moss.txt
|
||||
git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # When executing this line of code, you must be in the project root path
|
||||
|
||||
# [Optional Step III] Make sure the AVAIL_LLM_MODELS in the config.py configuration file contains the expected models. Currently supported models are as follows (jittorllms series currently only supports docker solutions):
|
||||
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
|
||||
```
|
||||
|
||||
</p>
|
||||
</details>
|
||||
|
||||
|
||||
|
||||
4. Run
|
||||
```sh
|
||||
python main.py
|
||||
```

5. Testing Function Plugin
|
||||
```
|
||||
- Test function plugin template function (requires gpt to answer what happened today in history), you can use this function as a template to implement more complex functions
|
||||
Click "[Function Plugin Template Demo] Today in History"
|
||||
```
|
||||
|
||||
## Installation-Method 2: Using Docker
|
||||
|
||||
1. Only ChatGPT (Recommended for most people)
|
||||
|
||||
``` sh
|
||||
git clone https://github.com/binary-husky/gpt_academic.git # Download the project
|
||||
cd gpt_academic # Enter the path
|
||||
nano config.py # Edit config.py with any text editor, Configure "Proxy","API_KEY"and"WEB_PORT" (e.g 50923) etc.
|
||||
docker build -t gpt-academic . # Install
|
||||
|
||||
# (Last step-option 1) Under Linux environment, use `--net=host` is more convenient and quick
|
||||
docker run --rm -it --net=host gpt-academic
|
||||
# (Last step-option 2) Under macOS/windows environment, can only use the -p option to expose the container's port(eg.50923) to the port on the host.
|
||||
docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
|
||||
```
|
||||
|
||||
2. ChatGPT + ChatGLM + MOSS (Requires familiarity with Docker)
|
||||
|
||||
``` sh
|
||||
# Modify docker-compose.yml, delete solution 1 and solution 3, and retain solution 2. Modify the configuration of solution 2 in docker-compose.yml, referring to the comments in it.
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
3. ChatGPT+LLAMA+Pangu+RWKV(Requires familiarity with Docker)
|
||||
``` sh
|
||||
# Modify docker-compose.yml, delete solution 1 and solution 2, and retain solution 3. Modify the configuration of solution 3 in docker-compose.yml, referring to the comments in it.
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
|
||||
## Installation-Method 3: Other Deployment Options
|
||||
|
||||
1. How to use reverse proxy URL/Microsoft Azure API
|
||||
Configure API_URL_REDIRECT according to the instructions in `config.py`.
|
||||
|
||||
2. Remote cloud server deployment (requires cloud server knowledge and experience)
|
||||
Please visit [Deployment wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
|
||||
|
||||
3. Using WSL 2 (Windows subsystem for Linux)
|
||||
Please visit [Deployment wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
|
||||
|
||||
4. How to run at a secondary URL (such as `http://localhost/subpath`)
|
||||
Please visit [FastAPI operating instructions](docs/WithFastapi.md)
|
||||
|
||||
5. Use docker-compose to run
|
||||
Please read docker-compose.yml and follow the prompts to operate.
|
||||
|
||||
---
|
||||
# Advanced Usage
|
||||
## Customize new convenience buttons / custom function plugins.
|
||||
|
||||
1. Customize new convenience buttons (Academic Shortcut Keys)
|
||||
Open `core_functional.py` with any text editor, add an entry as follows, and then restart the program. (If the button has been added successfully and is visible, then the prefix and suffix can be hot-modified, and it will take effect without restarting the program.)
|
||||
For example
|
||||
```
|
||||
"Super English to Chinese": {
|
||||
# Prefix, will be added before your input. For example, used to describe your requirements, such as translation, explaining code, polishing, etc.
|
||||
"Prefix": "Please translate the following content into Chinese, and then use a markdown table to explain the proper nouns that appear in the text one by one:\n\n",
|
||||
|
||||
# Suffix, will be added after your input. For example, combined with prefix, you can enclose your input content in quotes.
|
||||
"Suffix": "",
|
||||
},
|
||||
```
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
|
||||
</div>
|
||||
|
||||
2. Custom function plugins
|
||||
|
||||
Write powerful function plugins to perform any task you want and can't think of.
|
||||
The difficulty of plugin writing and debugging is very low in this project. As long as you have a certain knowledge of Python, you can implement your own plugin functions by imitating the template we provided.
|
||||
For more information, please refer to the [Function Plugin Guide](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
|
||||
|
||||
---
|
||||
# Latest Update
|
||||
## New feature dynamics

1. Funktion zur Speicherung von Dialogen. Rufen Sie im Bereich der Funktions-Plugins "Aktuellen Dialog speichern" auf, um den aktuellen Dialog als lesbare und wiederherstellbare HTML-Datei zu speichern. Darüber hinaus können Sie im Funktions-Plugin-Bereich (Dropdown-Menü) "Laden von Dialogverlauf" aufrufen, um den vorherigen Dialog wiederherzustellen. Tipp: Wenn Sie keine Datei angeben und stattdessen direkt auf "Laden des Dialogverlaufs" klicken, können Sie das HTML-Cache-Archiv anzeigen. Durch Klicken auf "Löschen aller lokalen Dialogverlaufsdatensätze" können alle HTML-Archiv-Caches gelöscht werden.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/235222390-24a9acc0-680f-49f5-bc81-2f3161f1e049.png" width="500" >
|
||||
</div>
|
||||
|
||||
2. Berichterstellung. Die meisten Plugins generieren nach Abschluss der Ausführung einen Arbeitsbericht.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
|
||||
</div>
|
||||
|
||||
3. Modularisierte Funktionsgestaltung, einfache Schnittstellen mit leistungsstarken Funktionen.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
|
||||
</div>
|
||||
|
||||
4. Dies ist ein Open-Source-Projekt, das sich "selbst übersetzen" kann.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="500" >
|
||||
</div>
|
||||
|
||||
5. Die Übersetzung anderer Open-Source-Projekte ist kein Problem.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="500" >
|
||||
</div>
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="500" >
|
||||
</div>
|
||||
|
||||
6. Dekorieren Sie [`live2d`](https://github.com/fghrsh/live2d_demo) mit kleinen Funktionen (standardmäßig deaktiviert, Änderungen an `config.py` erforderlich).
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/236432361-67739153-73e8-43fe-8111-b61296edabd9.png" width="500" >
|
||||
</div>
|
||||
|
||||
7. Neue MOSS-Sprachmodellunterstützung.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/236639178-92836f37-13af-4fdd-984d-b4450fe30336.png" width="500" >
|
||||
</div>
|
||||
|
||||
8. OpenAI-Bildgenerierung.
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/bc7ab234-ad90-48a0-8d62-f703d9e74665" width="500" >
|
||||
</div>
|
||||
|
||||
9. OpenAI-Audio-Analyse und Zusammenfassung.
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/709ccf95-3aee-498a-934a-e1c22d3d5d5b" width="500" >
|
||||
</div>
|
||||
|
||||
10. Latex-Proofreading des gesamten Textes.
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/651ccd98-02c9-4464-91e1-77a6b7d1b033" width="500" >
|
||||
</div>
|
||||
|
||||
|
||||
## Version:
|
||||
- Version 3.5 (Todo): Rufen Sie alle Funktionserweiterungen dieses Projekts mit natürlicher Sprache auf (hohe Priorität).
|
||||
- Version 3.4 (Todo): Verbesserte Unterstützung mehrerer Threads für Local Large Model (LLM).
|
||||
- Version 3.3: + Internet-Informationssynthese-Funktion
|
||||
- Version 3.2: Funktionserweiterungen unterstützen mehr Parameter-Schnittstellen (Speicherung von Dialogen, Interpretation beliebigen Sprachcodes + gleichzeitige Abfrage jeder LLM-Kombination)
|
||||
- Version 3.1: Unterstützung mehrerer GPT-Modelle gleichzeitig! Unterstützung für API2D, Unterstützung für Lastenausgleich von mehreren API-Schlüsseln.
|
||||
- Version 3.0: Unterstützung von Chatglm und anderen kleinen LLMs
|
||||
- Version 2.6: Umstrukturierung der Plugin-Struktur zur Verbesserung der Interaktivität, Einführung weiterer Plugins
|
||||
- Version 2.5: Automatische Aktualisierung, Problembehebung bei Quelltexten großer Projekte, wenn der Text zu lang ist oder Token überlaufen.
|
||||
- Version 2.4: (1) Neue Funktion zur Übersetzung des gesamten PDF-Texts; (2) Neue Funktion zum Wechseln der Position des Eingabebereichs; (3) Neue Option für vertikales Layout; (4) Optimierung von Multithread-Funktions-Plugins.
|
||||
- Version 2.3: Verbesserte Interaktivität mit mehreren Threads
|
||||
- Version 2.2: Funktionserweiterungen unterstützen "Hot-Reload"
|
||||
- Version 2.1: Faltbares Layout
|
||||
- Version 2.0: Einführung von modularisierten Funktionserweiterungen
|
||||
- Version 1.0: Grundlegende Funktionen

gpt_academic Entwickler QQ-Gruppe-2: 610599535
|
||||
|
||||
- Bekannte Probleme
|
||||
- Einige Browser-Übersetzungs-Plugins können die Frontend-Ausführung dieser Software stören.
|
||||
- Sowohl eine zu hohe als auch eine zu niedrige Version von Gradio führt zu verschiedenen Ausnahmen.
|
||||
|
||||
## Referenz und Lernen
|
||||
|
||||
```
|
||||
Der Code bezieht sich auf viele Designs von anderen herausragenden Projekten, insbesondere:
|
||||
|
||||
# Projekt 1: ChatGLM-6B der Tsinghua Universität:
|
||||
https://github.com/THUDM/ChatGLM-6B
|
||||
|
||||
# Projekt 2: JittorLLMs der Tsinghua Universität:
|
||||
https://github.com/Jittor/JittorLLMs
|
||||
|
||||
# Projekt 3: Edge-GPT:
|
||||
https://github.com/acheong08/EdgeGPT
|
||||
|
||||
# Projekt 4: ChuanhuChatGPT:
|
||||
https://github.com/GaiZhenbiao/ChuanhuChatGPT
|
||||
|
||||
# Projekt 5: ChatPaper:
|
||||
https://github.com/kaixindelele/ChatPaper
|
||||
|
||||
# Mehr:
|
||||
https://github.com/gradio-app/gradio
|
||||
https://github.com/fghrsh/live2d_demo
|
||||
```
|
||||
316
docs/README.md.Italian.md
Normal file
@@ -0,0 +1,316 @@
|
||||
> **Nota**
|
||||
>
|
||||
> Durante l'installazione delle dipendenze, selezionare rigorosamente le **versioni specificate** nel file requirements.txt.
|
||||
>
|
||||
> ` pip install -r requirements.txt`
|
||||
|
||||
# <img src="logo.png" width="40" > GPT Ottimizzazione Accademica (GPT Academic)
|
||||
|
||||
**Se ti piace questo progetto, ti preghiamo di dargli una stella. Se hai sviluppato scorciatoie accademiche o plugin funzionali più utili, non esitare ad aprire una issue o pull request. Abbiamo anche una README in [Inglese|](README_EN.md)[Giapponese|](README_JP.md)[Coreano|](https://github.com/mldljyh/ko_gpt_academic)[Russo|](README_RS.md)[Francese](README_FR.md) tradotta da questo stesso progetto.
|
||||
Per tradurre questo progetto in qualsiasi lingua con GPT, leggere e eseguire [`multi_language.py`](multi_language.py) (sperimentale).
|
||||
|
||||
> **Nota**
|
||||
>
|
||||
> 1. Si prega di notare che solo i plugin (pulsanti) contrassegnati in **rosso** supportano la lettura di file, alcuni plugin sono posizionati nel **menu a discesa** nella zona dei plugin. Accettiamo e gestiamo PR per qualsiasi nuovo plugin con **massima priorità**!
|
||||
>
|
||||
> 2. Le funzionalità di ogni file di questo progetto sono descritte dettagliatamente nella propria analisi di autotraduzione [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). Con l'iterazione delle versioni, è possibile fare clic sui plugin funzionali correlati in qualsiasi momento per richiamare GPT e generare nuovamente il rapporto di analisi automatica del progetto. Le domande frequenti sono riassunte nella [`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Metodo di installazione] (#installazione).
|
||||
>
|
||||
> 3. Questo progetto è compatibile e incoraggia l'utilizzo di grandi modelli di linguaggio di produzione nazionale come chatglm, RWKV, Pangu ecc. Supporta la coesistenza di più api-key e può essere compilato nel file di configurazione come `API_KEY="openai-key1,openai-key2,api2d-key3"`. Per sostituire temporaneamente `API_KEY`, inserire `API_KEY` temporaneo nell'area di input e premere Invio per renderlo effettivo.
|
||||
|
||||
<div align="center">
|
||||
|
||||
Funzione | Descrizione
|
||||
--- | ---
|
||||
Correzione immediata | Supporta correzione immediata e ricerca degli errori di grammatica del documento con un solo clic
|
||||
Traduzione cinese-inglese immediata | Traduzione cinese-inglese immediata con un solo clic
|
||||
Spiegazione del codice immediata | Visualizzazione del codice, spiegazione del codice, generazione del codice, annotazione del codice con un solo clic
|
||||
[Scorciatoie personalizzate](https://www.bilibili.com/video/BV14s4y1E7jN) | Supporta scorciatoie personalizzate
|
||||
Design modularizzato | Supporta potenti [plugin di funzioni](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions) personalizzati, i plugin supportano l'[aggiornamento in tempo reale](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
|
||||
[Auto-profiling del programma](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin di funzioni] [Comprensione immediata](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) del codice sorgente di questo progetto
|
||||
[Analisi del programma](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin di funzioni] Un clic può analizzare l'albero di altri progetti Python/C/C++/Java/Lua/...
|
||||
Lettura del documento, [traduzione](https://www.bilibili.com/video/BV1KT411x7Wn) del documento | [Plugin di funzioni] La lettura immediata dell'intero documento latex/pdf di un documento e la generazione di un riassunto
|
||||
Traduzione completa di un documento Latex, [correzione immediata](https://www.bilibili.com/video/BV1FT411H7c5/) | [Plugin di funzioni] Una traduzione o correzione immediata di un documento Latex
|
||||
Generazione di annotazioni in batch | [Plugin di funzioni] Generazione automatica delle annotazioni di funzione con un solo clic
|
||||
[Traduzione cinese-inglese di Markdown](https://www.bilibili.com/video/BV1yo4y157jV/) | [Plugin di funzioni] Hai letto il [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) delle cinque lingue sopra?
|
||||
Generazione di report di analisi di chat | [Plugin di funzioni] Generazione automatica di un rapporto di sintesi dopo l'esecuzione
|
||||
[Funzione di traduzione di tutto il documento PDF](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugin di funzioni] Estrarre il titolo e il sommario dell'articolo PDF + tradurre l'intero testo (multithreading)
|
||||
[Assistente di Arxiv](https://www.bilibili.com/video/BV1LM4y1279X) | [Plugin di funzioni] Inserire l'URL dell'articolo di Arxiv e tradurre il sommario con un clic + scaricare il PDF
|
||||
[Assistente integrato di Google Scholar](https://www.bilibili.com/video/BV19L411U7ia) | [Plugin di funzioni] Con qualsiasi URL di pagina di ricerca di Google Scholar, lascia che GPT ti aiuti a scrivere il tuo [relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/)
|
||||
Aggregazione delle informazioni su Internet + GPT | [Plugin di funzioni] Fai in modo che GPT rilevi le informazioni su Internet prima di rispondere alle domande, senza mai diventare obsolete
|
||||
Visualizzazione di formule/img/tabelle | È possibile visualizzare un'equazione in forma [tex e render](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png) contemporaneamente, supporta equazioni e evidenziazione del codice
|
||||
Supporto per plugin di funzioni multithreading | Supporto per chiamata multithreaded di chatgpt, elaborazione con un clic di grandi quantità di testo o di un programma
|
||||
Avvia il tema di gradio [scuro](https://github.com/binary-husky/gpt_academic/issues/173) | Aggiungere ```/?__theme=dark``` dopo l'URL del browser per passare a un tema scuro
|
||||
Supporto per maggiori modelli LLM, supporto API2D | Sentirsi serviti simultaneamente da GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), [Fudan MOSS](https://github.com/OpenLMLab/MOSS) deve essere una grande sensazione, giusto?
|
||||
Ulteriori modelli LLM supportati, supporto per l'implementazione su Huggingface | Aggiunta di un'interfaccia Newbing (Nuovo Bing), introdotta la compatibilità con Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs), [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) e [PanGu-α](https://openi.org.cn/pangu/)
|
||||
Ulteriori dimostrazioni di nuove funzionalità (generazione di immagini, ecc.)... | Vedere la fine di questo documento...
|
||||
</div>
|
||||
|
||||
|
||||
- Nuova interfaccia (modificare l'opzione LAYOUT in `config.py` per passare dal layout a sinistra e a destra al layout superiore e inferiore)
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
|
||||
</div>
|
||||
|
||||
- Tutti i pulsanti vengono generati dinamicamente leggendo il file functional.py, e aggiungerci nuove funzionalità è facile, liberando la clipboard.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
|
||||
</div>
|
||||
|
||||
- Revisione/Correzione
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
|
||||
</div>
|
||||
|
||||
- Se l'output contiene una formula, viene visualizzata sia come testo che come formula renderizzata, per facilitare la copia e la visualizzazione.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
|
||||
</div>
|
||||
|
||||
- Non hai tempo di leggere il codice del progetto? Passa direttamente a chatgpt e chiedi informazioni.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
|
||||
</div>
|
||||
|
||||
- Chiamata mista di vari modelli di lingua di grandi dimensioni (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
|
||||
</div>
|
||||
|
||||
---
|
||||
# Installazione
|
||||
## Installazione - Metodo 1: Esecuzione diretta (Windows, Linux o MacOS)
|
||||
|
||||
1. Scarica il progetto
|
||||
```sh
|
||||
git clone https://github.com/binary-husky/gpt_academic.git
|
||||
cd gpt_academic
|
||||
```
|
||||
|
||||
2. Configura API_KEY
|
||||
|
||||
In `config.py`, configura la tua API KEY e altre impostazioni, [configs for special network environments](https://github.com/binary-husky/gpt_academic/issues/1).
|
||||
|
||||
(N.B. Quando il programma viene eseguito, verifica prima se esiste un file di configurazione privato chiamato `config_private.py` e sovrascrive le stesse configurazioni in `config.py`. Pertanto, se capisci come funziona la nostra logica di lettura della configurazione, ti consigliamo vivamente di creare un nuovo file di configurazione chiamato `config_private.py` accanto a `config.py`, e spostare (copiare) le configurazioni di `config.py` in `config_private.py`. 'config_private.py' non è sotto la gestione di git e può proteggere ulteriormente le tue informazioni personali. NB Il progetto supporta anche la configurazione della maggior parte delle opzioni tramite "variabili d'ambiente". La sintassi della variabile d'ambiente è descritta nel file `docker-compose`. Priorità di lettura: "variabili d'ambiente" > "config_private.py" > "config.py")
|
||||
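Ad esempio, un `config_private.py` minimale potrebbe contenere solo poche voci (schizzo puramente illustrativo: a parte `API_KEY` e `WEB_PORT`, i nomi delle altre opzioni vanno verificati direttamente in `config.py`):

```
# config_private.py — schizzo illustrativo, non un elenco completo delle opzioni
API_KEY = "openai-key1,openai-key2,api2d-key3"  # più chiavi possono coesistere, come descritto sopra
WEB_PORT = 50923                                # porta di esempio, la stessa usata nella sezione Docker
```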
|
||||
|
||||
3. Installa le dipendenze
|
||||
```sh
|
||||
# (Scelta I: se sei familiare con python) (python 3.9 o superiore, più nuovo è meglio), N.B.: utilizza il repository ufficiale pip o l'aliyun pip repository, metodo temporaneo per cambiare il repository: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
|
||||
python -m pip install -r requirements.txt
|
||||
|
||||
# (Scelta II: se non conosci Python) utilizza anaconda, il processo è simile (https://www.bilibili.com/video/BV1rc411W7Dr):
|
||||
conda create -n gptac_venv python=3.11 # crea l'ambiente anaconda
|
||||
conda activate gptac_venv # attiva l'ambiente anaconda
|
||||
python -m pip install -r requirements.txt # questo passaggio funziona allo stesso modo dell'installazione con pip
|
||||
```
|
||||
|
||||
<details><summary>Se si desidera supportare ChatGLM di Tsinghua/MOSS di Fudan come backend, fare clic qui per espandere</summary>
|
||||
<p>
|
||||
|
||||
【Passaggio facoltativo】 Se si desidera supportare ChatGLM di Tsinghua/MOSS di Fudan come backend, è necessario installare ulteriori dipendenze (prerequisiti: conoscenza di Python, esperienza con Pytorch e computer sufficientemente potente):
|
||||
```sh
|
||||
# 【Passaggio facoltativo I】 Supporto a ChatGLM di Tsinghua. Note su ChatGLM di Tsinghua: in caso di errore "Call ChatGLM fail 不能正常加载ChatGLM的参数" , fare quanto segue: 1. Per impostazione predefinita, viene installata la versione di torch + cpu; per usare CUDA, è necessario disinstallare torch e installare nuovamente torch + cuda; 2. Se non è possibile caricare il modello a causa di una configurazione insufficiente del computer, è possibile modificare la precisione del modello in request_llm/bridge_chatglm.py, cambiando AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) in AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
|
||||
python -m pip install -r request_llm/requirements_chatglm.txt
|
||||
|
||||
# 【Passaggio facoltativo II】 Supporto a MOSS di Fudan
|
||||
python -m pip install -r request_llm/requirements_moss.txt
|
||||
git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # Si prega di notare che quando si esegue questa riga di codice, si deve essere nella directory radice del progetto
|
||||
|
||||
# 【Passaggio facoltativo III】 Assicurati che il file di configurazione config.py includa tutti i modelli desiderati, al momento tutti i modelli supportati sono i seguenti (i modelli della serie jittorllms attualmente supportano solo la soluzione docker):
|
||||
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
|
||||
```
|
||||
|
||||
</p>
|
||||
</details>
|
||||
|
||||
|
||||
|
||||
4. Esegui
|
||||
```sh
|
||||
python main.py
|
||||
```

5. Plugin di test delle funzioni
|
||||
```
|
||||
- Funzione plugin di test (chiede a GPT che cosa è successo oggi nella storia); puoi usare questa funzione come modello per implementare funzionalità più complesse
|
||||
Clicca su "[Demo del plugin di funzione] Oggi nella storia"
|
||||
```
|
||||
|
||||
## Installazione - Metodo 2: Utilizzo di Docker
|
||||
|
||||
1. Solo ChatGPT (consigliato per la maggior parte delle persone)
|
||||
|
||||
``` sh
|
||||
git clone https://github.com/binary-husky/gpt_academic.git # scarica il progetto
|
||||
cd gpt_academic # entra nel percorso
|
||||
nano config.py # con un qualsiasi editor di testo, modifica config.py configurando "Proxy", "API_KEY" e "WEB_PORT" (ad esempio 50923)
|
||||
docker build -t gpt-academic . # installa
|
||||
|
||||
#(ultimo passaggio - selezione 1) In un ambiente Linux, utilizzare '--net=host' è più conveniente e veloce
|
||||
docker run --rm -it --net=host gpt-academic
|
||||
#(ultimo passaggio - selezione 2) In un ambiente MacOS/Windows, l'opzione -p può essere utilizzata per esporre la porta del contenitore (ad es. 50923) alla porta della macchina
|
||||
docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
|
||||
```
|
||||
|
||||
2. ChatGPT + ChatGLM + MOSS (richiede familiarità con Docker)
|
||||
|
||||
``` sh
|
||||
# Modifica docker-compose.yml, elimina i piani 1 e 3, mantieni il piano 2. Modifica la configurazione del piano 2 in docker-compose.yml, si prega di fare riferimento alle relative annotazioni
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
3. ChatGPT + LLAMA + Pangu + RWKV (richiede familiarità con Docker)
|
||||
|
||||
``` sh
|
||||
# Modifica docker-compose.yml, elimina i piani 1 e 2, mantieni il piano 3. Modifica la configurazione del piano 3 in docker-compose.yml, si prega di fare riferimento alle relative annotazioni
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
|
||||
## Installazione - Metodo 3: Altre modalità di distribuzione
|
||||
|
||||
1. Come utilizzare un URL di reindirizzamento / l'API cloud di Microsoft Azure
|
||||
Configura API_URL_REDIRECT seguendo le istruzioni nel file `config.py`.
|
||||
|
||||
2. Distribuzione su un server cloud remoto (richiede conoscenze ed esperienza di server cloud)
|
||||
Si prega di visitare la [wiki di distribuzione-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
|
||||
|
||||
3. Utilizzo di WSL2 (Windows Subsystem for Linux)
|
||||
Si prega di visitare la [wiki di distribuzione-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
|
||||
|
||||
4. Come far funzionare ChatGPT in un sotto-percorso dell'URL (ad es. `http://localhost/subpath`)
|
||||
Si prega di visitare le [Istruzioni per l'esecuzione con FastAPI](docs/WithFastapi.md)
|
||||
|
||||
5. Utilizzo di docker-compose per l'esecuzione
|
||||
Si prega di leggere il file docker-compose.yml e seguire le istruzioni fornite.
|
||||
|
||||
---
|
||||
# Uso avanzato
|
||||
## Personalizzazione dei pulsanti / Plugin di funzione personalizzati
|
||||
|
||||
1. Personalizzazione dei pulsanti (scorciatoie accademiche)
|
||||
Apri `core_functional.py` con qualsiasi editor di testo e aggiungi la voce seguente, quindi riavvia il programma (se il pulsante è già stato aggiunto con successo e visibile, il prefisso e il suffisso supportano la modifica in tempo reale, senza bisogno di riavviare il programma).
|
||||
|
||||
ad esempio
|
||||
```
|
||||
"超级英译中": {
|
||||
# Prefisso, verrà aggiunto prima del tuo input. Ad esempio, descrivi la tua richiesta, come tradurre, spiegare il codice, correggere errori, ecc.
|
||||
"Prefix": "Per favore traduci questo testo in Cinese, e poi spiega tutti i termini tecnici nel testo con una tabella markdown:\n\n",
|
||||
|
||||
# Suffisso, verrà aggiunto dopo il tuo input. Ad esempio, con il prefisso puoi circondare il tuo input con le virgolette.
|
||||
"Suffix": "",
|
||||
},
|
||||
```
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
|
||||
</div>
|
||||
|
||||
2. Plugin di funzione personalizzati
|
||||
|
||||
Scrivi plugin di funzione personalizzati ed esegui tutte le attività che desideri, anche quelle a cui non avevi mai pensato.
|
||||
La difficoltà di scrittura e debug dei plugin del nostro progetto è molto bassa. Se si dispone di una certa conoscenza di base di Python, è possibile realizzare la propria funzione del plugin seguendo il nostro modello. Per maggiori dettagli, consultare la [guida al plugin per funzioni](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
|
||||
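A titolo puramente indicativo, lo scheletro di un plugin potrebbe assomigliare allo schizzo seguente (ipotetico: i nomi importati e la firma della funzione sono assunzioni da verificare nella guida ai plugin linkata sopra e nel modello presente in `crazy_functions`):

```
from toolbox import CatchException, update_ui  # nomi ipotetici: verificare nella guida ai plugin

@CatchException
def plugin_di_esempio(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # txt: testo nell'area di input; chatbot e history: stato della conversazione mostrato nell'interfaccia
    chatbot.append((txt, "Risposta di esempio generata dal plugin."))
    yield from update_ui(chatbot=chatbot, history=history)  # aggiorna l'interfaccia Gradio
```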
|
||||
---
|
||||
# Ultimo aggiornamento
|
||||
## Nuove funzionalità dinamiche
|
||||
|
||||
1. Funzionalità di salvataggio della conversazione. Nell'area dei plugin della funzione, fare clic su "Salva la conversazione corrente" per salvare la conversazione corrente come file html leggibile e ripristinabile, inoltre, nell'area dei plugin della funzione (menu a discesa), fare clic su "Carica la cronologia della conversazione archiviata" per ripristinare la conversazione precedente. Suggerimento: fare clic su "Carica la cronologia della conversazione archiviata" senza specificare il file consente di visualizzare la cache degli archivi html di cronologia, fare clic su "Elimina tutti i record di cronologia delle conversazioni locali" per eliminare tutte le cache degli archivi html.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/235222390-24a9acc0-680f-49f5-bc81-2f3161f1e049.png" width="500" >
|
||||
</div>
|
||||
|
||||
2. Generazione di rapporti. La maggior parte dei plugin genera un rapporto di lavoro dopo l'esecuzione.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
|
||||
</div>
|
||||
|
||||
3. Progettazione modulare delle funzioni, semplici interfacce ma in grado di supportare potenti funzionalità.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
|
||||
</div>
|
||||
|
||||
4. Questo è un progetto open source che può "tradursi da solo".
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="500" >
|
||||
</div>
|
||||
|
||||
5. Tradurre altri progetti open source è semplice.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="500" >
|
||||
</div>
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="500" >
|
||||
</div>
|
||||
|
||||
6. Piccola funzione decorativa per [live2d](https://github.com/fghrsh/live2d_demo) (disattivata per impostazione predefinita, è necessario modificare `config.py`).
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/236432361-67739153-73e8-43fe-8111-b61296edabd9.png" width="500" >
|
||||
</div>
|
||||
|
||||
7. Supporto del grande modello linguistico MOSS
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/236639178-92836f37-13af-4fdd-984d-b4450fe30336.png" width="500" >
|
||||
</div>
|
||||
|
||||
8. Generazione di immagini OpenAI
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/bc7ab234-ad90-48a0-8d62-f703d9e74665" width="500" >
|
||||
</div>
|
||||
|
||||
9. Analisi e sintesi audio OpenAI
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/709ccf95-3aee-498a-934a-e1c22d3d5d5b" width="500" >
|
||||
</div>
|
||||
|
||||
10. Verifica completa dei testi in LaTeX
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/651ccd98-02c9-4464-91e1-77a6b7d1b033" width="500" >
|
||||
</div>
|
||||
|
||||
|
||||
## Versione:
|
||||
- versione 3.5(Todo): utilizzo del linguaggio naturale per chiamare tutti i plugin di funzioni del progetto (alta priorità)
|
||||
- versione 3.4(Todo): supporto multi-threading per il grande modello linguistico locale Chatglm
|
||||
- versione 3.3: +funzionalità di sintesi delle informazioni su Internet
|
||||
- versione 3.2: i plugin di funzioni supportano più interfacce dei parametri (funzionalità di salvataggio della conversazione, lettura del codice in qualsiasi lingua + richiesta simultanea di qualsiasi combinazione di LLM)
|
||||
- versione 3.1: supporto per interrogare contemporaneamente più modelli gpt! Supporto api2d, bilanciamento del carico per più apikey
|
||||
- versione 3.0: supporto per Chatglm e altri piccoli LLM
|
||||
- versione 2.6: ristrutturazione della struttura del plugin, miglioramento dell'interattività, aggiunta di più plugin
|
||||
- versione 2.5: auto-aggiornamento, risoluzione del problema di testo troppo lungo e overflow del token durante la sintesi di grandi progetti di ingegneria
|
||||
- versione 2.4: (1) funzionalità di traduzione dell'intero documento in formato PDF aggiunta; (2) funzionalità di scambio dell'area di input aggiunta; (3) opzione di layout verticale aggiunta; (4) ottimizzazione della funzione di plugin multi-threading.
|
||||
- versione 2.3: miglioramento dell'interattività multi-threading
|
||||
- versione 2.2: i plugin di funzioni supportano l'hot-reload
|
||||
- versione 2.1: layout ripiegabile
|
||||
- versione 2.0: introduzione di plugin di funzioni modulari
|
||||
- versione 1.0: funzione di base

gpt_academic sviluppatori gruppo QQ-2: 610599535
|
||||
|
||||
- Problemi noti
|
||||
- Alcuni plugin di traduzione del browser interferiscono con l'esecuzione del frontend di questo software
|
||||
- La versione di gradio troppo alta o troppo bassa può causare diversi malfunzionamenti
|
||||
|
||||
## Riferimenti e apprendimento
|
||||
|
||||
```
|
||||
Il codice si ispira al design di molti altri progetti eccellenti, in particolare:
|
||||
|
||||
# Progetto 1: ChatGLM-6B di Tsinghua:
|
||||
https://github.com/THUDM/ChatGLM-6B
|
||||
|
||||
# Progetto 2: JittorLLMs di Tsinghua:
|
||||
https://github.com/Jittor/JittorLLMs
|
||||
|
||||
# Progetto 3: Edge-GPT:
|
||||
https://github.com/acheong08/EdgeGPT
|
||||
|
||||
# Progetto 4: ChuanhuChatGPT:
|
||||
https://github.com/GaiZhenbiao/ChuanhuChatGPT
|
||||
|
||||
# Progetto 5: ChatPaper:
|
||||
https://github.com/kaixindelele/ChatPaper
|
||||
|
||||
# Altro:
|
||||
https://github.com/gradio-app/gradio
|
||||
https://github.com/fghrsh/live2d_demo
|
||||
```
|
||||
270
docs/README.md.Korean.md
Normal file
@@ -0,0 +1,270 @@
|
||||
> **노트**
|
||||
>
|
||||
> 의존성을 설치할 때는 반드시 requirements.txt에서 **지정된 버전**을 엄격하게 선택하십시오.
|
||||
>
|
||||
> `pip install -r requirements.txt`
|
||||
|
||||
# <img src="docs/logo.png" width="40" > GPT 학술 최적화 (GPT Academic)
|
||||
|
||||
**이 프로젝트가 마음에 드신다면 Star를 주세요. 추가로 유용한 학술 단축키나 기능 플러그인이 있다면 이슈나 pull request를 남기세요. 이 프로젝트에 대한 [영어 |](docs/README_EN.md)[일본어 |](docs/README_JP.md)[한국어 |](https://github.com/mldljyh/ko_gpt_academic)[러시아어 |](docs/README_RS.md)[프랑스어](docs/README_FR.md)로 된 README도 있습니다.
|
||||
GPT를 이용하여 프로젝트를 임의의 언어로 번역하려면 [`multi_language.py`](multi_language.py)를 읽고 실행하십시오. (실험적)
|
||||
|
||||
> **노트**
|
||||
>
|
||||
> 1. 파일을 읽기 위해 **빨간색**으로 표시된 기능 플러그인 (버튼) 만 지원됩니다. 일부 플러그인은 플러그인 영역의 **드롭다운 메뉴**에 있습니다. 또한 새로운 플러그인은 **가장 높은 우선순위**로 환영하며 처리합니다!
|
||||
>
|
||||
> 2. 이 프로젝트의 각 파일의 기능을 [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)에서 자세히 설명합니다. 버전이 업데이트 됨에 따라 관련된 기능 플러그인을 클릭하고 GPT를 호출하여 프로젝트의 자체 분석 보고서를 다시 생성할 수도 있습니다. 자주 묻는 질문은 [`위키`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)에서 볼 수 있습니다. [설치 방법](#installation).
|
||||
>
|
||||
> 3. 이 프로젝트는 국내 언어 모델 chatglm과 RWKV, 판고 등의 시도와 호환 가능합니다. 여러 개의 api-key를 지원하며 설정 파일에 "API_KEY="openai-key1,openai-key2,api2d-key3""와 같이 작성할 수 있습니다. `API_KEY`를 임시로 변경해야하는 경우 입력 영역에 임시 `API_KEY`를 입력 한 후 엔터 키를 누르면 즉시 적용됩니다.
|
||||
|
||||
<div align="center">
|
||||
|
||||
기능 | 설명
|
||||
--- | ---
|
||||
원클릭 윤문 | 원클릭 윤문 및 논문 문법 오류 찾기 지원
|
||||
원클릭 한-영 번역 | 원클릭 한-영 번역 지원
|
||||
코드 설명 | 코드 표시, 코드 설명, 코드 생성, 코드에 주석 추가
|
||||
[사용자 정의 바로 가기 키](https://www.bilibili.com/video/BV14s4y1E7jN) | 사용자 정의 바로 가기 키 지원
|
||||
모듈식 설계 | 강력한[함수 플러그인](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions) 지원, 플러그인이 [램 업데이트](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)를 지원합니다.
|
||||
[자체 프로그램 분석](https://www.bilibili.com/video/BV1cj411A7VW) | [함수 플러그인] 원클릭으로 이 프로젝트의 소스 코드를 이해하는 기능을 제공
|
||||
[프로그램 분석](https://www.bilibili.com/video/BV1cj411A7VW) | [함수 플러그인] 프로젝트 트리를 분석할 수 있습니다 (Python/C/C++/Java/Lua/...)
|
||||
논문 읽기, 번역 | [함수 플러그인] LaTex/PDF 논문의 전문을 읽고 요약을 생성합니다.
|
||||
LaTeX 텍스트[번역](https://www.bilibili.com/video/BV1nk4y1Y7Js/), [원 키워드](https://www.bilibili.com/video/BV1FT411H7c5/) | [함수 플러그인] LaTeX 논문의 번역 또는 개량을 위해 일련의 모드를 번역할 수 있습니다.
|
||||
대량의 주석 생성 | [함수 플러그인] 함수 코멘트를 대량으로 생성할 수 있습니다.
|
||||
Markdown 한-영 번역 | [함수 플러그인] 위의 5 종 언어의 [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)를 볼 수 있습니다.
|
||||
chat 분석 보고서 생성 | [함수 플러그인] 수행 후 요약 보고서를 자동으로 생성합니다.
|
||||
[PDF 논문 번역](https://www.bilibili.com/video/BV1KT411x7Wn) | [함수 플러그인] PDF 논문이 제목 및 요약을 추출한 후 번역됩니다. (멀티 스레드)
|
||||
[Arxiv 도우미](https://www.bilibili.com/video/BV1LM4y1279X) | [함수 플러그인] Arxiv 논문 URL을 입력하면 요약을 번역하고 PDF를 다운로드 할 수 있습니다.
|
||||
[Google Scholar 통합 도우미](https://www.bilibili.com/video/BV19L411U7ia) | [함수 플러그인] Google Scholar 검색 페이지 URL을 제공하면 gpt가 [Related Works 작성](https://www.bilibili.com/video/BV1GP411U7Az/)을 도와줍니다.
|
||||
인터넷 정보 집계+GPT | [함수 플러그인] 먼저 GPT가 인터넷에서 정보를 수집하고 질문에 대답 할 수 있도록합니다. 정보가 절대적으로 구식이 아닙니다.
|
||||
수식/이미지/표 표시 | 수식을 tex 형식과 렌더링 형식으로 동시에 표시, 수식 및 코드 하이라이트 지원
|
||||
멀티 스레드 함수 플러그인 지원 | Chatgpt를 여러 요청에서 실행하여 [대량의 텍스트](https://www.bilibili.com/video/BV1FT411H7c5/) 또는 프로그램을 처리 할 수 있습니다.
|
||||
다크 그라디오 테마 시작 | 어둡게 주제를 변경하려면 브라우저 URL 끝에 ```/?__theme=dark```을 추가하면됩니다.
|
||||
[다중 LLM 모델](https://www.bilibili.com/video/BV1wT411p7yf) 지원, [API2D](https://api2d.com/) 인터페이스 지원됨 | GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), [Fudan MOSS](https://github.com/OpenLMLab/MOSS)가 모두 동시에 작동하는 것처럼 느낄 수 있습니다!
|
||||
LLM 모델 추가 및 [huggingface 배치](https://huggingface.co/spaces/qingxu98/gpt-academic) 지원 | 새 Bing 인터페이스 (New Bing) 추가, 칭화 [Jittorllms](https://github.com/Jittor/JittorLLMs), [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) 및 [盘古α](https://openi.org.cn/pangu/) 지원
|
||||
기타 새로운 기능 (이미지 생성 등) ... | 이 문서의 끝부분을 참조하세요. ...

</div>

- 모든 버튼은 functional.py를 동적으로 읽어와서 사용자 정의 기능을 자유롭게 추가할 수 있으며, 클립보드를 해제합니다.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
|
||||
</div>
|
||||
|
||||
- 검수/오타 교정
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
|
||||
</div>
|
||||
|
||||
- 출력에 수식이 포함되어 있으면 텍스와 렌더링의 형태로 동시에 표시되어 복사 및 읽기가 용이합니다.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
|
||||
</div>
|
||||
|
||||
- 프로젝트 코드를 볼 시간이 없습니까? 전체 프로젝트를 chatgpt에 직접 표시하십시오
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
|
||||
</div>
|
||||
|
||||
- 다양한 대형 언어 모델 범용 요청 (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
|
||||
</div>
|
||||
|
||||
---
|
||||
# 설치
|
||||
## Installation-Method 1: Run directly (Windows, Linux or MacOS)
|
||||
|
||||
1. 프로젝트 다운로드
|
||||
```sh
|
||||
git clone https://github.com/binary-husky/gpt_academic.git
|
||||
cd gpt_academic
|
||||
```
|
||||
|
||||
2. API_KEY 구성
|
||||
|
||||
`config.py`에서 API KEY 등 설정을 구성합니다. [특별한 네트워크 환경 설정](https://github.com/binary-husky/gpt_academic/issues/1) .
|
||||
|
||||
(P.S. 프로그램이 실행될 때, 이름이 `config_private.py`인 기밀 설정 파일이 있는지 우선적으로 확인하고 해당 설정으로 `config.py`의 동일한 이름의 설정을 덮어씁니다. 따라서 구성 읽기 논리를 이해할 수 있다면, `config.py` 옆에 `config_private.py`라는 새 구성 파일을 만들고 `config.py`의 구성을 `config_private.py`로 이동(복사)하는 것이 좋습니다. `config_private.py`는 git으로 관리되지 않으며 개인 정보를 더 안전하게 보호할 수 있습니다. P.S. 프로젝트는 또한 대부분의 옵션을 `환경 변수`를 통해 설정할 수 있으며, `docker-compose` 파일을 참조하여 환경 변수 작성 형식을 확인할 수 있습니다. 우선순위: `환경 변수` > `config_private.py` > `config.py`)
|
||||
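예를 들어, 최소한의 `config_private.py`는 다음과 같을 수 있습니다 (순수 예시입니다. `API_KEY`와 `WEB_PORT` 이외의 옵션 이름은 `config.py`에서 직접 확인하십시오):

```
# config_private.py — 예시 스케치이며, 전체 옵션 목록이 아닙니다
API_KEY = "openai-key1,openai-key2,api2d-key3"  # 위 설명처럼 여러 개의 키를 함께 쓸 수 있습니다
WEB_PORT = 50923                                # 예시 포트 (아래 Docker 섹션과 동일)
```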
|
||||
|
||||
3. 의존성 설치
|
||||
```sh
|
||||
# (I 선택: 기존 python 경험이 있다면) (python 버전 3.9 이상, 최신 버전이 좋습니다), 참고: 공식 pip 소스 또는 알리 pip 소스 사용, 일시적인 교체 방법: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
|
||||
python -m pip install -r requirements.txt
|
||||
|
||||
# (II 선택: Python에 익숙하지 않은 경우) anaconda 사용 방법은 비슷함(https://www.bilibili.com/video/BV1rc411W7Dr):
|
||||
conda create -n gptac_venv python=3.11 # anaconda 환경 만들기
|
||||
conda activate gptac_venv # anaconda 환경 활성화
|
||||
python -m pip install -r requirements.txt # 이 단계도 pip install의 단계와 동일합니다.
|
||||
```
|
||||
|
||||
<details><summary>추가지원을 위해 Tsinghua ChatGLM / Fudan MOSS를 사용해야하는 경우 지원을 클릭하여 이 부분을 확장하세요.</summary>
|
||||
<p>
|
||||
|
||||
[Tsinghua ChatGLM] / [Fudan MOSS]를 백엔드로 사용하려면 추가적인 종속성을 설치해야합니다 (전제 조건 : Python을 이해하고 Pytorch를 사용한 적이 있으며, 컴퓨터가 충분히 강력한 경우) :
|
||||
```sh
|
||||
# [선택 사항 I] Tsinghua ChatGLM을 지원합니다. Tsinghua ChatGLM에 대한 참고사항 : "Call ChatGLM fail cannot load ChatGLM parameters normally" 오류 발생시 다음 참조:
|
||||
# 1 : 기본 설치된 것들은 torch + cpu 버전입니다. cuda를 사용하려면 torch를 제거한 다음 torch + cuda를 다시 설치해야합니다.
|
||||
# 2 : 모델을 로드할 수 없는 기계 구성 때문에, AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)를
|
||||
# AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)로 변경합니다.
|
||||
python -m pip install -r request_llm/requirements_chatglm.txt
|
||||
|
||||
# [선택 사항 II] Fudan MOSS 지원
|
||||
python -m pip install -r request_llm/requirements_moss.txt
|
||||
git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # 다음 코드 줄을 실행할 때 프로젝트 루트 경로에 있어야합니다.
|
||||
|
||||
# [선택 사항III] AVAIL_LLM_MODELS config.py 구성 파일에 기대하는 모델이 포함되어 있는지 확인하십시오.
|
||||
# 현재 지원되는 전체 모델 :
|
||||
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
|
||||
```
|
||||
|
||||
</p>
|
||||
</details>
|
||||
|
||||
|
||||
|
||||
4. 실행
|
||||
```sh
|
||||
python main.py
|
||||
```

5. 테스트 함수 플러그인
|
||||
```
|
||||
- 테스트 함수 플러그인 템플릿 함수 (GPT에게 오늘의 역사에서 무슨 일이 일어났는지 대답하도록 요청)를 구현하는 데 사용할 수 있습니다. 이 함수를 기반으로 더 복잡한 기능을 구현할 수 있습니다.
|
||||
"[함수 플러그인 템플릿 데모] 오늘의 역사"를 클릭하세요.
|
||||
```
|
||||
|
||||
## 설치 - 방법 2 : 도커 사용
|
||||
|
||||
1. ChatGPT 만 (대부분의 사람들이 선택하는 것을 권장합니다.)
|
||||
|
||||
``` sh
|
||||
git clone https://github.com/binary-husky/gpt_academic.git # 다운로드
|
||||
cd gpt_academic # 경로 이동
|
||||
nano config.py # 아무 텍스트 에디터로 config.py를 열고 "Proxy","API_KEY","WEB_PORT" (예 : 50923) 등을 구성합니다.
|
||||
docker build -t gpt-academic . # 설치
|
||||
|
||||
#(마지막 단계-1 선택) Linux 환경에서는 --net=host를 사용하면 더 편리합니다.
|
||||
docker run --rm -it --net=host gpt-academic
|
||||
#(마지막 단계-2 선택) macOS / windows 환경에서는 -p 옵션을 사용하여 컨테이너의 포트 (예 : 50923)를 호스트의 포트로 노출해야합니다.
|
||||
docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
|
||||
```
|
||||
|
||||
2. ChatGPT + ChatGLM + MOSS (Docker에 익숙해야합니다.)
|
||||
|
||||
``` sh
|
||||
#docker-compose.yml을 수정하여 계획 1 및 계획 3을 삭제하고 계획 2를 유지합니다. docker-compose.yml에서 계획 2의 구성을 수정하면 됩니다. 주석을 참조하십시오.
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
3. ChatGPT + LLAMA + Pangu + RWKV (Docker에 익숙해야합니다.)
|
||||
``` sh
|
||||
#docker-compose.yml을 수정하여 계획 1 및 계획 2을 삭제하고 계획 3을 유지합니다. docker-compose.yml에서 계획 3의 구성을 수정하면 됩니다. 주석을 참조하십시오.
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
|
||||
## 설치 - 방법 3 : 다른 배치 방법
|
||||
|
||||
1. 리버스 프록시 URL / Microsoft Azure API 사용 방법
|
||||
API_URL_REDIRECT를 `config.py`에 따라 구성하면됩니다.
|
||||
|
||||
2. 원격 클라우드 서버 배치 (클라우드 서버 지식과 경험이 필요합니다.)
|
||||
[배치위키-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)에 방문하십시오.
|
||||
|
||||
3. WSL2 사용 (Windows Subsystem for Linux 하위 시스템)
|
||||
[배치 위키-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)에 방문하십시오.
|
||||
|
||||
4. 2차 URL (예: `http://localhost/subpath`)에서 실행하는 방법
|
||||
[FastAPI 실행 설명서](docs/WithFastapi.md)를 참조하십시오.
|
||||
|
||||
5. docker-compose 실행
|
||||
docker-compose.yml을 읽은 후 지시 사항에 따라 작업하십시오.
|
||||
---
|
||||
# 고급 사용법
|
||||
## 사용자 정의 바로 가기 버튼 / 사용자 정의 함수 플러그인
|
||||
|
||||
1. 사용자 정의 바로 가기 버튼 (학술 바로 가기)
|
||||
임의의 텍스트 편집기로 `core_functional.py`를 열어 아래와 같은 항목을 추가한 다음 프로그램을 다시 시작하면 됩니다. (버튼이 이미 추가되어 보이는 경우, 접두사와 접미사는 실시간 수정이 가능하므로 프로그램을 다시 시작할 필요가 없습니다.)
|
||||
예 :
|
||||
```
|
||||
"超级英译中": {
|
||||
# 접두사. 당신이 요구하는 것을 설명하는 데 사용됩니다. 예를 들어 번역, 코드를 설명, 다듬기 등
|
||||
"Prefix": "下面翻译成中文,然后用一个 markdown 表格逐一解释文中出现的专有名词:\n\n",
|
||||
|
||||
# 접미사. 입력 내용 뒤에 추가됩니다. 예를 들어 접두사와 함께 사용하여 입력 내용을 따옴표로 묶을 수 있습니다.
|
||||
"Suffix": "",
|
||||
},
|
||||
```
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
|
||||
</div>
|
||||
|
||||
2. 사용자 지정 함수 플러그인
|
||||
강력한 함수 플러그인을 작성하여 원하는 작업을 수행하십시오.
|
||||
이 프로젝트의 플러그인 작성 및 디버깅 난이도는 매우 낮으며, 일부 파이썬 기본 지식만 있으면 제공된 템플릿을 모방하여 플러그인 기능을 구현할 수 있습니다. 자세한 내용은 [함수 플러그인 가이드](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)를 참조하십시오.
|
||||
---
|
||||
# 최신 업데이트
|
||||
## 새로운 기능 동향
|
||||
|
||||
1. 대화 저장 기능. 함수 플러그인 영역에서 '현재 대화 저장'을 호출하면 현재 대화를 읽을 수 있고 복원 가능한 HTML 파일로 저장할 수 있습니다. 또한 함수 플러그인 영역(드롭다운 메뉴)에서 '대화 기록 불러오기'를 호출하면 이전 대화를 복원할 수 있습니다. 팁: 파일을 지정하지 않고 '대화 기록 불러오기'를 클릭하면 기록된 HTML 캐시를 볼 수 있으며 '모든 로컬 대화 기록 삭제'를 클릭하면 모든 HTML 캐시를 삭제할 수 있습니다.
|
||||
|
||||
2. 보고서 생성. 대부분의 플러그인은 실행이 끝난 후 작업 보고서를 생성합니다.
|
||||
|
||||
3. 모듈화 기능 설계, 간단한 인터페이스로도 강력한 기능을 지원할 수 있습니다.
|
||||
|
||||
4. 자체 번역이 가능한 오픈 소스 프로젝트입니다.
|
||||
|
||||
5. 다른 오픈 소스 프로젝트를 번역하는 것은 어렵지 않습니다.
|
||||
|
||||
6. [live2d](https://github.com/fghrsh/live2d_demo) 장식 기능(기본적으로 비활성화되어 있으며 `config.py`를 수정해야 합니다.)
|
||||
|
||||
7. MOSS 대형 언어 모델 지원 추가
|
||||
|
||||
8. OpenAI 이미지 생성
|
||||
|
||||
9. OpenAI 음성 분석 및 요약
|
||||
|
||||
10. LaTeX 전체적인 교정 및 오류 수정
|
||||
|
||||
## 버전:
|
||||
- version 3.5 (TODO): 자연어를 사용하여 이 프로젝트의 모든 함수 플러그인을 호출하는 기능(우선순위 높음)
|
||||
- version 3.4(TODO): 로컬 대형 언어 모델의 다중 스레드 지원 향상
|
||||
- version 3.3: 인터넷 정보 종합 기능 추가
|
||||
- version 3.2: 함수 플러그인이 더 많은 인수 인터페이스를 지원합니다.(대화 저장 기능, 임의의 언어 코드 해석 및 동시에 임의의 LLM 조합을 확인하는 기능)
|
||||
- version 3.1: 여러 개의 GPT 모델에 대한 동시 쿼리 지원! api2d 지원, 여러 개의 apikey 로드 밸런싱 지원
|
||||
- version 3.0: chatglm 및 기타 소형 llm의 지원
|
||||
- version 2.6: 플러그인 구조를 재구성하여 상호 작용성을 향상시켰습니다. 더 많은 플러그인을 추가했습니다.
|
||||
- version 2.5: 자체 업데이트, 전체 프로젝트를 요약할 때 텍스트가 너무 길어지고 토큰이 오버플로우되는 문제를 해결했습니다.
|
||||
- version 2.4: (1) PDF 전체 번역 기능 추가; (2) 입력 영역 위치 전환 기능 추가; (3) 수직 레이아웃 옵션 추가; (4) 다중 스레드 함수 플러그인 최적화.
|
||||
- version 2.3: 다중 스레드 상호 작용성 강화
|
||||
- version 2.2: 함수 플러그인 핫 리로드 지원
|
||||
- version 2.1: 접는 레이아웃 지원
|
||||
- version 2.0: 모듈화 함수 플러그인 도입
|
||||
- version 1.0: 기본 기능
|
||||
|
||||
gpt_academic 개발자 QQ 그룹-2 : 610599535
|
||||
|
||||
- 알려진 문제
|
||||
- 일부 브라우저 번역 플러그인이이 소프트웨어의 프론트 엔드 작동 방식을 방해합니다.
|
||||
- gradio 버전이 너무 높거나 낮으면 여러 가지 이상이 발생할 수 있습니다.
|
||||
|
||||
## 참고 및 학습 자료
|
||||
|
||||
```
|
||||
많은 우수 프로젝트의 디자인을 참고했습니다. 주요 항목은 다음과 같습니다.
|
||||
|
||||
# 프로젝트 1 : Tsinghua ChatGLM-6B :
|
||||
https://github.com/THUDM/ChatGLM-6B
|
||||
|
||||
# 프로젝트 2 : Tsinghua JittorLLMs:
|
||||
https://github.com/Jittor/JittorLLMs
|
||||
|
||||
# 프로젝트 3 : Edge-GPT :
|
||||
https://github.com/acheong08/EdgeGPT
|
||||
|
||||
# 프로젝트 4 : ChuanhuChatGPT:
|
||||
https://github.com/GaiZhenbiao/ChuanhuChatGPT
|
||||
|
||||
# 프로젝트 5 : ChatPaper :
|
||||
https://github.com/kaixindelele/ChatPaper
|
||||
|
||||
# 더 많은 :
|
||||
https://github.com/gradio-app/gradio
|
||||
https://github.com/fghrsh/live2d_demo
|
||||
```
|
||||
324
docs/README.md.Portuguese.md
Normal file
@@ -0,0 +1,324 @@
|
||||
> **Nota**
|
||||
>
|
||||
> Ao instalar as dependências, por favor, selecione rigorosamente as versões **especificadas** no arquivo requirements.txt.
|
||||
>
|
||||
> `pip install -r requirements.txt`
|
||||
>
|
||||
|
||||
# <img src="logo.png" width="40" > Otimização acadêmica GPT (GPT Academic)
|
||||
|
||||
**Se você gostou deste projeto, por favor dê um Star. Se você criou atalhos acadêmicos mais úteis ou plugins funcionais, sinta-se livre para abrir uma issue ou pull request. Nós também temos um README em [Inglês|](README_EN.md)[日本語|](README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](README_RS.md)[Français](README_FR.md) traduzidos por este próprio projeto.
|
||||
Para traduzir este projeto para qualquer idioma com o GPT, leia e execute [`multi_language.py`](multi_language.py) (experimental).
|
||||
|
||||
> **Nota**
|
||||
>
|
||||
> 1. Por favor, preste atenção que somente os plugins de funções (botões) com a cor **vermelha** podem ler arquivos. Alguns plugins estão localizados no **menu suspenso** na área de plugins. Além disso, nós damos as boas-vindas com a **maior prioridade** e gerenciamos quaisquer novos plugins PR!
|
||||
>
|
||||
> 2. As funções de cada arquivo neste projeto são detalhadas em [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A), auto-análises do projeto geradas pelo GPT também estão podem ser chamadas a qualquer momento ao clicar nos plugins relacionados. As perguntas frequentes estão resumidas no [`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Instruções de Instalação](#installation).
|
||||
>
|
||||
> 3. Este projeto é compatível com e incentiva o uso de modelos de linguagem nacionais, como chatglm, RWKV, Pangu, etc. Suporta a coexistência de várias chaves de API e pode ser preenchido no arquivo de configuração como `API_KEY="openai-key1,openai-key2,api2d-key3"`. Quando precisar alterar temporariamente o `API_KEY`, basta digitar o `API_KEY` temporário na área de entrada e pressionar Enter para que ele entre em vigor.
|
||||
|
||||
<div align="center">
|
||||
|
||||
Funcionalidade | Descrição
|
||||
--- | ---
|
||||
Um clique de polimento | Suporte a um clique polimento, um clique encontrar erros de gramática no artigo
|
||||
Tradução chinês-inglês de um clique | Tradução chinês-inglês de um clique
|
||||
Explicação de código de um único clique | Exibir código, explicar código, gerar código, adicionar comentários ao código
|
||||
[Teclas de atalho personalizadas](https://www.bilibili.com/video/BV14s4y1E7jN) | Suporte a atalhos personalizados
|
||||
Projeto modular | Suporte para poderosos plugins[de função personalizada](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions), os plugins suportam[hot-reload](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
|
||||
[Análise automática do programa](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin de função][um clique para entender](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) o código-fonte do projeto
|
||||
[Análise do programa](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin de função] Um clique pode analisar a árvore de projetos do Python/C/C++/Java/Lua/...
|
||||
Leitura de artigos, [tradução](https://www.bilibili.com/video/BV1KT411x7Wn) de artigos | [Plugin de função] um clique para interpretar o resumo de artigos LaTeX/PDF e gerar resumo
|
||||
Tradução completa LATEX, polimento | [Plugin de função] Um clique para traduzir ou polir um artigo LATEX
|
||||
Geração em lote de comentários | [Plugin de função] Um clique gera comentários de função em lote
|
||||
[Tradução chinês-inglês](https://www.bilibili.com/video/BV1yo4y157jV/) markdown | [Plugin de função] Você viu o README em 5 linguagens acima?
|
||||
Relatório de análise de chat | [Plugin de função] Gera automaticamente um resumo após a execução
|
||||
[Funcionalidade de tradução de artigos completos em PDF](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugin de função] Extrai o título e o resumo do artigo PDF e traduz o artigo completo (multithread)
|
||||
Assistente arXiv | [Plugin de função] Insira o url do artigo arXiv para traduzir o resumo + baixar PDF
|
||||
Assistente de integração acadêmica do Google | [Plugin de função] Dê qualquer URL de página de pesquisa acadêmica do Google e deixe o GPT escrever[trabalhos relacionados](https://www.bilibili.com/video/BV1GP411U7Az/)
|
||||
Agregação de informações da Internet + GPT | [Plugin de função] Um clique para obter informações do GPT através da Internet e depois responde a perguntas para informações nunca ficarem desatualizadas
|
||||
Exibição de fórmulas/imagem/tabela | Pode exibir simultaneamente a forma de renderização e[TEX] das fórmulas, suporte a fórmulas e realce de código
|
||||
Suporte de plugins de várias linhas | Suporte a várias chamadas em linha do chatgpt, um clique para processamento[de massa de texto](https://www.bilibili.com/video/BV1FT411H7c5/) ou programa
|
||||
Tema gradio escuro | Adicione ``` /?__theme=dark``` ao final da url do navegador para ativar o tema escuro
|
||||
[Suporte para vários modelos LLM](https://www.bilibili.com/video/BV1wT411p7yf), suporte para a nova interface API2D | A sensação de ser atendido simultaneamente por GPT3.5, GPT4, [Chatglm THU](https://github.com/THUDM/ChatGLM-6B), [Moss Fudan](https://github.com/OpenLMLab/MOSS) deve ser ótima, certo?
|
||||
Mais modelos LLM incorporados, suporte para a implantação [huggingface](https://huggingface.co/spaces/qingxu98/gpt-academic) | Adição da interface Newbing (New Bing), suporte a [JittorLLMs](https://github.com/Jittor/JittorLLMs) da THU, além de LLaMA, RWKV e PanGu-α
|
||||
Mais recursos novos mostrados (geração de imagens, etc.) ... | Consulte o final deste documento ...
|
||||
|
||||
</div>
|
||||
|
||||
- Nova interface (Modifique a opção LAYOUT em `config.py` para alternar entre o layout esquerdo/direito e o layout superior/inferior)
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
|
||||
</div>

- All buttons are dynamically generated by reading functional.py, and you can add custom functions at will, liberating the clipboard
|
||||
|
||||
<div align="center">
|
||||
<img src = "https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700">
|
||||
</div>
|
||||
|
||||
- Proofreading/errors correction
|
||||
|
||||
|
||||
<div align="center">
|
||||
<img src = "https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700">
|
||||
</div>
|
||||
|
||||
- If the output contains formulas, it will be displayed in both tex and rendering format at the same time, which is convenient for copying and reading
|
||||
|
||||
|
||||
<div align="center">
|
||||
<img src = "https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700">
|
||||
</div>
|
||||
|
||||
- Don't want to read the project code? Just show the whole project to chatgpt
|
||||
|
||||
|
||||
<div align="center">
|
||||
<img src = "https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700">
|
||||
</div>
|
||||
|
||||
- Mix the use of multiple large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
|
||||
|
||||
|
||||
<div align="center">
|
||||
<img src = "https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700">
|
||||
</div>
|
||||
|
||||
---
|
||||
# Instalação
|
||||
## Installation-Method 1: Run directly (Windows, Linux or MacOS)
|
||||
|
||||
1. Download the project
|
||||
|
||||
```sh
|
||||
git clone https://github.com/binary-husky/gpt_academic.git
|
||||
cd gpt_academic
|
||||
```
|
||||
|
||||
2. Configure the API KEY
|
||||
|
||||
In `config.py`, configure API KEY and other settings, [Special Network Environment Settings](https://github.com/binary-husky/gpt_academic/issues/1).
|
||||
|
||||
(P.S. When the program runs, it will first check whether there is a private configuration file named `config_private.py`, and use the configuration in it to cover the configuration with the same name in `config.py`. Therefore, if you can understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py`, and transfer (copy) the configuration in `config.py` to `config_private.py`. `config_private.py` is not controlled by git and can make your privacy information more secure. P.S. The project also supports configuring most options through `environment variables`. The writing format of environment variables is referenced to the `docker-compose` file. Reading priority: `environment variable` > `config_private.py` > `config.py`)
|
||||
|
||||
|
||||
3. Install dependencies
|
||||
|
||||
```sh
|
||||
# (Option I: for those familiar with python)(python version is 3.9 or above, the newer the better), note: use the official pip source or the Alibaba pip source. Temporary solution for changing source: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
|
||||
python -m pip install -r requirements.txt
|
||||
|
||||
# (Option II: for those who are unfamiliar with python) use anaconda, the steps are also similar (https://www.bilibili.com/video/BV1rc411W7Dr):
|
||||
conda create -n gptac_venv python=3.11 # create anaconda environment
|
||||
conda activate gptac_venv # activate anaconda environment
|
||||
python -m pip install -r requirements.txt # This step is the same as the pip installation step
|
||||
```
|
||||
|
||||
<details><summary>If you need to support Tsinghua ChatGLM / Fudan MOSS as the backend, click to expand here</summary>
|
||||
<p>
|
||||
|
||||
[Optional Step] If you need to support Tsinghua ChatGLM / Fudan MOSS as the backend, you need to install more dependencies (prerequisite: familiar with Python + used Pytorch + computer configuration is strong):
|
||||
```sh
|
||||
# 【Optional Step I】support Tsinghua ChatGLM。Tsinghua ChatGLM Note: If you encounter a "Call ChatGLM fails cannot load ChatGLM parameters normally" error, refer to the following: 1: The default installed is torch+cpu version, and using cuda requires uninstalling torch and reinstalling torch+cuda; 2: If the model cannot be loaded due to insufficient computer configuration, you can modify the model accuracy in request_llm/bridge_chatglm.py and change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
|
||||
python -m pip install -r request_llm/requirements_chatglm.txt
|
||||
|
||||
# 【Optional Step II】support Fudan MOSS
|
||||
python -m pip install -r request_llm/requirements_moss.txt
|
||||
git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # Note: When executing this line of code, you must be in the project root path
|
||||
|
||||
# 【Optional Step III】Make sure that the AVAIL_LLM_MODELS in the config.py configuration file contains the expected model. Currently, all supported models are as follows (jittorllms series currently only supports docker solutions):
|
||||
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
|
||||
```
|
||||
|
||||
</p>
|
||||
</details>
|
||||
|
||||
|
||||
4. Run
|
||||
|
||||
```sh
|
||||
python main.py
|
||||
```

5. Plugin de Função de Teste
|
||||
```
|
||||
- Função de modelo de plug-in de teste (exige que o GPT responda ao que aconteceu hoje na história), você pode usar esta função como modelo para implementar funções mais complexas
|
||||
Clique em "[Função de plug-in de modelo de demonstração] O que aconteceu hoje na história?"
|
||||
```
|
||||
|
||||
## Instalação - Método 2: Usando o Docker
|
||||
|
||||
1. Apenas ChatGPT (recomendado para a maioria das pessoas)
|
||||
|
||||
``` sh
|
||||
git clone https://github.com/binary-husky/gpt_academic.git # Baixar o projeto
|
||||
cd gpt_academic # Entrar no caminho
|
||||
nano config.py # Editar config.py com qualquer editor de texto configurando "Proxy", "API_KEY" e "WEB_PORT" (por exemplo, 50923), etc.
|
||||
docker build -t gpt-academic . # Instale
|
||||
|
||||
# (Última etapa - escolha 1) Dentro do ambiente Linux, é mais fácil e rápido usar `--net=host`
|
||||
docker run --rm -it --net=host gpt-academic
|
||||
# (Última etapa - escolha 2) Em ambientes macOS/windows, você só pode usar a opção -p para expor a porta do contêiner (por exemplo, 50923) para a porta no host
|
||||
docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
|
||||
```
|
||||
|
||||
2. ChatGPT + ChatGLM + MOSS (conhecimento de Docker necessário)
|
||||
|
||||
``` sh
|
||||
# Edite o arquivo docker-compose.yml, remova as soluções 1 e 3, mantenha a solução 2, e siga as instruções nos comentários do arquivo
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
3. ChatGPT + LLAMA + Pangu + RWKV (conhecimento de Docker necessário)
|
||||
``` sh
|
||||
# Edite o arquivo docker-compose.yml, remova as soluções 1 e 2, mantenha a solução 3, e siga as instruções nos comentários do arquivo
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
|
||||
## Instalação - Método 3: Outros Métodos de Implantação
|
||||
|
||||
1. Como usar URLs de proxy inverso / a API da Microsoft Azure
|
||||
Basta configurar o API_URL_REDIRECT de acordo com as instruções em `config.py`.
|
||||
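Um esboço meramente ilustrativo (o formato exato do mapeamento está documentado em `config.py`; a URL de proxy abaixo é apenas um exemplo hipotético):

```
# em config.py ou config_private.py — esboço ilustrativo
API_URL_REDIRECT = {
    "https://api.openai.com/v1/chat/completions": "https://seu-proxy-reverso.example.com/v1/chat/completions"
}
```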
|
||||
2. Implantação em servidores em nuvem remotos (requer conhecimento e experiência de servidores em nuvem)
|
||||
Acesse [Wiki de implementação remota do servidor em nuvem](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
|
||||
|
||||
3. Usando a WSL2 (sub-sistema do Windows para Linux)
|
||||
Acesse [Wiki da implantação da WSL2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
|
||||
|
||||
4. Como executar em um subdiretório (ex. `http://localhost/subpath`)
|
||||
Acesse [Instruções de execução FastAPI](docs/WithFastapi.md)
|
||||
|
||||
5. Execute usando o docker-compose
|
||||
Leia o arquivo docker-compose.yml e siga as instruções.
|
||||
|
||||
# Uso Avançado
|
||||
## Customize novos botões de acesso rápido / plug-ins de função personalizados
|
||||
|
||||
1. Personalizar novos botões de acesso rápido (atalhos acadêmicos)
|
||||
Abra `core_functional.py` em qualquer editor de texto e adicione os seguintes itens e reinicie o programa (Se o botão já foi adicionado e pode ser visto, prefixos e sufixos são compatíveis com modificações em tempo real e não exigem reinício do programa para ter efeito.)
|
||||
Por exemplo,
|
||||
```
|
||||
"Super Eng:": {
|
||||
# Prefixo, será adicionado antes da sua entrada. Por exemplo, para descrever sua solicitação, como tradução, explicação de código, polimento, etc.
|
||||
"Prefix": "Por favor, traduza o seguinte conteúdo para chinês e use uma tabela em Markdown para explicar termos próprios no texto: \n \n",
|
||||
|
||||
# Sufixo, será adicionado após a sua entrada. Por exemplo, emparelhado com o prefixo, pode colocar sua entrada entre aspas.
|
||||
"Suffix": "",
|
||||
},
|
||||
```
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
|
||||
</div>
|
||||
|
||||
2. Personalizar plug-ins de função
|
||||
|
||||
Escreva plug-ins de função poderosos para executar tarefas que você deseja e não pensava possível.
|
||||
A dificuldade geral de escrever e depurar plug-ins neste projeto é baixa e, se você tem algum conhecimento básico de python, pode implementar suas próprias funções sobre o modelo que fornecemos.
|
||||
Para mais detalhes, consulte o [Guia do plug-in de função.](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
|
||||
|
||||
---
|
||||
# Última atualização
|
||||
## Novas funções dinâmicas.
|
||||
|
||||
1. Função de salvamento de diálogo. Ao chamar o plug-in de função "Salvar diálogo atual", é possível salvar o diálogo atual em um arquivo html legível e reversível. Além disso, ao chamar o plug-in de função "Carregar arquivo de histórico de diálogo" no menu suspenso da área de plug-in, é possível restaurar uma conversa anterior. Dica: clicar em "Carregar arquivo de histórico de diálogo" sem especificar um arquivo permite visualizar o cache do arquivo html de histórico. Clicar em "Excluir todo o registro de histórico de diálogo local" permite excluir todo o cache de arquivo html.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/235222390-24a9acc0-680f-49f5-bc81-2f3161f1e049.png" width="500" >
|
||||
</div>
|
||||
|
||||
|
||||
2. Geração de relatório. A maioria dos plug-ins gera um relatório de trabalho após a conclusão da execução.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
|
||||
</div>
|
||||
|
||||
3. Design modular de funcionalidades, com interfaces simples, mas suporte a recursos poderosos
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
|
||||
</div>
|
||||
|
||||
4. Este é um projeto de código aberto que é capaz de "auto-traduzir-se".
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="500" >
|
||||
</div>
|
||||
|
||||
5. Translating other open-source projects is straightforward.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="500" >
|
||||
</div>
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="500" >
|
||||
</div>
|
||||
|
||||
6. Decorative features for [live2d](https://github.com/fghrsh/live2d_demo) (disabled by default; requires modifying `config.py`).
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/236432361-67739153-73e8-43fe-8111-b61296edabd9.png" width="500" >
|
||||
</div>
|
||||
|
||||
7. Support for the MOSS large language model.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/236639178-92836f37-13af-4fdd-984d-b4450fe30336.png" width="500" >
|
||||
</div>
|
||||
|
||||
8. OpenAI image generation.
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/bc7ab234-ad90-48a0-8d62-f703d9e74665" width="500" >
|
||||
</div>
|
||||
|
||||
9. OpenAI audio parsing and summarization.
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/709ccf95-3aee-498a-934a-e1c22d3d5d5b" width="500" >
|
||||
</div>
|
||||
|
||||
10. Full-text LaTeX proofreading and error correction.
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/651ccd98-02c9-4464-91e1-77a6b7d1b033" width="500" >
|
||||
</div>
|
||||
|
||||
## Versions:
- Version 3.5 (todo): Use natural language to call every function of this project (high priority)
- Version 3.4 (todo): Improve multithreading support for the local chatglm model
- Version 3.3: + Built-in internet information functions
- Version 3.2: Function plugins support more parameter interfaces (conversation saving, interpreting code in arbitrary languages, querying arbitrary LLM combinations at the same time)
- Version 3.1: Support for querying multiple GPT models simultaneously! Support for api2d and load balancing across multiple API keys
- Version 3.0: Support for chatglm and other small LLMs
- Version 2.6: Refactored plugin structure, improved interactivity, added more plugins
- Version 2.5: Self-updating; solves the problem of overly long text and token overflow when summarizing large projects
- Version 2.4: (1) Added full-text PDF translation; (2) added the option to switch the input area's position; (3) added a vertical layout option; (4) optimized multithreaded function plugins
- Version 2.3: Improved multithreading interactivity
- Version 2.2: Support for hot reloading of plugins
- Version 2.1: Collapsible layout
- Version 2.0: Introduction of modular function plugins
- Version 1.0: Basic functions

gpt_academic developer QQ group-2: 610599535

- Known issues
  - Translation extensions in some browsers may interfere with the front end of this software
  - A Gradio version that is too high or too low can cause various errors
|
||||
|
||||
## References and Learning

```
The code references designs from many excellent projects, mainly:

# Project 1: Tsinghua ChatGLM-6B:
https://github.com/THUDM/ChatGLM-6B

# Project 2: Tsinghua JittorLLMs:
https://github.com/Jittor/JittorLLMs

# Project 3: Edge-GPT:
https://github.com/acheong08/EdgeGPT

# Project 4: ChuanhuChatGPT:
https://github.com/GaiZhenbiao/ChuanhuChatGPT

# Project 5: ChatPaper:
https://github.com/kaixindelele/ChatPaper

# More:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo
```
|
||||
|
||||
>
|
||||
> This English README is automatically generated by the markdown translation plugin in this project, and may not be 100% correct.
|
||||
>
|
||||
|
||||
# <img src="logo.png" width="40" > ChatGPT Academic Optimization
|
||||
|
||||
**If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a [README in English](docs/README_EN.md) translated by this project itself.**
|
||||
|
||||
> **Note**
|
||||
>
|
||||
> 1. Please note that only **functions with red color** support reading files; some functions are located in the **dropdown menu** of plugins. Additionally, we welcome and prioritize any new plugin PRs with the **highest priority**!
|
||||
>
|
||||
> 2. The functionality of each file in this project is detailed in the self-translation report [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of the project. With the iteration of the version, you can also click on the relevant function plugins at any time to call GPT to regenerate the self-analysis report of the project. The FAQ summary is in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98) section.
|
||||
> When installing dependencies, **please strictly select the versions** specified in requirements.txt.
|
||||
>
|
||||
> `pip install -r requirements.txt`
|
||||
|
||||
# GPT Academic Optimization (GPT Academic)
|
||||
|
||||
**If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request.
|
||||
To translate this project to an arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).**
|
||||
|
||||
> Note:
|
||||
>
|
||||
> 1. Please note that only the function plugins (buttons) marked in **red** support reading files. Some plugins are in the **drop-down menu** in the plugin area. We welcome and process any new plugins with the **highest priority**!
|
||||
> 2. The function of each file in this project is detailed in the self-translation analysis [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). With version iteration, you can also click on related function plugins at any time to call GPT to regenerate the project's self-analysis report. Common questions are summarized in the [`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installation method](#installation).
|
||||
> 3. This project is compatible with and encourages trying domestic large language models such as chatglm, RWKV, Pangu, etc. Multiple API keys are supported and can be filled in the configuration file like `API_KEY="openai-key1,openai-key2,api2d-key3"`. When temporarily changing `API_KEY`, enter the temporary `API_KEY` in the input area and press enter to submit, which will take effect.
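For instance, a `config.py` fragment carrying several keys at once (placeholder values shown):

```python
# Several API keys can coexist, separated by commas (placeholder values; replace with your own).
API_KEY = "sk-openai-key1,sk-openai-key2,fk-api2d-key3"
```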
|
||||
|
||||
<div align="center">
|
||||
|
||||
Function | Description
|
||||
--- | ---
|
||||
One-Click Polish | Supports one-click polishing and finding grammar errors in academic papers.
|
||||
One-Key Translation Between Chinese and English | One-click translation between Chinese and English.
|
||||
One-Key Code Interpretation | Can correctly display and interpret code.
|
||||
[Custom Shortcut Keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys.
|
||||
[Configure Proxy Server](https://www.bilibili.com/video/BV1rc411W7Dr) | Supports configuring proxy servers.
|
||||
Modular Design | Supports custom high-order function plugins and [function plugins], and plugins support [hot updates](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
|
||||
[Self-programming Analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function Plugin] [One-click read](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) to analyze the source code of this project
|
||||
[Program Analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function Plugin] One-click analysis of the project tree of other Python/C/C++/Java/Lua/... projects
|
||||
Read the Paper | [Function Plugin] One-click interpretation of the full text of latex paper and generation of abstracts
|
||||
Latex Full Text Translation, Proofreading | [Function Plugin] One-click translation or proofreading of latex papers.
|
||||
Batch Comment Generation | [Function Plugin] One-click batch generation of function comments
|
||||
Chat Analysis Report Generation | [Function Plugin] After running, an automatic summary report will be generated
|
||||
[Arxiv Assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function Plugin] Enter the arxiv article url to translate the abstract and download the PDF with one click
|
||||
[Full-text Translation Function of PDF Paper](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function Plugin] Extract the title & abstract of the PDF paper + translate the full text (multithreading)
|
||||
[Google Scholar Integration Assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function Plugin] Given any Google Scholar search page URL, let gpt help you choose interesting articles.
|
||||
Formula / Picture / Table Display | Can display both the tex form and the rendering form of formulas at the same time, support formula and code highlighting
|
||||
Multithreaded Function Plugin Support | Supports multi-threaded calling chatgpt, one-click processing of massive text or programs
|
||||
Start Dark Gradio [Theme](https://github.com/binary-husky/chatgpt_academic/issues/173) | Add ```/?__dark-theme=true``` at the end of the browser url to switch to dark theme
|
||||
[Multiple LLM Models](https://www.bilibili.com/video/BV1wT411p7yf) support, [API2D](https://api2d.com/) interface support | It must feel nice to be served by GPT3.5, GPT4, and [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B) at the same time!
|
||||
Huggingface [Online Experience](https://huggingface.co/spaces/qingxu98/gpt-academic) (no proxy required) | After logging in to huggingface, copy [this space](https://huggingface.co/spaces/qingxu98/gpt-academic)
|
||||
... | ...
|
||||
|
||||
One-click polishing | Supports one-click polishing and one-click searching for grammar errors in papers.
|
||||
One-click Chinese-English translation | One-click Chinese-English translation.
|
||||
One-click code interpretation | Displays, explains, generates, and adds comments to code.
|
||||
[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys.
|
||||
Modular design | Supports custom powerful [function plug-ins](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions), plug-ins support [hot update](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
|
||||
[Self-program profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plug-in] [One-click understanding](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of the source code of this project
|
||||
[Program profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plug-in] One-click profiling of other project trees in Python/C/C++/Java/Lua/...
|
||||
Reading papers, [translating](https://www.bilibili.com/video/BV1KT411x7Wn) papers | [Function Plug-in] One-click interpretation of latex/pdf full-text papers and generation of abstracts.
|
||||
Latex full-text [translation](https://www.bilibili.com/video/BV1nk4y1Y7Js/), [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Function plug-in] One-click translation or polishing of latex papers.
|
||||
Batch annotation generation | [Function plug-in] One-click batch generation of function annotations.
|
||||
Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plug-in] Have you seen the [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) in the five languages above?
|
||||
Chat analysis report generation | [Function plug-in] Automatically generate summary reports after running.
|
||||
[PDF full-text translation function](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plug-in] PDF paper extract title & summary + translate full text (multi-threaded)
|
||||
[Arxiv Assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plug-in] Enter the arxiv article url and you can translate abstracts and download PDFs with one click.
|
||||
[Google Scholar Integration Assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function plug-in] Given any Google Scholar search page URL, let GPT help you [write related works](https://www.bilibili.com/video/BV1GP411U7Az/)
|
||||
Internet information aggregation+GPT | [Function plug-in] One-click [let GPT get information from the Internet first](https://www.bilibili.com/video/BV1om4y127ck), then answer questions, and let the information never be outdated.
|
||||
Formula/image/table display | Can display formulas in both [tex form and render form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), support formulas and code highlighting.
|
||||
Multi-threaded function plug-in support | Supports multi-threaded calling of chatgpt, and can process [massive text](https://www.bilibili.com/video/BV1FT411H7c5/) or programs with one click.
|
||||
Start Dark Gradio [theme](https://github.com/binary-husky/gpt_academic/issues/173) | Add ```/?__theme=dark``` after the browser URL to switch to the dark theme.
|
||||
[Multiple LLM models](https://www.bilibili.com/video/BV1wT411p7yf) support, [API2D](https://api2d.com/) interface support | The feeling of being served by GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) at the same time must be great, right?
|
||||
More LLM model access, support [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Add Newbing interface (New Bing), introduce Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs) to support [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) and [Panguα](https://openi.org.cn/pangu/)
|
||||
More new feature displays (image generation, etc.)…… | See the end of this document for more...
|
||||
</div>
|
||||
|
||||
|
||||
- New interface (modify the LAYOUT option in `config.py` to switch between "left and right layout" and "up and down layout")
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
|
||||
</div>
|
||||
|
||||
|
||||
- All buttons are dynamically generated by reading `functional.py`; you can freely add custom functions, freeing you from the clipboard.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
|
||||
</div>
|
||||
|
||||
- Polishing / error correction
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
|
||||
</div>
|
||||
|
||||
- If the output contains formulas, they will be displayed in both `tex` and render form, making it easy to copy and read.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
|
||||
</div>
|
||||
|
||||
- Tired of reading the project code? ChatGPT can explain it all.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
|
||||
</div>
|
||||
|
||||
- Multiple large language models are mixed, such as ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
|
||||
</div>
|
||||
|
||||
The mixed call of multiple large language models is also available in the [huggingface beta version](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta) (the huggingface version does not support chatglm).
|
||||
|
||||
|
||||
---
|
||||
# Installation
|
||||
## Installation-Method 1: Run directly (Windows, Linux or MacOS)
|
||||
|
||||
1. Download the project
|
||||
```sh
|
||||
git clone https://github.com/binary-husky/gpt_academic.git
|
||||
cd gpt_academic
|
||||
```
|
||||
|
||||
2. Configure the API_KEY
|
||||
|
||||
Configure the API KEY in `config.py`, [special network environment settings](https://github.com/binary-husky/gpt_academic/issues/1).
|
||||
|
||||
(P.S. When the program is running, it will first check if there is a private configuration file named `config_private.py` and use the configurations in it to override the same configurations in `config.py`. Therefore, if you can understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py` and transfer (copy) the configurations in `config.py` to `config_private.py`. `config_private.py` is not controlled by git and can make your private information more secure. P.S. The project also supports configuring most options through `environment variables`. Please refer to the format of `docker-compose` file when writing. Reading priority: `environment variables` > `config_private.py` > `config.py`)
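As a rough illustration of that reading priority (not the project's actual loader; the helper function below is hypothetical), a single value could be resolved like this:

```python
# Illustrative sketch of the described priority: environment variables > config_private.py > config.py.
import importlib
import os

def read_config_value(key: str, default=None):
    # 1. Environment variables win over everything else.
    if key in os.environ:
        return os.environ[key]
    # 2. config_private.py (untracked by git) overrides config.py.
    for module_name in ("config_private", "config"):
        try:
            module = importlib.import_module(module_name)
        except ImportError:
            continue
        if hasattr(module, key):
            return getattr(module, key)
    return default

api_key = read_config_value("API_KEY")
```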
|
||||
|
||||
|
||||
In `config.py`, configure the overseas Proxy and OpenAI API KEY as follows:
|
||||
```
|
||||
1. If you are in China, you need to set up an overseas proxy to use the OpenAI API smoothly. Please read config.py carefully for setup details (1. Modify USE_PROXY to True; 2. Modify proxies according to the instructions).
|
||||
2. Configure the OpenAI API KEY. You need to register and obtain an API KEY on the OpenAI website. Once you get the API KEY, you can configure it in the config.py file.
|
||||
3. Issues related to proxy networks (network timeouts, proxy failures) are summarized at https://github.com/binary-husky/chatgpt_academic/issues/1
|
||||
```
|
||||
|
||||
|
||||
|
||||
3. Install the dependencies
|
||||
```sh
|
||||
# (Option I: if you are familiar with Python; Python 3.9 or above, the newer the better)
# Note: use the official pip source or the Aliyun pip source. Temporary switching method:
# python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
python -m pip install -r requirements.txt

# (Option II: if you are not familiar with Python) Use anaconda; the steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr):
conda create -n gptac_venv python=3.11  # create an anaconda environment
conda activate gptac_venv               # activate the anaconda environment
python -m pip install -r requirements.txt  # this step is the same as the pip installation
|
||||
```
|
||||
|
||||
<details><summary>If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, click to expand</summary>
|
||||
<p>
|
||||
|
||||
[Optional step] If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, you need to install more dependencies (prerequisites: familiar with Python + used Pytorch + computer configuration is strong enough):
|
||||
```sh
|
||||
# [Optional Step I] Support Tsinghua ChatGLM. Tsinghua ChatGLM remarks: if you encounter the "Call ChatGLM fail cannot load ChatGLM parameters" error, refer to this: 1: The default installation above is torch + cpu version, to use cuda, you need to uninstall torch and reinstall torch + cuda; 2: If the model cannot be loaded due to insufficient local configuration, you can modify the model accuracy in request_llm/bridge_chatglm.py, and change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code = True)
|
||||
python -m pip install -r request_llm/requirements_chatglm.txt
|
||||
|
||||
# [Optional Step II] Support Fudan MOSS
|
||||
python -m pip install -r request_llm/requirements_moss.txt
|
||||
git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # When executing this line of code, you must be in the root directory of the project
|
||||
|
||||
# [Optional Step III] Make sure the AVAIL_LLM_MODELS in the config.py configuration file includes the expected models. Currently supported models are as follows (the jittorllms series only supports the docker solution for the time being):
|
||||
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
|
||||
```
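For reference, the precision downgrade mentioned in Optional Step I amounts to swapping the model name in `request_llm/bridge_chatglm.py`. A minimal sketch of the relevant calls, assuming the standard ChatGLM loading pattern (the real file also handles device placement and half precision on GPU):

```python
# Sketch of the chatglm-6b -> chatglm-6b-int4 swap described above.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True).float()  # CPU; use .half().cuda() on GPU
```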
|
||||
|
||||
</p>
|
||||
</details>
|
||||
|
||||
|
||||
|
||||
4. Run it
|
||||
```sh
|
||||
python main.py
|
||||
```
|
||||
|
||||
5. Test function plugins
|
||||
```
|
||||
- Test Python project analysis
|
||||
In the input area, enter `./crazy_functions/test_project/python/dqn`, and then click "Analyze the entire Python project"
|
||||
- Test self-code interpretation
|
||||
Click "[Multithreading Demo] Interpretation of This Project Itself (Source Code Interpretation)"
|
||||
- Test the function plugin template (it asks GPT what happened in history on this day); you can use this template as a basis for implementing more complex functions
|
||||
Click "[Function Plugin Template Demo] Today in History"
|
||||
- There are more functions to choose from in the function plugin area drop-down menu.
|
||||
```
|
||||
|
||||
## Installation - Method 2: Using Docker
|
||||
|
||||
1. ChatGPT only (recommended for most people)
|
||||
``` sh
|
||||
git clone https://github.com/binary-husky/gpt_academic.git # Download project
|
||||
cd gpt_academic # Enter path
|
||||
nano config.py # Edit config.py with any text editor, configure "Proxy", "API_KEY" and "WEB_PORT" (e.g. 50923), etc.
|
||||
docker build -t gpt-academic . # Install
|
||||
|
||||
#(Last step - option 1) In a Linux environment, use `--net=host` for convenience and speed.
|
||||
docker run --rm -it --net=host gpt-academic
|
||||
|
||||
# Test function plug-in
|
||||
## Test function plugin template function (requires gpt to answer what happened today in history). You can use this function as a template to implement more complex functions.
|
||||
Click "[Function Plugin Template Demo] Today in History"
|
||||
## Test Abstract Writing for Latex Projects
|
||||
Enter ./crazy_functions/test_project/latex/attention in the input area, and then click "Read Tex Paper and Write Abstract"
|
||||
## Test Python Project Analysis
|
||||
Enter ./crazy_functions/test_project/python/dqn in the input area and click "Analyze the entire Python project."
|
||||
|
||||
More functions are available in the function plugin area drop-down menu.
|
||||
#(Last step - option 2) On macOS/windows environment, only -p option can be used to expose the container's port (e.g. 50923) to the port of the main machine.
|
||||
docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
|
||||
```
|
||||
|
||||
2. ChatGPT + ChatGLM + MOSS (Requires Docker Knowledge)
|
||||
|
||||
``` sh
|
||||
# Modify dockerfile
|
||||
cd docs && nano Dockerfile+ChatGLM
|
||||
# How to build (Dockerfile+ChatGLM is under the docs path, so run cd docs first)
|
||||
docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM .
|
||||
# How to run (1) Run directly:
|
||||
docker run --rm -it --net=host --gpus=all gpt-academic
|
||||
# How to run (2) Enter the container to make adjustments before running:
|
||||
docker run --rm -it --net=host --gpus=all gpt-academic bash
|
||||
# Modify docker-compose.yml, delete Plan 1 and Plan 3, and keep Plan 2. Modify the configuration of Plan 2 in docker-compose.yml, refer to the comments in it for configuration.
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
3. ChatGPT + LLAMA + Pangu + RWKV (Requires Docker Knowledge)
|
||||
|
||||
``` sh
|
||||
# Modify docker-compose.yml, delete Plan 1 and Plan 2, and keep Plan 3. Modify the configuration of Plan 3 in docker-compose.yml, refer to the comments in it for configuration.
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
## Installation - Method 3: Other Deployment Options
|
||||
|
||||
1. How to Use Reverse Proxy URL/Microsoft Cloud Azure API
|
||||
Configure `API_URL_REDIRECT` according to the instructions in `config.py` (see the sketch after this list).
|
||||
|
||||
2. Deploy to a Remote Server (Requires Knowledge and Experience with Cloud Servers)
|
||||
Please visit [Deployment Wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
|
||||
|
||||
3. Using WSL2 (Windows Subsystem for Linux)
|
||||
Please visit [Deployment Wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
|
||||
|
||||
4. How to Run Under a Subdomain (e.g. `http://localhost/subpath`)
|
||||
Please visit [FastAPI Running Instructions](docs/WithFastapi.md)
|
||||
|
||||
5. Using docker-compose to Run
|
||||
Read the docker-compose.yml and follow the prompts.
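Following up on item 1 above, here is a hedged example of what an `API_URL_REDIRECT` entry in `config.py` typically looks like (the proxy URL is a placeholder; check the comments in `config.py` for the exact keys your deployment needs):

```python
# Illustrative API_URL_REDIRECT entry: forward OpenAI-style chat requests to your own
# reverse proxy or Azure-compatible endpoint (placeholder URL below).
API_URL_REDIRECT = {
    "https://api.openai.com/v1/chat/completions": "https://your-reverse-proxy.example.com/v1/chat/completions"
}
```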
|
||||
|
||||
---
|
||||
# Advanced Usage
|
||||
## Custom New Shortcut Buttons / Custom Function Plugins
|
||||
|
||||
1. Custom New Shortcut Buttons (Academic Hotkey)
|
||||
Open `core_functional.py` with any text editor, add an entry as follows and restart the program. (If the button has been successfully added and is visible, the prefix and suffix can be hot-modified without having to restart the program.)
|
||||
For example,
|
||||
```
|
||||
"Super English to Chinese translation": {
|
||||
# Prefix, which will be added before your input. For example, to describe your requirements, such as translation, code interpretation, polishing, etc.
|
||||
"Prefix": "Please translate the following content into Chinese and use a markdown table to interpret the proprietary terms in the text one by one:\n\n",
|
||||
"Super English-to-Chinese": {
|
||||
# Prefix, which will be added before your input. For example, used to describe your requests, such as translation, code explanation, polishing, etc.
|
||||
"Prefix": "Please translate the following content into Chinese and then use a markdown table to explain the proprietary terms that appear in the text:\n\n",
|
||||
|
||||
# Suffix, which is added after your input. For example, with the prefix, your input content can be surrounded by quotes.
|
||||
"Suffix": "",
|
||||
},
|
||||
```
|
||||
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
|
||||
</div>
|
||||
|
||||
2. Custom Function Plugins
|
||||
|
||||
Write powerful function plugins to perform any task you can think of, even those you cannot think of.
|
||||
The difficulty of plugin writing and debugging in this project is very low. As long as you have a certain knowledge of Python, you can implement your own plug-in functions based on the template we provide.
|
||||
For details, please refer to the [Function Plugin Guide](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
|
||||
|
||||
---
|
||||
|
||||
|
||||
# Latest Update
|
||||
## New Feature Dynamics
|
||||
1. Conversation saving function. Call `Save current conversation` in the function plugin area to save the current conversation as a readable and recoverable HTML file. In addition, call `Load conversation history archive` in the function plugin area (dropdown menu) to restore previous sessions. Tip: Clicking `Load conversation history archive` without specifying a file will display the cached history of HTML archives, and clicking `Delete all local conversation history` will delete all HTML archive caches.
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/228737599-bf0a9d9c-1808-4f43-ae15-dfcc7af0f295.png" width="800" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/235222390-24a9acc0-680f-49f5-bc81-2f3161f1e049.png" width="500" >
|
||||
</div>
|
||||
|
||||
### If a program can understand and analyze itself:
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="800" >
|
||||
</div>
|
||||
2. Report generation. Most plugins will generate work reports after execution.
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226936618-9b487e4b-ab5b-4b6e-84c6-16942102e917.png" width="800" >
|
||||
</div>
|
||||
|
||||
### Analysis of any Python/Cpp project:
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="800" >
|
||||
</div>
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="800" >
|
||||
</div>
|
||||
|
||||
### One-click reading comprehension and summary generation of Latex papers
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504406-86ab97cd-f208-41c3-8e4a-7000e51cf980.png" width="800" >
|
||||
</div>
|
||||
|
||||
### Automatic report generation
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
|
||||
</div>
|
||||
|
||||
3. Modular function design with simple interfaces that support powerful functions.
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
|
||||
</div>
|
||||
|
||||
4. This is an open-source project that can "self-translate".
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/229720562-fe6c3508-6142-4635-a83d-21eb3669baee.png" height="400" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="500" >
|
||||
</div>
|
||||
|
||||
5. Translating other open-source projects is a piece of cake.
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="500" >
|
||||
</div>
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="500" >
|
||||
</div>
|
||||
|
||||
6. A small feature decorated with [live2d](https://github.com/fghrsh/live2d_demo) (disabled by default, need to modify `config.py`).
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/236432361-67739153-73e8-43fe-8111-b61296edabd9.png" width="500" >
|
||||
</div>
|
||||
|
||||
7. Added MOSS large language model support.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/236639178-92836f37-13af-4fdd-984d-b4450fe30336.png" width="500" >
|
||||
</div>
|
||||
|
||||
8. OpenAI image generation.
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/bc7ab234-ad90-48a0-8d62-f703d9e74665" width="500" >
|
||||
</div>
|
||||
|
||||
9. OpenAI audio parsing and summarization.
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/709ccf95-3aee-498a-934a-e1c22d3d5d5b" width="500" >
|
||||
</div>
|
||||
|
||||
10. Full-text proofreading and error correction of LaTeX.
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/651ccd98-02c9-4464-91e1-77a6b7d1b033" width="500" >
|
||||
</div>
|
||||
|
||||
|
||||
## Versions:
|
||||
- version 3.5(Todo): Use natural language to call all function plugins of this project (high priority).
|
||||
- version 3.4(Todo): Improve multi-threading support for chatglm local large models.
|
||||
- version 3.3: +Internet information integration function.
|
||||
- version 3.2: Function plugin supports more parameter interfaces (save conversation function, interpretation of any language code + simultaneous inquiry of any LLM combination).
|
||||
- version 3.1: Support simultaneous inquiry of multiple GPT models! Support api2d, and support load balancing of multiple apikeys.
|
||||
- version 3.0: Support chatglm and other small LLM models.
|
||||
- version 2.6: Refactored plugin structure, improved interactivity, and added more plugins.
|
||||
- version 2.5: Self-updating, solving the problem of text overflow and token overflow when summarizing large engineering source codes.
|
||||
- version 2.4: (1) Added PDF full-text translation function; (2) Added the function of switching the position of the input area; (3) Added vertical layout option; (4) Optimized multi-threading function plugins.
|
||||
- version 2.3: Enhanced multi-threading interactivity.
|
||||
- version 2.2: Function plugin supports hot reloading.
|
||||
- version 2.1: Collapsible layout.
|
||||
- version 2.0: Introduction of modular function plugins.
|
||||
- version 1.0: Basic functions.
|
||||
|
||||
gpt_academic Developer QQ Group-2: 610599535
|
||||
|
||||
- Known Issues
|
||||
- Some browser translation plugins interfere with the front-end operation of this software.
|
||||
- A gradio version that is too high or too low can lead to various exceptions.
|
||||
|
||||
## Reference and Learning
|
||||
|
||||
```
|
||||
The code design of this project has referenced many other excellent projects, including:
|
||||
|
||||
# Reference project 1: Borrowed many tips from ChuanhuChatGPT
|
||||
# Project 1: THU ChatGLM-6B:
|
||||
https://github.com/THUDM/ChatGLM-6B
|
||||
|
||||
# Project 2: THU JittorLLMs:
|
||||
https://github.com/Jittor/JittorLLMs
|
||||
|
||||
# Project 3: Edge-GPT:
|
||||
https://github.com/acheong08/EdgeGPT
|
||||
|
||||
# Project 4: ChuanhuChatGPT:
|
||||
https://github.com/GaiZhenbiao/ChuanhuChatGPT
|
||||
|
||||
# Project 5: ChatPaper:
|
||||
https://github.com/kaixindelele/ChatPaper
|
||||
|
||||
# More:
|
||||
https://github.com/gradio-app/gradio
|
||||
https://github.com/fghrsh/live2d_demo
|
||||
```
|
||||
|
||||
>
|
||||
> Ce fichier README est généré automatiquement par le plugin de traduction markdown de ce projet et n'est peut - être pas correct à 100%.
|
||||
>
|
||||
> During installation, please strictly select the versions **specified** in requirements.txt.
|
||||
>
|
||||
> `pip install -r requirements.txt`
|
||||
>
|
||||
|
||||
# <img src="logo.png" width="40" > ChatGPT Optimisation Académique
|
||||
# <img src="logo.png" width="40" > Optimisation académique GPT (GPT Academic)
|
||||
|
||||
**Si vous aimez ce projet, donnez-lui une étoile; si vous avez inventé des raccourcis académiques plus utiles ou des plugins fonctionnels, n'hésitez pas à ouvrir une demande ou une demande de traction. Nous avons également un fichier README en [anglais|](docs/README_EN.md)[japonais|](docs/README_JP.md)[russe|](docs/README_RS.md)[français](docs/README_FR.md) traduit par ce projet lui-même.**
|
||||
**Si vous aimez ce projet, veuillez lui donner une étoile. Si vous avez trouvé des raccourcis académiques ou des plugins fonctionnels plus utiles, n'hésitez pas à ouvrir une demande ou une pull request.
|
||||
Pour traduire ce projet dans une langue arbitraire avec GPT, lisez et exécutez [`multi_language.py`](multi_language.py) (expérimental).
|
||||
|
||||
> **Note**
|
||||
>
|
||||
> 1. Veuillez noter que seuls les plugins de fonction signalés en **rouge** sont capables de lire les fichiers, certains plugins se trouvent dans le **menu déroulant** de la section plugin. Nous sommes également les bienvenus avec la plus haute priorité pour traiter et accepter tout nouveau PR de plugin!
|
||||
> 1. Veuillez noter que seuls les plugins de fonctions (boutons) **en rouge** prennent en charge la lecture de fichiers. Certains plugins se trouvent dans le **menu déroulant** de la zone de plugins. De plus, nous accueillons et traitons les nouvelles pull requests pour les plugins avec **la plus haute priorité**!
|
||||
>
|
||||
> 2. Chaque fichier dans ce projet est expliqué en détail dans l'auto-analyse [self_analysis.md](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). Avec l'itération des versions, vous pouvez également cliquer sur les plugins fonctionnels pertinents pour appeler GPT et générer un rapport d'auto-analyse projet mis à jour. Les questions fréquemment posées sont résumées dans le [wiki](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98).
|
||||
> 2. Les fonctions de chaque fichier de ce projet sont expliquées en détail dans l'auto-analyse [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). Avec l'itération des versions, vous pouvez également cliquer sur les plugins de fonctions pertinents et appeler GPT pour régénérer le rapport d'auto-analyse du projet à tout moment. Les FAQ sont résumées dans [le wiki](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Méthode d'installation](#installation).
|
||||
>
|
||||
> 3. Ce projet est compatible avec et encourage l'utilisation de grands modèles de langage nationaux tels que chatglm, RWKV, Pangu, etc. La coexistence de plusieurs clés API est prise en charge et peut être remplie dans le fichier de configuration, tel que `API_KEY="openai-key1,openai-key2,api2d-key3"`. Lorsque vous souhaitez remplacer temporairement `API_KEY`, saisissez temporairement `API_KEY` dans la zone de saisie, puis appuyez sur Entrée pour soumettre et activer.
|
||||
|
||||
<div align="center">
|
||||
|
||||
Fonctionnalité | Description
|
||||
--- | ---
|
||||
Polissage en un clic | Prend en charge la correction en un clic et la recherche d'erreurs de syntaxe dans les documents de recherche.
|
||||
Traduction Chinois-Anglais en un clic | Une touche pour traduire la partie chinoise en anglais ou celle anglaise en chinois.
|
||||
Explication de code en un clic | Affiche et explique correctement le code.
|
||||
[Raccourcis clavier personnalisables](https://www.bilibili.com/video/BV14s4y1E7jN) | Prend en charge les raccourcis clavier personnalisables.
|
||||
[Configuration du serveur proxy](https://www.bilibili.com/video/BV1rc411W7Dr) | Prend en charge la configuration du serveur proxy.
|
||||
Conception modulaire | Prend en charge la personnalisation des plugins de fonctions et des [plugins] de fonctions hiérarchiques personnalisés, et les plugins prennent en charge [la mise à jour à chaud](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
|
||||
[Auto-analyse du programme](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugins] [Lire en un clic](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) le code source de ce projet.
|
||||
[Analyse de programme](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugins] En un clic, les projets Python/C/C++/Java/Lua/... peuvent être analysés.
|
||||
Lire le document de recherche | [Plugins] Lisez le résumé de l'article en latex et générer un résumé.
|
||||
Traduction et polissage de l'article complet en LaTeX | [Plugins] Une touche pour traduire ou corriger en LaTeX
|
||||
Génération Commentaire de fonction en vrac | [Plugins] Lisez en un clic les fonctions et générez des commentaires de fonction.
|
||||
Rapport d'analyse automatique des chats générés | [Plugins] Génère un rapport de synthèse après l'exécution.
|
||||
[Assistant arxiv](https://www.bilibili.com/video/BV1LM4y1279X) | [Plugins] Entrez l'url de l'article arxiv pour traduire le résumé + télécharger le PDF en un clic
|
||||
[Traduction complète des articles PDF](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugins] Extraire le titre et le résumé de l'article PDF + Traduire le texte entier (multithread)
|
||||
[Aide à la recherche Google Academ](https://www.bilibili.com/video/BV19L411U7ia) | [Plugins] Donnez à GPT l'URL de n'importe quelle page de recherche Google Academ pour vous aider à sélectionner des articles intéressants
|
||||
Affichage de formules/images/tableaux | Afficher la forme traduite et rendue d'une formule en même temps, plusieurs formules et surlignage du code prend en charge
|
||||
Prise en charge des plugins multithread | Prise en charge de l'appel multithread de chatgpt, traitement en masse de texte ou de programmes en un clic
|
||||
Activer le thème Gradio sombre [theme](https://github.com/binary-husky/chatgpt_academic/issues/173) au démarrage | Ajoutez ```/?__dark-theme=true``` à l'URL du navigateur pour basculer vers le thème sombre
|
||||
[Prise en charge de plusieurs modèles LLM](https://www.bilibili.com/video/BV1wT411p7yf), [prise en charge de l'interface API2D](https://api2d.com/) | Comment cela serait-il de se faire servir par GPT3.5, GPT4 et la [ChatGLM de Tsinghua](https://github.com/THUDM/ChatGLM-6B) en même temps?
|
||||
Expérience en ligne d'huggingface sans science | Après vous être connecté à huggingface, copiez [cet espace](https://huggingface.co/spaces/qingxu98/gpt-academic)
|
||||
... | ...
|
||||
Révision en un clic | prend en charge la révision en un clic et la recherche d'erreurs de syntaxe dans les articles
|
||||
Traduction chinois-anglais en un clic | Traduction chinois-anglais en un clic
|
||||
Explication de code en un clic | Affichage, explication, génération et ajout de commentaires de code
|
||||
[Raccourcis personnalisés](https://www.bilibili.com/video/BV14s4y1E7jN) | prend en charge les raccourcis personnalisés
|
||||
Conception modulaire | prend en charge de puissants plugins de fonction personnalisée, les plugins prennent en charge la [mise à jour à chaud](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
|
||||
[Autoscanner](https://www.bilibili.com/video/BV1cj411A7VW) | [Plug-in de fonction] [Compréhension instantanée](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) du code source de ce projet
|
||||
[Analyse de programme](https://www.bilibili.com/video/BV1cj411A7VW) | [Plug-in de fonction] Analyse en un clic de la structure d'autres projets Python / C / C ++ / Java / Lua / ...
|
||||
Lecture d'articles, [traduction](https://www.bilibili.com/video/BV1KT411x7Wn) d'articles | [Plug-in de fonction] Compréhension instantanée de l'article latex / pdf complet et génération de résumés
|
||||
[Traduction](https://www.bilibili.com/video/BV1nk4y1Y7Js/) et [révision](https://www.bilibili.com/video/BV1FT411H7c5/) complets en latex | [Plug-in de fonction] traduction ou révision en un clic d'articles en latex
|
||||
Génération de commentaires en masse | [Plug-in de fonction] Génération en un clic de commentaires de fonction en masse
|
||||
Traduction [chinois-anglais](https://www.bilibili.com/video/BV1yo4y157jV/) en Markdown | [Plug-in de fonction] avez-vous vu la [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) pour les 5 langues ci-dessus?
|
||||
Génération de rapports d'analyse de chat | [Plug-in de fonction] Génère automatiquement un rapport de résumé après l'exécution
|
||||
[Traduction intégrale en pdf](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plug-in de fonction] Extraction de titre et de résumé de l'article pdf + traduction intégrale (multi-thread)
|
||||
[Aide à arxiv](https://www.bilibili.com/video/BV1LM4y1279X) | [Plug-in de fonction] Entrer l'url de l'article arxiv pour traduire et télécharger le résumé en un clic
|
||||
[Aide à la recherche Google Scholar](https://www.bilibili.com/video/BV19L411U7ia) | [Plug-in de fonction] Donnez l'URL de la page de recherche Google Scholar, laissez GPT vous aider à [écrire des ouvrages connexes](https://www.bilibili.com/video/BV1GP411U7Az/)
|
||||
Aggrégation d'informations en ligne et GPT | [Plug-in de fonction] Permet à GPT de [récupérer des informations en ligne](https://www.bilibili.com/video/BV1om4y127ck), puis de répondre aux questions, afin que les informations ne soient jamais obsolètes
|
||||
Affichage d'équations / images / tableaux | Fournit un affichage simultané de [la forme tex et de la forme rendue](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), prend en charge les formules mathématiques et la coloration syntaxique du code
|
||||
Prise en charge des plugins à plusieurs threads | prend en charge l'appel multithread de chatgpt, un clic pour traiter [un grand nombre d'articles](https://www.bilibili.com/video/BV1FT411H7c5/) ou de programmes
|
||||
Thème gradio sombre en option de démarrage | Ajoutez```/?__theme=dark``` à la fin de l'URL du navigateur pour basculer vers le thème sombre
|
||||
[Prise en charge de plusieurs modèles LLM](https://www.bilibili.com/video/BV1wT411p7yf), [API2D](https://api2d.com/) | Sera probablement très agréable d'être servi simultanément par GPT3.5, GPT4, [ChatGLM de Tsinghua](https://github.com/THUDM/ChatGLM-6B), [MOSS de Fudan](https://github.com/OpenLMLab/MOSS)
|
||||
Plus de modèles LLM, déploiement de [huggingface](https://huggingface.co/spaces/qingxu98/gpt-academic) | Ajout prise en charge de l'interface Newbing (nouvelle bing), introduction du support de [Jittorllms de Tsinghua](https://github.com/Jittor/JittorLLMs), [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) et [Panguα](https://openi.org.cn/pangu/)
|
||||
Plus de nouvelles fonctionnalités (génération d'images, etc.) ... | Voir la fin de ce document pour plus de détails ...
|
||||
|
||||
</div>
|
||||
|
||||
|
||||
- Nouvelle interface (modifiable en modifiant l'option de mise en page dans config.py pour basculer entre les mises en page gauche-droite et haut-bas)
|
||||
- Nouvelle interface (modifier l'option LAYOUT de `config.py` pour passer d'une disposition ``gauche-droite`` à une disposition ``haut-bas``)
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
|
||||
</div>
|
||||
|
||||
|
||||
- Tous les boutons sont générés dynamiquement en lisant functional.py, les utilisateurs peuvent ajouter librement des fonctions personnalisées pour libérer le presse-papiers.
|
||||
</div>- Tous les boutons sont générés dynamiquement en lisant functional.py et peuvent être facilement personnalisés pour ajouter des fonctionnalités personnalisées, ce qui facilite l'utilisation du presse-papiers.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
|
||||
</div>
|
||||
|
||||
- Correction/amélioration
|
||||
- Correction d'erreurs/lissage du texte.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
|
||||
</div>
|
||||
|
||||
- Si la sortie contient des formules, elles seront affichées simultanément sous forme de de texte brut et de forme rendue pour faciliter la copie et la lecture.
|
||||
- Si la sortie contient des équations, elles sont affichées à la fois sous forme de tex et sous forme rendue pour faciliter la lecture et la copie.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
|
||||
</div>
|
||||
|
||||
- Pas envie de lire le code du projet ? Faites votre propre démo avec ChatGPT.
|
||||
- Pas envie de lire les codes de ce projet? Tout le projet est directement exposé par ChatGPT.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
|
||||
</div>
|
||||
|
||||
- Utilisation combinée de plusieurs modèles de langage sophistiqués (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
|
||||
- Appel à une variété de modèles de langage de grande envergure (ChatGLM + OpenAI-GPT3.5 + [API2D] (https://api2d.com/)-GPT4).
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
|
||||
</div>
|
||||
|
||||
Utilisation combinée de plusieurs modèles de langage sophistiqués en version de test [huggingface](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta) (la version huggingface ne prend pas en charge Chatglm).
|
||||
|
||||
|
||||
---
|
||||
# Installation
|
||||
## Installation-Method 1: running directly (Windows, Linux or MacOS)
|
||||
|
||||
## Installation - Méthode 1 : Exécution directe (Windows, Linux or MacOS)
|
||||
|
||||
1. Téléchargez le projet
|
||||
1. Télécharger le projet
|
||||
```sh
|
||||
git clone https://github.com/binary-husky/gpt_academic.git
|
||||
cd gpt_academic
|
||||
```
|
||||
|
||||
2. Configuration de l'API_KEY et des paramètres de proxy
|
||||
2. Configuration de la clé API
|
||||
|
||||
Dans `config.py`, configurez les paramètres de proxy et de clé d'API OpenAI, comme indiqué ci-dessous
|
||||
```
|
||||
1. Si vous êtes en Chine, vous devez configurer un proxy étranger pour utiliser l'API OpenAI en toute transparence. Pour ce faire, veuillez lire attentivement le fichier config.py (1. Modifiez l'option USE_PROXY ; 2. Modifiez les paramètres de proxies comme indiqué dans les instructions).
|
||||
2. Configurez votre clé API OpenAI. Vous devez vous inscrire sur le site web d'OpenAI pour obtenir une clé API. Une fois que vous avez votre clé API, vous pouvez la configurer dans le fichier config.py.
|
||||
3. Tous les problèmes liés aux réseaux de proxy (temps d'attente, non-fonctionnement des proxies) sont résumés dans https://github.com/binary-husky/chatgpt_academic/issues/1.
|
||||
```
|
||||
(Remarque : le programme vérifie d'abord s'il existe un fichier de configuration privé nommé `config_private.py`, et utilise les configurations de celui-ci à la place de celles du fichier `config.py`. Par conséquent, si vous comprenez notre logique de lecture de configuration, nous vous recommandons fortement de créer un nouveau fichier de configuration nommé `config_private.py` à côté de `config.py` et de transférer (copier) les configurations de celui-ci dans `config_private.py`. `config_private.py` n'est pas contrôlé par git et rend vos informations personnelles plus sûres.)
|
||||
Dans `config.py`, configurez la clé API et les autres paramètres, et consultez la [configuration pour les environnements réseau particuliers](https://github.com/binary-husky/gpt_academic/issues/1).
|
||||
|
||||
3. Installation des dépendances
|
||||
(P.S. Lorsque le programme est exécuté, il vérifie en premier s'il existe un fichier de configuration privé nommé `config_private.py` et remplace les paramètres portant le même nom dans `config.py` par les paramètres correspondants dans `config_private.py`. Par conséquent, si vous comprenez la logique de lecture de nos configurations, nous vous recommandons vivement de créer un nouveau fichier de configuration nommé `config_private.py` à côté de `config.py` et de transférer (copier) les configurations de `config.py`. `config_private.py` n'est pas contrôlé par Git et peut garantir la sécurité de vos informations privées. P.S. Le projet prend également en charge la configuration de la plupart des options via "variables d'environnement", le format d'écriture des variables d'environnement est référencé dans le fichier `docker-compose`. Priorité de lecture: "variables d'environnement" > `config_private.py` > `config.py`)
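
Par exemple, un `config_private.py` minimal pourrait ressembler à l'esquisse ci-dessous (les noms d'options `API_KEY`, `USE_PROXY` et `proxies` sont ceux de `config.py` ; toutes les valeurs ne sont que des exemples fictifs à adapter) :

```python
# config_private.py : esquisse minimale et hypothétique ; ce fichier n'est pas suivi par git.
# Seules les options que vous redéfinissez ici remplacent celles du même nom dans config.py.

API_KEY = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"   # clé fictive : mettez la vôtre

USE_PROXY = True            # passez à False si l'API est accessible directement
if USE_PROXY:
    proxies = {
        # protocole://adresse:port de votre proxy local (valeurs d'exemple)
        "http":  "socks5h://localhost:11284",
        "https": "socks5h://localhost:11284",
    }
else:
    proxies = None
```

Rappel de la priorité de lecture décrite ci-dessus : "variables d'environnement" > `config_private.py` > `config.py`.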
|
||||
|
||||
|
||||
3. Installer les dépendances
|
||||
```sh
|
||||
# (Option 1) Recommandé
|
||||
# (Option I: python users installation) (Python version 3.9 or higher, the newer the better). Note: use official pip source or Ali pip source. To temporarily change the source: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
|
||||
python -m pip install -r requirements.txt
|
||||
|
||||
# (Option 2) Si vous utilisez anaconda, les étapes sont similaires :
|
||||
# (Option 2.1) conda create -n gptac_venv python=3.11
|
||||
# (Option 2.2) conda activate gptac_venv
|
||||
# (Option 2.3) python -m pip install -r requirements.txt
|
||||
|
||||
# note : Utilisez la source pip officielle ou la source pip Alibaba. D'autres sources (comme celles des universités) pourraient poser problème. Pour utiliser temporairement une autre source, utilisez :
|
||||
# python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
|
||||
# (Option II: non-python users installation) Use Anaconda, the steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr):
|
||||
conda create -n gptac_venv python=3.11 # Create anaconda env
|
||||
conda activate gptac_venv # Activate anaconda env
|
||||
python -m pip install -r requirements.txt # Same step as pip installation
|
||||
```
|
||||
|
||||
Si vous devez prendre en charge ChatGLM de Tsinghua, vous devrez installer des dépendances supplémentaires (si vous n'êtes pas familier avec Python ou si votre ordinateur n'est pas assez performant, nous vous recommandons de ne pas essayer) :
|
||||
<details><summary>Cliquez ici pour afficher le texte si vous souhaitez prendre en charge THU ChatGLM/FDU MOSS en tant que backend.</summary>
|
||||
<p>
|
||||
|
||||
【Optional】 Si vous souhaitez prendre en charge THU ChatGLM/FDU MOSS en tant que backend, des dépendances supplémentaires doivent être installées (prérequis: compétent en Python + utilisez Pytorch + configuration suffisante de l'ordinateur):
|
||||
```sh
|
||||
# 【Optional Step I】 Support THU ChatGLM. Remarque sur THU ChatGLM: Si vous rencontrez l'erreur "Appel à ChatGLM échoué, les paramètres ChatGLM ne peuvent pas être chargés normalement", reportez-vous à ce qui suit: 1: La version par défaut installée est torch+cpu, si vous souhaitez utiliser cuda, vous devez désinstaller torch et réinstaller torch+cuda; 2: Si le modèle ne peut pas être chargé en raison d'une configuration insuffisante de l'ordinateur local, vous pouvez modifier la précision du modèle dans request_llm/bridge_chatglm.py, modifier AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) par AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
|
||||
python -m pip install -r request_llm/requirements_chatglm.txt
|
||||
|
||||
# 【Optional Step II】 Support FDU MOSS
|
||||
python -m pip install -r request_llm/requirements_moss.txt
|
||||
git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # Note: When running this line of code, you must be in the project root path.
|
||||
|
||||
# 【Optional Step III】Make sure the AVAIL_LLM_MODELS in the config.py configuration file contains the desired model. Currently, all models supported are as follows (the jittorllms series currently only supports the docker scheme):
|
||||
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
|
||||
```
|
||||
|
||||
</p>
|
||||
</details>
|
||||
|
||||
|
||||
|
||||
4. Exécution
|
||||
```sh
|
||||
python main.py
|
||||
```

5. Plugin de fonction de test
|
||||
```
|
||||
- Fonction de modèle de plugin de test (requiert que GPT réponde à ce qui s'est passé dans l'histoire aujourd'hui), vous pouvez utiliser cette fonction comme modèle pour mettre en œuvre des fonctionnalités plus complexes.
|
||||
Cliquez sur "[Démo de modèle de plugin de fonction] Aujourd'hui dans l'histoire"
|
||||
```
|
||||
|
||||
5. Tester les plugins de fonctions
|
||||
```
|
||||
- Test Python Project Analysis
|
||||
Dans la zone de saisie, entrez `./crazy_functions/test_project/python/dqn`, puis cliquez sur "Parse Entire Python Project"
|
||||
- Test d'auto-lecture du code
|
||||
Cliquez sur "[Démo multi-thread] Parser ce projet lui-même (auto-traduction de la source)"
|
||||
- Test du modèle de fonctionnalité expérimentale (exige une réponse de l'IA à ce qui est arrivé aujourd'hui dans l'histoire). Vous pouvez utiliser cette fonctionnalité comme modèle pour des fonctions plus complexes.
|
||||
Cliquez sur "[Démo modèle de plugin de fonction] Histoire du Jour"
|
||||
- Le menu déroulant de la zone de plugin de fonctionnalité contient plus de fonctionnalités à sélectionner.
|
||||
```
|
||||
## Installation - Méthode 2: Utilisation de Docker
|
||||
|
||||
## Installation - Méthode 2 : Utilisation de docker (Linux)
|
||||
1. ChatGPT uniquement (recommandé pour la plupart des gens)
|
||||
|
||||
|
||||
|
||||
|
||||
1. ChatGPT seul (recommandé pour la plupart des gens)
|
||||
``` sh
|
||||
# Télécharger le projet
|
||||
git clone https://github.com/binary-husky/chatgpt_academic.git
|
||||
cd chatgpt_academic
|
||||
# Configurer le proxy outre-mer et la clé API OpenAI
|
||||
# Modifier le fichier config.py avec n'importe quel éditeur de texte
|
||||
# Installer
|
||||
docker build -t gpt-academic .
|
||||
# Exécuter
|
||||
git clone https://github.com/binary-husky/gpt_academic.git # Télécharger le projet
|
||||
cd gpt_academic # Accéder au chemin
|
||||
nano config.py # Editez config.py avec n'importe quel éditeur de texte en configurant "Proxy", "API_KEY" et "WEB_PORT" (p. ex. 50923)
|
||||
docker build -t gpt-academic . # Installer
|
||||
|
||||
# (Dernière étape - choix1) Dans un environnement Linux, l'utilisation de `--net=host` est plus facile et rapide
|
||||
docker run --rm -it --net=host gpt-academic
|
||||
|
||||
# Tester les modules de fonction
|
||||
## Tester la fonction modèle des modules (requiert la réponse de GPT à "qu'est-ce qui s'est passé dans l'histoire aujourd'hui ?"), vous pouvez utiliser cette fonction en tant que modèle pour implémenter des fonctions plus complexes.
|
||||
Cliquez sur "[Exemple de modèle de module] Histoire d'aujourd'hui"
|
||||
## Tester le résumé écrit pour le projet LaTeX
|
||||
# Dans la zone de saisie, tapez ./crazy_functions/test_project/latex/attention, puis cliquez sur "Lire le résumé de l'article de recherche LaTeX"
|
||||
## Tester l'analyse du projet Python
|
||||
# Dans la zone de saisie, tapez ./crazy_functions/test_project/python/dqn, puis cliquez sur "Analyser l'ensemble du projet Python"
|
||||
|
||||
# D'autres fonctions sont disponibles dans la liste déroulante des modules de fonction.
|
||||
# (Dernière étape - choix 2) Dans un environnement macOS/Windows, seule l'option -p permet d'exposer le port du conteneur (p. ex. 50923) vers un port de l'hôte.
|
||||
docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
|
||||
```
|
||||
|
||||
2. ChatGPT+ChatGLM (nécessite une grande connaissance de docker et une configuration informatique suffisamment puissante)
|
||||
2. ChatGPT + ChatGLM + MOSS (il faut connaître Docker)
|
||||
|
||||
``` sh
|
||||
# Modifier le dockerfile
|
||||
cd docs && nano Dockerfile+ChatGLM
|
||||
# Comment construire (le fichier Dockerfile+ChatGLM se trouve dans le répertoire docs, exécutez d'abord cd docs)
|
||||
docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM .
|
||||
# Comment exécuter (1) Exécution directe :
|
||||
docker run --rm -it --net=host --gpus=all gpt-academic
|
||||
# Comment exécuter (2) Pour effectuer quelques ajustements dans le conteneur avant de le lancer :
|
||||
docker run --rm -it --net=host --gpus=all gpt-academic bash
|
||||
# Modifiez docker-compose.yml, supprimez la solution 1 et la solution 3, conservez la solution 2. Modifiez la configuration de la solution 2 dans docker-compose.yml en suivant les commentaires.
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
## Installation - Méthode 3 : Autres méthodes de déploiement
|
||||
|
||||
1. Déploiement sur un cloud serveur distant
|
||||
Veuillez consulter le [wiki de déploiement-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
|
||||
|
||||
2. Utilisation de WSL2 (Windows Subsystem for Linux)
|
||||
Veuillez consulter le [wiki de déploiement-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
|
||||
|
||||
|
||||
## Installation - Configuration du proxy
|
||||
### Méthode 1 : Méthode conventionnelle
|
||||
[Configurer le proxy](https://github.com/binary-husky/chatgpt_academic/issues/1)
|
||||
|
||||
### Méthode 2 : Tutoriel pour débutant pur
|
||||
[Tutoriel pour débutant pur](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)
|
||||
|
||||
|
||||
---
|
||||
|
||||
## Personnalisation des nouveaux boutons pratiques (personnalisation des raccourcis académiques)
|
||||
Ouvrez le fichier `core_functional.py` avec n'importe quel éditeur de texte, ajoutez les éléments suivants, puis redémarrez le programme. (Si le bouton a déjà été ajouté avec succès et est visible, le préfixe et le suffixe pris en charge peuvent être modifiés à chaud sans avoir besoin de redémarrer le programme.)
|
||||
Par exemple:
|
||||
3. ChatGPT + LLAMA + PanGu + RWKV (il faut connaître Docker)
|
||||
``` sh
|
||||
# Modifiez docker-compose.yml, supprimez la solution 1 et la solution 2, conservez la solution 3. Modifiez la configuration de la solution 3 dans docker-compose.yml en suivant les commentaires.
|
||||
docker-compose up
|
||||
```
|
||||
"Traduction Français-Chinois": {
|
||||
# Préfixe, qui sera ajouté avant votre saisie. Par exemple, pour décrire votre demande, telle que la traduction, le débogage de code, l'amélioration, etc.
|
||||
"Prefix": "Veuillez traduire le contenu ci-dessous en chinois, puis expliquer chaque terme propre mentionné dans un tableau Markdown :\n\n",
|
||||
|
||||
# Suffixe, qui sera ajouté après votre saisie. Par exemple, en combinaison avec un préfixe, vous pouvez mettre le contenu de votre saisie entre guillemets.
|
||||
|
||||
## Installation - Méthode 3: Autres méthodes de déploiement
|
||||
|
||||
1. Comment utiliser une URL de proxy inversé / Microsoft Azure Cloud API
|
||||
Configurez simplement API_URL_REDIRECT selon les instructions de config.py (voir l'esquisse donnée après cette liste).
|
||||
|
||||
2. Déploiement distant sur un serveur cloud (connaissance et expérience des serveurs cloud requises)
|
||||
Veuillez consulter le [Wiki de déploiement-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97).
|
||||
|
||||
3. Utilisation de WSL2 (sous-système Windows pour Linux)
|
||||
Veuillez consulter le [Wiki de déploiement-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2).
|
||||
|
||||
4. Comment exécuter sous un sous-répertoire (tel que `http://localhost/subpath`)
|
||||
Veuillez consulter les [instructions d'exécution de FastAPI](docs/WithFastapi.md).
|
||||
|
||||
5. Utilisation de docker-compose
|
||||
Veuillez lire docker-compose.yml, puis suivre les instructions fournies.
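
À titre d'illustration pour le point 1 ci-dessus : d'après les instructions de `config.py`, `API_URL_REDIRECT` est un dictionnaire qui associe l'URL officielle de l'API à votre propre point d'accès. L'esquisse ci-dessous n'est qu'un exemple hypothétique (l'URL de destination est fictive) :

```python
# Esquisse hypothétique à placer dans config.py ou config_private.py :
# on redirige l'adresse officielle de l'API OpenAI vers un proxy inversé
# ou un déploiement Azure ; l'URL de destination ci-dessous est fictive.
API_URL_REDIRECT = {
    "https://api.openai.com/v1/chat/completions":
        "https://votre-endpoint.example.com/v1/chat/completions",
}
```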
|
||||
|
||||
# Utilisation avancée
|
||||
## Personnalisation de nouveaux boutons pratiques / Plugins de fonctions personnalisées
|
||||
|
||||
1. Personnalisation de nouveaux boutons pratiques (raccourcis académiques)
|
||||
Ouvrez core_functional.py avec n'importe quel éditeur de texte, ajoutez une entrée comme suit, puis redémarrez le programme. (Si le bouton a été ajouté avec succès et est visible, le préfixe et le suffixe prennent en charge les modifications à chaud et ne nécessitent pas le redémarrage du programme pour prendre effet.)
|
||||
Par exemple
|
||||
```
|
||||
"Super coller sens": {
|
||||
# Préfixe, sera ajouté avant votre entrée. Par exemple, pour décrire votre demande, telle que traduire, expliquer du code, faire la mise en forme, etc.
|
||||
"Prefix": "Veuillez traduire le contenu suivant en chinois, puis expliquer chaque terme proprement nommé qui y apparaît avec un tableau markdown:\n\n",
|
||||
|
||||
# Suffixe, sera ajouté après votre entrée. Par exemple, en utilisant le préfixe, vous pouvez entourer votre contenu d'entrée de guillemets.
|
||||
"Suffix": "",
|
||||
},
|
||||
```
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
|
||||
</div>
|
||||
|
||||
2. Plugins de fonctions personnalisées
|
||||
|
||||
Écrivez des plugins de fonctions puissants pour effectuer toutes les tâches que vous souhaitez ou que vous ne pouvez pas imaginer.
|
||||
Les plugins de ce projet ont une difficulté de programmation et de débogage très faible. Si vous avez des connaissances de base en Python, vous pouvez simuler la fonctionnalité de votre propre plugin en suivant le modèle que nous avons fourni.
|
||||
Veuillez consulter le [Guide du plugin de fonction](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) pour plus de détails.
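
Pour donner une idée de la forme générale d'un plugin, voici une esquisse volontairement minimale : la signature exacte, le décorateur `CatchException` et l'utilitaire `update_ui` sont à vérifier dans le modèle fourni par le dépôt (voir le guide ci-dessus), et le nom `demo_plugin` est purement hypothétique.

```python
# Esquisse hypothétique d'un plugin de fonction ; reportez-vous au modèle officiel
# du dépôt pour la signature réelle et les utilitaires effectivement exposés.
from toolbox import CatchException, update_ui  # utilitaires supposés fournis par le projet

@CatchException
def demo_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # txt : contenu de la zone de saisie ; chatbot / history : état de la conversation
    chatbot.append((txt, f"Plugin de démonstration : vous avez saisi {len(txt)} caractères."))
    yield from update_ui(chatbot=chatbot, history=history)  # rafraîchit l'interface Gradio
```

Une fois enregistré auprès du projet (voir le guide pour l'emplacement exact), un tel plugin apparaît comme un bouton ou une entrée du menu déroulant de la zone de plugins.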
|
||||
|
||||
---
|
||||
# Latest Update
|
||||
|
||||
## Nouvelles fonctionnalités en cours de déploiement.
|
||||
|
||||
## Présentation de certaines fonctionnalités
|
||||
|
||||
### Affichage des images:
|
||||
1. Fonction de sauvegarde de la conversation.
|
||||
Appelez simplement "Enregistrer la conversation actuelle" dans la zone de plugin de fonction pour enregistrer la conversation actuelle en tant que fichier html lisible et récupérable. De plus, dans la zone de plugin de fonction (menu déroulant), appelez "Charger une archive de l'historique de la conversation" pour restaurer la conversation précédente. Astuce : cliquer directement sur "Charger une archive de l'historique de la conversation" sans spécifier de fichier permet de consulter le cache d'archive html précédent. Cliquez sur "Supprimer tous les enregistrements locaux de l'historique de la conversation" pour supprimer le cache d'archive html.
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/228737599-bf0a9d9c-1808-4f43-ae15-dfcc7af0f295.png" width="800" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/235222390-24a9acc0-680f-49f5-bc81-2f3161f1e049.png" width="500" >
|
||||
</div>
|
||||
|
||||
|
||||
### Si un programme peut comprendre et décomposer lui-même :
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="800" >
|
||||
</div>
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226936618-9b487e4b-ab5b-4b6e-84c6-16942102e917.png" width="800" >
|
||||
</div>
|
||||
|
||||
|
||||
### Analyse de tout projet Python/Cpp quelconque :
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="800" >
|
||||
</div>
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="800" >
|
||||
</div>
|
||||
|
||||
### Lecture et résumé générés automatiquement pour les articles en Latex
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504406-86ab97cd-f208-41c3-8e4a-7000e51cf980.png" width="800" >
|
||||
</div>
|
||||
|
||||
### Génération de rapports automatique
|
||||
2. Générer un rapport. La plupart des plugins génèrent un rapport de travail après l'exécution.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
|
||||
</div>
|
||||
|
||||
### Conception de fonctionnalités modulaires
|
||||
3. Conception de fonctionnalités modulaires avec une interface simple mais capable d'une fonctionnalité puissante.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
|
||||
</div>
|
||||
|
||||
|
||||
### Traduction de code source en anglais
|
||||
|
||||
4. C'est un projet open source qui peut "se traduire de lui-même".
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/229720562-fe6c3508-6142-4635-a83d-21eb3669baee.png" height="400" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="500" >
|
||||
</div>
|
||||
|
||||
## À faire et planification de version :
|
||||
- version 3.2+ (à faire) : Prise en charge de plus de paramètres d'interface de plugin de fonction
|
||||
- version 3.1 : Prise en charge de l'interrogation simultanée de plusieurs modèles GPT ! Prise en charge de l'API2d, prise en charge de la répartition de charge de plusieurs clés API
|
||||
- version 3.0 : Prise en charge de chatglm et d'autres petits llm
|
||||
- version 2.6 : Réorganisation de la structure du plugin, amélioration de l'interactivité, ajout de plus de plugins
|
||||
- version 2.5 : Mise à jour automatique, résolution du problème de dépassement de jeton et de texte trop long lors de la compilation du code source complet
|
||||
- version 2.4 : (1) Ajout de la fonctionnalité de traduction intégrale de PDF ; (2) Ajout d'une fonctionnalité de changement de position de zone de saisie ; (3) Ajout d'une option de disposition verticale ; (4) Optimisation du plugin de fonction multi-thread.
|
||||
- version 2.3 : Amélioration de l'interactivité multi-thread
|
||||
- version 2.2 : Prise en charge du rechargement à chaud du plugin de fonction
|
||||
- version 2.1 : Mise en page pliable
|
||||
- version 2.0 : Introduction du plugin de fonction modulaire
|
||||
- version 1.0 : Fonctionnalité de base
|
||||
5. Traduire d'autres projets open source n'est pas un problème.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="500" >
|
||||
</div>
|
||||
|
||||
## Références et apprentissage
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="500" >
|
||||
</div>
|
||||
|
||||
6. Fonction de décoration de live2d (désactivée par défaut, nécessite une modification de config.py).
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/236432361-67739153-73e8-43fe-8111-b61296edabd9.png" width="500" >
|
||||
</div>
|
||||
|
||||
7. Prise en charge du modèle de langue MOSS.
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/236639178-92836f37-13af-4fdd-984d-b4450fe30336.png" width="500" >
|
||||
</div>
|
||||
|
||||
8. Génération d'images OpenAI.
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/bc7ab234-ad90-48a0-8d62-f703d9e74665" width="500" >
|
||||
</div>
|
||||
|
||||
9. Analyse et synthèse vocales OpenAI.
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/709ccf95-3aee-498a-934a-e1c22d3d5d5b" width="500" >
|
||||
</div>
|
||||
|
||||
10. Correction de la totalité des erreurs de Latex.
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/651ccd98-02c9-4464-91e1-77a6b7d1b033" width="500" >
|
||||
</div>
|
||||
|
||||
|
||||
## Versions :
|
||||
- version 3.5 (À faire) : appel de toutes les fonctions de plugin de ce projet en langage naturel (priorité élevée)
|
||||
- version 3.4 (À faire) : amélioration du support multi-thread de chatglm en local
|
||||
- version 3.3 : Fonctionnalité intégrée d'informations d'internet
|
||||
- version 3.2 : La fonction du plugin de fonction prend désormais en charge des interfaces de paramètres plus nombreuses (fonction de sauvegarde, décodage de n'importe quel langage de code + interrogation simultanée de n'importe quelle combinaison de LLM)
|
||||
- version 3.1 : Prise en charge de l'interrogation simultanée de plusieurs modèles GPT ! Support api2d, équilibrage de charge multi-clé api.
|
||||
- version 3.0 : Prise en charge de chatglm et autres LLM de petite taille.
|
||||
- version 2.6 : Refonte de la structure des plugins, amélioration de l'interactivité, ajout de plus de plugins.
|
||||
- version 2.5 : Auto-mise à jour, résolution des problèmes de texte trop long et de dépassement de jetons lors de la compilation du projet global.
|
||||
- version 2.4 : (1) Nouvelle fonction de traduction de texte intégral PDF ; (2) Nouvelle fonction de permutation de position de la zone d'entrée ; (3) Nouvelle option de mise en page verticale ; (4) Amélioration des fonctions multi-thread de plug-in.
|
||||
- version 2.3 : Amélioration de l'interactivité multithread.
|
||||
- version 2.2 : Les plugins de fonctions peuvent désormais être rechargés à chaud.
|
||||
- version 2.1 : Disposition pliable
|
||||
- version 2.0 : Introduction de plugins de fonctions modulaires
|
||||
- version 1.0 : Fonctionnalités de base
|
||||
|
||||
Groupe QQ n°2 des développeurs de gpt_academic : 610599535
|
||||
|
||||
- Problèmes connus
|
||||
- Certains plugins de traduction de navigateur perturbent le fonctionnement de l'interface frontend de ce logiciel
|
||||
- Des versions gradio trop hautes ou trop basses provoquent de nombreuses anomalies
|
||||
|
||||
## Référence et apprentissage
|
||||
|
||||
```
|
||||
De nombreux designs d'autres projets exceptionnels ont été utilisés pour référence dans le code, notamment :
|
||||
De nombreux autres excellents projets ont été référencés dans le code, notamment :
|
||||
|
||||
# Projet 1 : De nombreuses astuces ont été empruntées à ChuanhuChatGPT
|
||||
# Projet 1 : ChatGLM-6B de Tsinghua :
|
||||
https://github.com/THUDM/ChatGLM-6B
|
||||
|
||||
# Projet 2 : JittorLLMs de Tsinghua :
|
||||
https://github.com/Jittor/JittorLLMs
|
||||
|
||||
# Projet 3 : Edge-GPT :
|
||||
https://github.com/acheong08/EdgeGPT
|
||||
|
||||
# Projet 4 : ChuanhuChatGPT :
|
||||
https://github.com/GaiZhenbiao/ChuanhuChatGPT
|
||||
|
||||
|
||||
# Projet 5 : ChatPaper :
|
||||
https://github.com/kaixindelele/ChatPaper
|
||||
|
||||
# Plus :
|
||||
https://github.com/gradio-app/gradio
|
||||
https://github.com/fghrsh/live2d_demo
|
||||
```
|
||||
@@ -2,301 +2,328 @@
|
||||
>
|
||||
> このReadmeファイルは、このプロジェクトのmarkdown翻訳プラグインによって自動的に生成されたもので、100%正確ではない可能性があります。
|
||||
>
|
||||
|
||||
# <img src="logo.png" width="40" > ChatGPT 学術最適化
|
||||
|
||||
**このプロジェクトが好きだったら、スターをつけてください。もし、より使いやすい学術用のショートカットキーまたはファンクションプラグインを発明した場合は、issueを発行するかpull requestを作成してください。また、このプロジェクト自体によって翻訳されたREADMEは[英語説明書|](docs/README_EN.md)[日本語説明書|](docs/README_JP.md)[ロシア語説明書|](docs/README_RS.md)[フランス語説明書](docs/README_FR.md)もあります。**
|
||||
|
||||
> **注意事項**
|
||||
> When installing dependencies, please strictly choose the versions specified in `requirements.txt`.
|
||||
>
|
||||
> 1. **赤色**のラベルが付いているファンクションプラグイン(ボタン)のみファイルを読み込めます。一部のプラグインはプラグインエリアのドロップダウンメニューにあります。新しいプラグインのPRを歓迎いたします!
|
||||
> `pip install -r requirements.txt`
|
||||
>
|
||||
> 2. このプロジェクトの各ファイルの機能は`self_analysis.md`(自己解析レポート)で詳しく説明されています。バージョンが追加されると、関連するファンクションプラグインをクリックして、GPTを呼び出して自己解析レポートを再生成することができます。一般的な質問は`wiki`にまとめられています。(`https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98`)
|
||||
|
||||
# <img src="logo.png" width="40" > GPT 学术优化 (GPT Academic)
|
||||
|
||||
**もしこのプロジェクトが好きなら、星をつけてください。もしあなたがより良いアカデミックショートカットまたは機能プラグインを思いついた場合、Issueをオープンするか pull request を送信してください。私たちはこのプロジェクト自体によって翻訳された[英語 |](README_EN.md)[日本語 |](README_JP.md)[한국어 |](https://github.com/mldljyh/ko_gpt_academic)[Русский |](README_RS.md)[Français](README_FR.md)のREADMEも用意しています。
|
||||
GPTを使った任意の言語にこのプロジェクトを翻訳するには、[`multi_language.py`](multi_language.py)を読んで実行してください。 (experimental)。
|
||||
|
||||
> **注意**
|
||||
>
|
||||
> 1. **赤色**で表示された関数プラグイン(ボタン)のみ、ファイルの読み取りをサポートしています。一部のプラグインは、プラグインエリアの**ドロップダウンメニュー**内にあります。また、私たちはどんな新しいプラグインのPRでも、**最優先**で歓迎し、処理します!
|
||||
>
|
||||
> 2. このプロジェクトの各ファイルの機能は、自己解析の詳細説明書である[`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)で説明されています。バージョンが進化するにつれて、関連する関数プラグインをいつでもクリックし、GPTを呼び出してプロジェクトの自己解析レポートを再生成することができます。よくある問題は[`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)にまとめられています。[インストール方法](#installation)。
|
||||
|
||||
> 3. このプロジェクトは、chatglmやRWKV、パンクなど、国内の大規模自然言語モデルを利用することをサポートし、試みることを奨励します。複数のAPIキーを共存することができ、設定ファイルに`API_KEY="openai-key1,openai-key2,api2d-key3"`のように記入することができます。`API_KEY`を一時的に変更する場合は、入力エリアに一時的な`API_KEY`を入力してEnterキーを押せば、それが有効になります。
|
||||
|
||||
|
||||
<div align="center">
|
||||
|
||||
機能 | 説明
|
||||
--- | ---
|
||||
ワンクリック整形 | 論文の文法エラーを一括で正確に修正できます。
|
||||
ワンクリック日英翻訳 | 日英翻訳には、ワンクリックで対応できます。
|
||||
ワンクリックコード説明 | コードの正しい表示と説明が可能です。
|
||||
[カスタムショートカットキー](https://www.bilibili.com/video/BV14s4y1E7jN) | カスタムショートカットキーをサポートします。
|
||||
[プロキシサーバーの設定](https://www.bilibili.com/video/BV1rc411W7Dr) | プロキシサーバーの設定をサポートします。
|
||||
モジュラーデザイン | カスタム高階関数プラグインと[関数プラグイン]、プラグイン[ホット更新]のサポートが可能です。詳細は[こちら](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
|
||||
[自己プログラム解析](https://www.bilibili.com/video/BV1cj411A7VW) | [関数プラグイン][ワンクリック理解](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)このプロジェクトのソースコード
|
||||
[プログラム解析機能](https://www.bilibili.com/video/BV1cj411A7VW) | [関数プラグイン] ワンクリックで別のPython/C/C++/Java/Lua/...プロジェクトツリーを解析できます。
|
||||
論文読解 | [関数プラグイン] LaTeX論文の全文をワンクリックで解読し、要約を生成します。
|
||||
LaTeX全文翻訳、整形 | [関数プラグイン] ワンクリックでLaTeX論文を翻訳または整形できます。
|
||||
注釈生成 | [関数プラグイン] ワンクリックで関数の注釈を大量に生成できます。
|
||||
チャット分析レポート生成 | [関数プラグイン] 実行後、まとめレポートを自動生成します。
|
||||
[arxivヘルパー](https://www.bilibili.com/video/BV1LM4y1279X) | [関数プラグイン] 入力したarxivの記事URLで要約をワンクリック翻訳+PDFダウンロードができます。
|
||||
[PDF論文全文翻訳機能](https://www.bilibili.com/video/BV1KT411x7Wn) | [関数プラグイン] PDF論文タイトルと要約を抽出し、全文を翻訳します(マルチスレッド)。
|
||||
[Google Scholar Integratorヘルパー](https://www.bilibili.com/video/BV19L411U7ia) | [関数プラグイン] 任意のGoogle Scholar検索ページURLを指定すると、gptが興味深い記事を選択します。
|
||||
数式/画像/テーブル表示 | 数式のTex形式とレンダリング形式を同時に表示できます。数式、コードのハイライトをサポートしています。
|
||||
マルチスレッド関数プラグインサポート | ChatGPTをマルチスレッドで呼び出すことができ、大量のテキストやプログラムを簡単に処理できます。
|
||||
ダークグラジオ[テーマ](https://github.com/binary-husky/chatgpt_academic/issues/173)の起動 | 「/?__dark-theme=true」というURLをブラウザに追加することで、ダークテーマに切り替えることができます。
|
||||
[多数のLLMモデル](https://www.bilibili.com/video/BV1wT411p7yf)をサポート、[API2D](https://api2d.com/)インターフェースをサポート | GPT3.5、GPT4、[清華ChatGLM](https://github.com/THUDM/ChatGLM-6B)による同時サポートは、とても素晴らしいですね!
|
||||
huggingface免科学上网[オンライン版](https://huggingface.co/spaces/qingxu98/gpt-academic) | huggingfaceにログイン後、[このスペース](https://huggingface.co/spaces/qingxu98/gpt-academic)をコピーしてください。
|
||||
...... | ......
|
||||
|
||||
|
||||
一键校正 | 一键で校正可能、論文の文法エラーを検索することができる
|
||||
一键中英翻訳 | 一键で中英翻訳可能
|
||||
一键コード解説 | コードを表示し、解説し、生成し、コードに注釈をつけることができる
|
||||
[自分でカスタマイズ可能なショートカットキー](https://www.bilibili.com/video/BV14s4y1E7jN) | 自分でカスタマイズ可能なショートカットキーをサポートする
|
||||
モジュール化された設計 | カスタマイズ可能な[強力な関数プラグイン](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions)をサポートし、プラグインは[ホットアップデート](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)に対応している
|
||||
[自己プログラム解析](https://www.bilibili.com/video/BV1cj411A7VW) | [関数プラグイン] [一键読解](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)このプロジェクトのソースコード
|
||||
プログラム解析 | [関数プラグイン] 一鍵で他のPython/C/C++/Java/Lua/...プロジェクトを分析できる
|
||||
論文の読み、[翻訳](https://www.bilibili.com/video/BV1KT411x7Wn) | [関数プラグイン] LaTex/ PDF論文の全文を一鍵で読み解き、要約を生成することができる
|
||||
LaTex全文[翻訳](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[校正](https://www.bilibili.com/video/BV1FT411H7c5/) | [関数プラグイン] LaTex論文の翻訳または校正を一鍵で行うことができる
|
||||
一括で注釈を生成 | [関数プラグイン] 一鍵で関数に注釈をつけることができる
|
||||
Markdown[中英翻訳](https://www.bilibili.com/video/BV1yo4y157jV/) | [関数プラグイン] 上記の5種類の言語の[README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)を見たことがありますか?
|
||||
チャット分析レポート生成 | [関数プラグイン] 実行後、自動的に概要報告書を生成する
|
||||
[PDF論文全文翻訳機能](https://www.bilibili.com/video/BV1KT411x7Wn) | [関数プラグイン] PDF論文からタイトルと要約を抽出し、全文を翻訳する(マルチスレッド)
|
||||
[Arxivアシスタント](https://www.bilibili.com/video/BV1LM4y1279X) | [関数プラグイン] arxiv記事のURLを入力するだけで、要約を一鍵翻訳し、PDFをダウンロードできる
|
||||
[Google Scholar 総合アシスタント](https://www.bilibili.com/video/BV19L411U7ia) | [関数プラグイン] 任意のGoogle Scholar検索ページURLを指定すると、gptが[related works](https://www.bilibili.com/video/BV1GP411U7Az/)を作成する
|
||||
インターネット情報収集+GPT | [関数プラグイン] まずGPTに[インターネットから情報を収集](https://www.bilibili.com/video/BV1om4y127ck)してから質問に回答させ、情報が常に最新であるようにする
|
||||
数式/画像/表表示 | 数式の[tex形式とレンダリング形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png)を同時に表示し、数式、コードハイライトをサポートしている
|
||||
マルチスレッド関数プラグインがサポートされている | chatgptをマルチスレッドで呼び出し、[大量のテキスト](https://www.bilibili.com/video/BV1FT411H7c5/)またはプログラムを一鍵で処理できる
|
||||
ダークグラジオ[テーマの起動](https://github.com/binary-husky/gpt_academic/issues/173) | ブラウザのURLの後ろに```/?__theme=dark```を追加すると、ダークテーマを切り替えることができます。
|
||||
[多数のLLMモデル](https://www.bilibili.com/video/BV1wT411p7yf)がサポートされ、[API2D](https://api2d.com/)がサポートされている | 同時にGPT3.5、GPT4、[清華ChatGLM](https://github.com/THUDM/ChatGLM-6B)、[復旦MOSS](https://github.com/OpenLMLab/MOSS)に対応
|
||||
より多くのLLMモデルが接続され、[huggingfaceデプロイ](https://huggingface.co/spaces/qingxu98/gpt-academic)がサポートされている | Newbingインターフェイス(Newbing)、清華大学の[Jittorllm](https://github.com/Jittor/JittorLLMs)のサポート[LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV)と[盘古α](https://openi.org.cn/pangu/)
|
||||
さらに多くの新機能(画像生成など)を紹介する... | この文書の最後に示す...
|
||||
</div>
|
||||
|
||||
|
||||
- 新しいインターフェース(config.pyのLAYOUTオプションを変更するだけで、「左右レイアウト」と「上下レイアウト」を切り替えることができます)
|
||||
- 新しいインターフェース(`config.py`のLAYOUTオプションを変更することで、「左右配置」と「上下配置」を切り替えることができます)
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
|
||||
</div>
|
||||
- All buttons are dynamically generated by reading functional.py, and custom functions can be freely added to free the clipboard.
|
||||
|
||||
|
||||
- すべてのボタンは、functional.pyを読み込んで動的に生成されます。カスタム機能を自由に追加して、クリップボードを解放します
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
|
||||
</div>
|
||||
|
||||
- 推敲/校正
|
||||
- Polishing/Correction
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
|
||||
</div>
|
||||
|
||||
- 出力に数式が含まれている場合、TeX形式とレンダリング形式の両方が表示され、コピーと読み取りが容易になります
|
||||
- If the output contains formulas, they are displayed in both TeX and rendering forms, making it easy to copy and read.
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
|
||||
</div>
|
||||
|
||||
- プロジェクトのコードを見るのが面倒?chatgptに整備されたプロジェクトを直接与えましょう
|
||||
- Don't feel like looking at the project code? Just ask chatgpt directly.
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
|
||||
</div>
|
||||
|
||||
- 多数の大規模言語モデルの混合呼び出し(ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
|
||||
|
||||
- Mixed calls of multiple large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
|
||||
</div>
|
||||
|
||||
多数の大規模言語モデルの混合呼び出し[huggingfaceテスト版](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta)(huggigface版はchatglmをサポートしていません)
|
||||
|
||||
|
||||
---
|
||||
|
||||
## インストール-方法1:直接実行 (Windows、LinuxまたはMacOS)
|
||||
# Installation
|
||||
|
||||
## Installation-Method 1: Directly run (Windows, Linux or MacOS)
|
||||
|
||||
1. Download the project.
|
||||
|
||||
1. プロジェクトをダウンロードします。
|
||||
```sh
|
||||
git clone https://github.com/binary-husky/chatgpt_academic.git
|
||||
cd chatgpt_academic
|
||||
git clone https://github.com/binary-husky/gpt_academic.git
|
||||
cd gpt_academic
|
||||
```
|
||||
|
||||
2. API_KEYとプロキシ設定を構成する
|
||||
2. Configure the API_KEY.
|
||||
|
||||
`config.py`で、海外のProxyとOpenAI API KEYを構成して説明します。
|
||||
```
|
||||
1.あなたが中国にいる場合、OpenAI APIをスムーズに使用するには海外プロキシを設定する必要があります。構成の詳細については、config.py(1.その中のUSE_PROXYをTrueに変更し、2.手順に従ってプロキシを変更する)を詳細に読んでください。
|
||||
2. OpenAI API KEYを構成する。OpenAIのウェブサイトでAPI KEYを取得してください。一旦API KEYを手に入れると、config.pyファイルで設定するだけです。
|
||||
3.プロキシネットワークに関連する問題(ネットワークタイムアウト、プロキシが動作しない)をhttps://github.com/binary-husky/chatgpt_academic/issues/1にまとめました。
|
||||
```
|
||||
(P.S. プログラム実行時にconfig.pyの隣にconfig_private.pyという名前のプライバシー設定ファイルを作成し、同じ名前の設定を上書きするconfig_private.pyが存在するかどうかを優先的に確認します。そのため、私たちの構成読み取りロジックを理解できる場合は、config.pyの隣にconfig_private.pyという名前の新しい設定ファイルを作成し、その中のconfig.pyから設定を移動してください。config_private.pyはgitで保守されていないため、プライバシー情報をより安全にすることができます。)
|
||||
Configure the API KEY and other settings in `config.py` and [special network environment settings](https://github.com/binary-husky/gpt_academic/issues/1).
|
||||
|
||||
(P.S. When the program is running, it will first check if there is a private configuration file named `config_private.py`, and use the configuration in it to override the same name configuration in `config.py`. Therefore, if you can understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py`, and transfer (copy) the configuration in `config.py` to `config_private.py`. `config_private.py` is not controlled by git and can make your privacy information more secure. P.S. The project also supports configuring most options through `environment variables`, and the writing format of environment variables refers to the `docker-compose` file. Reading priority: `environment variables` > `config_private.py` > `config.py`)
|
||||
|
||||
3. Install dependencies.
|
||||
|
||||
3. 依存関係をインストールします。
|
||||
```sh
|
||||
# 選択肢があります。
|
||||
# (Choose I: If familiar with Python)(Python version 3.9 or above, the newer the better) Note: Use the official pip source or Ali pip source. Temporary switching source method: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
|
||||
python -m pip install -r requirements.txt
|
||||
|
||||
|
||||
# (選択肢2) もしAnacondaを使用する場合、手順は同様です:
|
||||
# (選択肢2.1) conda create -n gptac_venv python=3.11
|
||||
# (選択肢2.2) conda activate gptac_venv
|
||||
# (選択肢2.3) python -m pip install -r requirements.txt
|
||||
|
||||
# 注: 公式のpipソースまたはAlibabaのpipソースを使用してください。 別のpipソース(例:一部の大学のpip)は問題が発生する可能性があります。 一時的なソースの切り替え方法:
|
||||
# python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
|
||||
# (Choose II: If not familiar with Python) Use anaconda, the steps are the same (https://www.bilibili.com/video/BV1rc411W7Dr):
|
||||
conda create -n gptac_venv python=3.11 # Create anaconda environment.
|
||||
conda activate gptac_venv # Activate the anaconda environment.
|
||||
python -m pip install -r requirements.txt # This step is the same as the pip installation step.
|
||||
```
|
||||
|
||||
もしあなたが清華ChatGLMをサポートする必要がある場合、さらに多くの依存関係をインストールする必要があります(Pythonに慣れない方やコンピューターの設定が十分でない方は、試みないことをお勧めします):
|
||||
<details><summary>If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, click to expand.</summary>
|
||||
<p>
|
||||
|
||||
[Optional Steps] If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, you need to install more dependencies (precondition: familiar with Python + used Pytorch + computer configuration). Strong enough):
|
||||
|
||||
```sh
|
||||
# Optional step I: support Tsinghua ChatGLM. Tsinghua ChatGLM remarks: If you encounter the error "Call ChatGLM fail cannot load ChatGLM parameters normally", refer to the following: 1: The version installed above is torch+cpu version, using cuda requires uninstalling torch and reinstalling torch+cuda; 2: If the model cannot be loaded due to insufficient local configuration, you can modify the model accuracy in request_llm/bridge_chatglm.py, and change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True).
|
||||
python -m pip install -r request_llm/requirements_chatglm.txt
|
||||
|
||||
# Optional Step II: Support Fudan MOSS.
|
||||
python -m pip install -r request_llm/requirements_moss.txt
|
||||
git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # Note that when executing this line of code, it must be in the project root.
|
||||
|
||||
# 【Optional Step III】Ensure that the AVAIL_LLM_MODELS in the config.py configuration file contains the expected model. Currently, all supported models are as follows (jittorllms series currently only supports the docker solution):
|
||||
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
|
||||
```
|
||||
|
||||
4. 実行
|
||||
</p>
|
||||
</details>
|
||||
|
||||
|
||||
|
||||
4. Run.
|
||||
|
||||
```sh
|
||||
python main.py
|
||||
```

5. Testing Function Plugin
|
||||
```
|
||||
- Test function plugin template function (requires gpt to answer what happened today in history), you can use this function as a template to implement more complex functions
|
||||
Click "[Function Plugin Template Demo] Today in History"
|
||||
```
|
||||
|
||||
5. 関数プラグインのテスト
|
||||
```
|
||||
- Pythonプロジェクト分析のテスト
|
||||
入力欄に `./crazy_functions/test_project/python/dqn` と入力し、「Pythonプロジェクト全体の解析」をクリックします。
|
||||
- 自己コード解読のテスト
|
||||
「[マルチスレッドデモ] このプロジェクト自体を解析します(ソースを翻訳して解読します)」をクリックします。
|
||||
- 実験的な機能テンプレート関数のテスト(GPTが「今日の歴史」に何が起こったかを回答することが求められます)。この関数をテンプレートとして使用して、より複雑な機能を実装できます。
|
||||
「[関数プラグインテンプレートデモ] 今日の歴史」をクリックします。
|
||||
- 関数プラグインエリアのドロップダウンメニューには他にも選択肢があります。
|
||||
```
|
||||
## Installation-Methods 2: Using Docker
|
||||
|
||||
## インストール方法2:Dockerを使用する(Linux)
|
||||
1. Only ChatGPT (recommended for most people)
|
||||
|
||||
1. ChatGPTのみ(大多数の人にお勧めです)
|
||||
``` sh
|
||||
# プロジェクトのダウンロード
|
||||
git clone https://github.com/binary-husky/chatgpt_academic.git
|
||||
cd chatgpt_academic
|
||||
# 海外プロキシとOpenAI API KEYの設定
|
||||
config.pyを任意のテキストエディタで編集する
|
||||
# インストール
|
||||
docker build -t gpt-academic .
|
||||
# 実行
|
||||
``` sh
|
||||
git clone https://github.com/binary-husky/gpt_academic.git # Download project
|
||||
cd gpt_academic # Enter path
|
||||
nano config.py # Edit config.py with any text editor ‑ configure "Proxy," "API_KEY," "WEB_PORT" (e.g., 50923) and more
|
||||
docker build -t gpt-academic . # installation
|
||||
|
||||
#(Last step-Option 1) In a Linux environment, `--net=host` is more convenient and quick
|
||||
docker run --rm -it --net=host gpt-academic
|
||||
|
||||
# 関数プラグインのテスト
|
||||
## 関数プラグインテンプレート関数のテスト(GPTが「今日の歴史」に何が起こったかを回答することが求められます)。この関数をテンプレートとして使用して、より複雑な機能を実装できます。
|
||||
「[関数プラグインテンプレートデモ] 今日の歴史」をクリックします。
|
||||
## Latexプロジェクトの要約を書くテスト
|
||||
入力欄に./crazy_functions/test_project/latex/attentionと入力し、「テックス論文を読んで要約を書く」をクリックします。
|
||||
## Pythonプロジェクト分析のテスト
|
||||
入力欄に./crazy_functions/test_project/python/dqnと入力し、[Pythonプロジェクトの全解析]をクリックします。
|
||||
|
||||
関数プラグインエリアのドロップダウンメニューには他にも選択肢があります。
|
||||
#(Last step-Option 2) In a macOS/windows environment, the -p option must be used to expose the container port (e.g., 50923) to the port on the host.
|
||||
docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
|
||||
```
|
||||
|
||||
2. ChatGPT + ChatGLM(Dockerに非常に詳しい人+十分なコンピューター設定が必要)
|
||||
2. ChatGPT + ChatGLM + MOSS (requires familiarity with Docker)
|
||||
|
||||
|
||||
|
||||
```sh
|
||||
# Dockerfileの編集
|
||||
cd docs && nano Dockerfile+ChatGLM
|
||||
# ビルド方法
|
||||
docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM .
|
||||
# 実行方法 (1) 直接実行:
|
||||
docker run --rm -it --net=host --gpus=all gpt-academic
|
||||
# 実行方法 (2) コンテナに入って調整する:
|
||||
docker run --rm -it --net=host --gpus=all gpt-academic bash
|
||||
``` sh
|
||||
# Modify docker-compose.yml, delete plans 1 and 3, and retain plan 2. Modify the configuration of plan 2 in docker-compose.yml, and reference the comments for instructions.
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
## インストール方法3:その他のデプロイ方法
|
||||
|
||||
1. クラウドサーバーデプロイ
|
||||
[デプロイwiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
|
||||
|
||||
2. WSL2を使用 (Windows Subsystem for Linux)
|
||||
[デプロイwiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
|
||||
3. ChatGPT + LLAMA + Pangu + RWKV (requires familiarity with Docker)
|
||||
``` sh
|
||||
# Modify docker-compose.yml, delete plans 1 and 2, and retain plan 3. Modify the configuration of plan 3 in docker-compose.yml, and reference the comments for instructions.
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
|
||||
## インストール-プロキシ設定
|
||||
1. 通常の方法
|
||||
[プロキシを設定する](https://github.com/binary-husky/chatgpt_academic/issues/1)
|
||||
## Installation-Method 3: Other Deployment Methods
|
||||
|
||||
2. 初心者向けチュートリアル
|
||||
[初心者向けチュートリアル](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)
|
||||
1. How to use proxy URL/Microsoft Azure API
|
||||
Configure API_URL_REDIRECT according to the instructions in `config.py`.
|
||||
|
||||
2. Remote Cloud Server Deployment (requires cloud server knowledge and experience)
|
||||
Please visit [Deployment Wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
|
||||
|
||||
3. Using WSL2 (Windows Subsystem for Linux Subsystem)
|
||||
Please visit [Deployment Wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
|
||||
|
||||
4. How to run on a secondary URL (such as `http://localhost/subpath`)
|
||||
Please visit [FastAPI Running Instructions](docs/WithFastapi.md)
|
||||
|
||||
5. Run with docker-compose
|
||||
Please read docker-compose.yml and follow the instructions provided therein.
|
||||
---
|
||||
# Advanced Usage
|
||||
## Customize new convenience buttons/custom function plugins
|
||||
|
||||
## カスタムボタンの追加(学術ショートカットキー)
|
||||
|
||||
`core_functional.py`を任意のテキストエディタで開き、以下のエントリーを追加し、プログラムを再起動してください。(ボタンが正常に追加されて表示されている場合、プレフィックスとサフィックスはホット編集に対応しているため、プログラムを再起動しなくてもすぐに反映されます。)
|
||||
|
||||
例:
|
||||
1. Custom new convenience buttons (academic shortcut keys)
|
||||
Open `core_functional.py` with any text editor, add the item as follows, and restart the program. (If the button has been added successfully and is visible, the prefix and suffix support hot modification without restarting the program.)
|
||||
example:
|
||||
```
|
||||
"超级英译中": {
|
||||
# プレフィックス - あなたの要求を説明するために使用されます。翻訳、コードの説明、編集など。
|
||||
"Prefix": "以下のコンテンツを中国語に翻訳して、マークダウンテーブルを使用して専門用語を説明してください。\n\n",
|
||||
"Super English to Chinese Translation": {
|
||||
# Prefix, which will be added before your input. For example, used to describe your request, such as translation, code interpretation, polish, etc.
|
||||
"Prefix": "Please translate the following content into Chinese, and explain the proper nouns in the text in a markdown table one by one:\n\n",
|
||||
|
||||
# サフィックス - プレフィックスと共に使用すると、入力内容を引用符で囲むことができます。
|
||||
# Suffix, which will be added after your input. For example, in combination with the prefix, you can surround your input content with quotation marks.
|
||||
"Suffix": "",
|
||||
},
|
||||
```
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
|
||||
</div>
|
||||
|
||||
2. Custom function plugins
|
||||
|
||||
Write powerful function plugins to perform any task you can and cannot think of.
|
||||
The difficulty of writing and debugging plugins in this project is low, and as long as you have a certain amount of python basic knowledge, you can follow the template provided by us to achieve your own plugin functions.
|
||||
For details, please refer to the [Function Plugin Guide](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
|
||||
|
||||
---
|
||||
|
||||
## いくつかの機能の例
|
||||
|
||||
### 画像表示:
|
||||
|
||||
# Latest Update
|
||||
## New feature dynamics.
|
||||
1. ダイアログの保存機能。関数プラグインエリアで '現在の会話を保存' を呼び出すと、現在のダイアログを読み取り可能で復元可能なHTMLファイルとして保存できます。さらに、関数プラグインエリア(ドロップダウンメニュー)で 'ダイアログの履歴保存ファイルを読み込む' を呼び出すことで、以前の会話を復元することができます。Tips:ファイルを指定せずに 'ダイアログの履歴保存ファイルを読み込む' をクリックすることで、過去のHTML保存ファイルのキャッシュを表示することができます。'すべてのローカルダイアログの履歴を削除' をクリックすることで、すべてのHTML保存ファイルのキャッシュを削除できます。
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/228737599-bf0a9d9c-1808-4f43-ae15-dfcc7af0f295.png" width="800" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/235222390-24a9acc0-680f-49f5-bc81-2f3161f1e049.png" width="500">
|
||||
</div>
|
||||
|
||||
|
||||
### プログラムが自己解析できる場合:
|
||||
|
||||
2. 報告書を生成します。ほとんどのプラグインは、実行が終了した後に作業報告書を生成します。
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="800" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300">
|
||||
</div>
|
||||
|
||||
3. モジュール化された機能設計、簡単なインターフェースで強力な機能をサポートする。
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226936618-9b487e4b-ab5b-4b6e-84c6-16942102e917.png" width="800" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400">
|
||||
</div>
|
||||
|
||||
### 他のPython/Cppプロジェクトの解析:
|
||||
|
||||
4. 自分自身を「翻訳」できるオープンソースプロジェクトです。
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="800" >
|
||||
</div>
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="800" >
|
||||
</div>
|
||||
|
||||
### Latex論文の一括読解と要約生成
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504406-86ab97cd-f208-41c3-8e4a-7000e51cf980.png" width="800" >
|
||||
</div>
|
||||
|
||||
### 自動報告生成
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
|
||||
</div>
|
||||
|
||||
### モジュール化された機能デザイン
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="500">
|
||||
</div>
|
||||
|
||||
|
||||
### ソースコードの英語翻訳
|
||||
|
||||
5. 他のオープンソースプロジェクトの解読、容易である。
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/229720562-fe6c3508-6142-4635-a83d-21eb3669baee.png" height="400" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="500">
|
||||
</div>
|
||||
|
||||
## Todo およびバージョン計画:
|
||||
- version 3.2+ (todo): 関数プラグインがより多くのパラメーターインターフェースをサポートするようになります。
|
||||
- version 3.1: 複数のgptモデルを同時にクエリし、api2dをサポートし、複数のapikeyの負荷分散をサポートします。
|
||||
- version 3.0: chatglmおよび他の小型llmのサポート
|
||||
- version 2.6: プラグイン構造を再構成し、相互作用性を高め、より多くのプラグインを追加しました。
|
||||
- version 2.5: 自己更新。総括的な大規模プロジェクトのソースコードをまとめた場合、テキストが長すぎる、トークンがオーバーフローする問題を解決します。
|
||||
- version 2.4: (1)PDF全文翻訳機能を追加。(2)入力エリアの位置を切り替える機能を追加。(3)垂直レイアウトオプションを追加。(4)マルチスレッド関数プラグインの最適化。
|
||||
- version 2.3: 多スレッドの相互作用性を向上させました。
|
||||
- version 2.2: 関数プラグインでホットリロードをサポート
|
||||
- version 2.1: 折りたたみ式レイアウト
|
||||
- version 2.0: モジュール化された関数プラグインを導入
|
||||
- version 1.0: 基本機能
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="500">
|
||||
</div>
|
||||
|
||||
## 参考および学習
|
||||
6. [Live2D](https://github.com/fghrsh/live2d_demo)のデコレート小機能です。(デフォルトでは閉じてますが、 `config.py`を変更する必要があります。)
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/236432361-67739153-73e8-43fe-8111-b61296edabd9.png" width="500">
|
||||
</div>
|
||||
|
||||
7. 新たにMOSS大言語モデルのサポートを追加しました。
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/236639178-92836f37-13af-4fdd-984d-b4450fe30336.png" width="500">
|
||||
</div>
|
||||
|
||||
8. OpenAI画像生成
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/bc7ab234-ad90-48a0-8d62-f703d9e74665" width="500">
|
||||
</div>
|
||||
|
||||
9. OpenAIオーディオの解析とサマリー
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/709ccf95-3aee-498a-934a-e1c22d3d5d5b" width="500">
|
||||
</div>
|
||||
|
||||
10. 全文校正されたLaTeX
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/651ccd98-02c9-4464-91e1-77a6b7d1b033" width="500">
|
||||
</div>
|
||||
|
||||
|
||||
|
||||
## バージョン:
|
||||
- version 3.5(作業中):すべての関数プラグインを自然言語で呼び出すことができるようにする(高い優先度)。
|
||||
- version 3.4(作業中):chatglmのローカルモデルのマルチスレッドをサポートすることで、機能を改善する。
|
||||
- version 3.3:+Web情報の総合機能
|
||||
- version 3.2:関数プラグインでさらに多くのパラメータインターフェイスをサポートする(ダイアログの保存機能、任意の言語コードの解読+同時に任意のLLM組み合わせに関する問い合わせ)
|
||||
- version 3.1:複数のGPTモデルを同時に質問できるようになりました! api2dをサポートし、複数のAPIキーを均等に負荷分散することができます。
|
||||
- version 3.0:chatglmとその他の小型LLMのサポート。
|
||||
- version 2.6:プラグイン構造を再構築し、対話内容を高め、より多くのプラグインを追加しました。
|
||||
- version 2.5:自己アップデートし、長文書やトークンのオーバーフローの問題を解決しました。
|
||||
- version 2.4:(1)全文翻訳のPDF機能を追加しました。(2)入力エリアの位置切り替え機能を追加しました。(3)垂直レイアウトオプションを追加しました。(4)マルチスレッド関数プラグインを最適化しました。
|
||||
- version 2.3:マルチスレッド性能の向上。
|
||||
- version 2.2:関数プラグインのホットリロードをサポートする。
|
||||
- version 2.1:折りたたみ式レイアウト。
|
||||
- version 2.0:モジュール化された関数プラグインを導入。
|
||||
- version 1.0:基本機能
|
||||
|
||||
gpt_academic開発者QQグループ-2:610599535
|
||||
|
||||
- 既知の問題
|
||||
- 一部のブラウザ翻訳プラグインが、このソフトウェアのフロントエンドの実行を妨害する
|
||||
- gradioバージョンが高すぎるか低すぎると、多くの異常が引き起こされる
|
||||
|
||||
## 参考学習
|
||||
|
||||
```
|
||||
多くの優秀なプロジェクトの設計を参考にしています。主なものは以下の通りです:
|
||||
コードの中には、他の優れたプロジェクトの設計から参考にしたものがたくさん含まれています:
|
||||
|
||||
# 参考プロジェクト1:ChuanhuChatGPTから多くのテクニックを借用
|
||||
# プロジェクト1:清華ChatGLM-6B:
|
||||
https://github.com/THUDM/ChatGLM-6B
|
||||
|
||||
# プロジェクト2:清華JittorLLMs:
|
||||
https://github.com/Jittor/JittorLLMs
|
||||
|
||||
# プロジェクト3:Edge-GPT:
|
||||
https://github.com/acheong08/EdgeGPT
|
||||
|
||||
# プロジェクト4:ChuanhuChatGPT:
|
||||
https://github.com/GaiZhenbiao/ChuanhuChatGPT
|
||||
|
||||
|
||||
# プロジェクト5:ChatPaper:
|
||||
https://github.com/kaixindelele/ChatPaper
|
||||
|
||||
# その他:
|
||||
https://github.com/gradio-app/gradio
|
||||
https://github.com/fghrsh/live2d_demo
|
||||
```
|
||||
@@ -2,204 +2,197 @@
>
> This translated file is automatically generated by the markdown translation module of this project and may not be 100% correct.
>

# <img src="logo.png" width="40" > GPT Academic Optimization (GPT Academic)

**If you like this project, please give it a star. If you have come up with more useful academic shortcut keys or function plugins, feel free to open an issue or pull request.** We also have an [English README](docs/README_EN.md) translated by this very project.
To translate this project into an arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).

> **Note**
>
> 1. Please note that only function plugins (buttons) marked in **red** support reading files, and some plugins are located in the **drop-down menu** of the plugin area. In addition, we welcome and process pull requests for any new plugins with the **highest priority**!
>
> 2. The functions of each file in this project are described in detail in the self-analysis report [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). As the version iterates, you can also re-generate the project's function report at any time by clicking the corresponding function plugin to call GPT. Frequently asked questions are collected in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98).

<div align="center">

Feature | Description
--- | ---
One-click polishing | Supports one-click polishing and one-click search for grammar errors in academic papers
One-click Chinese-English translation | One-click switching between Chinese and English
One-click code explanation | Correctly displays and explains code
[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys
[Proxy server configuration](https://www.bilibili.com/video/BV1rc411W7Dr) | Supports proxy server configuration
Modular design | Supports custom high-order function plugins; plugins support [hot reloading](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
[Program self-analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] [One-click review](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of this project's source code
[Program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] One-click analysis of the project tree of other Python/C/C++/Java/Lua/... projects
Paper reading | [Function plugin] One-click reading of the full text of a LaTeX paper and generation of an abstract
LaTeX full-text translation and polishing | [Function plugin] One-click translation or polishing of a LaTeX paper
Batch comment generation | [Function plugin] One-click batch generation of function comments
Chat analysis report generation | [Function plugin] Automatically generates a summary report after execution
[Arxiv helper](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plugin] Enter an arxiv article URL to translate the abstract and download the PDF with one click
[PDF paper full-text translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plugin] Extracts the title and abstract of a paper and translates the full text (multi-threaded)
[Google Scholar integration helper](https://www.bilibili.com/video/BV19L411U7ia) | [Function plugin] Given any Google Scholar search page URL, let GPT pick interesting articles for you
Formula/image/table display | Displays formulas in both TeX form and rendered form; supports formula and code highlighting
Multi-threaded function plugin support | Supports multi-threaded plugin calls; process huge amounts of text or whole programs with one click
Dark gradio theme at startup [more details](https://github.com/binary-husky/chatgpt_academic/issues/173) | Add /?__dark-theme=true to the end of the browser URL to switch to the dark theme
[Multiple LLM model support](https://www.bilibili.com/video/BV1wT411p7yf), API2D support | Being served simultaneously by GPT3.5, GPT4 and [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B) must feel great, right?
Huggingface deployment that does not require a proxy [online experience](https://huggingface.co/spaces/qingxu98/gpt-academic) | After logging in to huggingface, copy [this space URL](https://huggingface.co/spaces/qingxu98/gpt-academic)
…… | ……

</div>

- New interface (modify the LAYOUT option in config.py to switch between "horizontal layout" and "vertical layout")
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
</div>

> 2. The functionality of each file in this project is described in detail in the self-analysis document [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). As versions iterate, you can re-generate the project's self-analysis report at any time by clicking the relevant function plugin to call GPT. Common questions are collected in the [`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installation method](#installation).
>
> 3. This project is compatible with, and encourages experimenting with, Chinese large language models such as chatglm, RWKV, Pangu, etc. Multiple api-keys can coexist and can be specified in the configuration file, e.g. `API_KEY="openai-key1,openai-key2,api2d-key3"`. To temporarily change the `API_KEY`, enter a temporary `API_KEY` in the input area and press Enter; it will then take effect.

> **Note**
>
> When installing dependencies, strictly use the versions **specified in requirements.txt**.
>
> `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/`

## Task

You are a professional translator of scientific papers.

- All buttons are generated dynamically by reading functional.py and can easily be adapted to custom needs, freeing up the clipboard.

Translate this Markdown file into Russian. Do not modify the existing Markdown commands; reply only with the translated result.

## Result

Feature | Description
--- | ---
One-click polishing | Supports one-click polishing and one-click search for grammar errors in academic papers
One-click Chinese-English translation | One-click Chinese-English translation
One-click code explanation | Displays, explains, generates, and comments code
[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys
Modular design | Supports powerful custom [function plugins](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions); plugins support [hot reloading](https://github.com/binary-husky/gpt_academic/wiki/Function-Plug-in-Guide)
[Program self-analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] [One-click review](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academicProject-Self-analysis-Report) of this project's source code
[Program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] One-click analysis of the project tree of other Python/C/C++/Java/Lua/... projects
Paper reading, paper [translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plugin] One-click reading of the full text of a paper and generation of an abstract
[LaTeX](https://www.bilibili.com/video/BV1nk4y1Y7Js/) full-text translation and polishing | [Function plugin] One-click translation or polishing of a LaTeX paper
Batch comment generation | [Function plugin] One-click batch generation of function comments
Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plugin] Have you seen the [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) in those 5 languages?
Chat analysis report generation | [Function plugin] Automatically generates a summary report after execution
[PDF paper full-text translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plugin] Extracts the title and abstract of a [PDF paper](https://www.bilibili.com/video/BV1KT411x7Wn) and translates the full text (multi-threaded)
[Arxiv helper](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plugin] Enter an arxiv article URL to translate the abstract and download the PDF with one click
[Google Scholar integration helper](https://www.bilibili.com/video/BV19L411U7ia) | [Function plugin] Given any Google Scholar search page URL, let GPT help you [write a related-work overview](https://www.bilibili.com/video/BV1GP411U7Az/)
Internet information aggregation + GPT | [Function plugin] One click to [have GPT fetch information from the Internet](https://www.bilibili.com/video/BV1om4y127ck) before answering, so information never goes out of date
Formula/image/table display | Displays formulas in both [TeX form and rendered form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png); supports formula and code highlighting
Multi-threaded function plugin support | Supports multi-threaded calls to chatgpt; process [huge amounts of text](https://www.bilibili.com/video/BV1FT411H7c5/) or whole programs with one click
Dark gradio theme at startup | Add ```/?__theme=dark``` after the browser URL to switch to the dark theme
[Multiple LLM model support](https://www.bilibili.com/video/BV1wT411p7yf), [API2D](https://api2d.com/) support | Served simultaneously by GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B) and [Fudan MOSS](https://github.com/OpenLMLab/MOSS)
More LLM model integrations, support for [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Newbing interface (new Bing), plus support for [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) and [Pangu α](https://openi.org.cn/pangu/)
More new features (image generation, etc.) | See the end of this file…

- All buttons are dynamically generated by reading functional.py, and custom functions can be freely added to liberate the clipboard
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
</div>

- Polishing/proofreading
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
</div>

- If the output contains formulas, they are displayed in both TeX form and rendered form for easy copying and reading
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
</div>

- Don't feel like looking at the project code? Just show the whole project to chatgpt
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
</div>

- Mixing multiple large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
</div>

Multiple large language models are also mixed in the [huggingface beta version](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta) (the huggingface version does not support chatglm).

---
# Installation
## Installation - Method 1: Run directly (Windows, Linux or MacOS)

1. Download the project
```sh
git clone https://github.com/binary-husky/gpt_academic.git
cd gpt_academic
```

2. Configure API_KEY and proxy settings

In `config.py`, configure the overseas proxy and the OpenAI API KEY; explanations below:
```
1. If you are in China, you need to set up an overseas proxy to use the OpenAI API. Read config.py carefully for instructions (1. change USE_PROXY to True; 2. modify the proxies according to the instructions).
2. Configure the OpenAI API KEY. You need to register on the OpenAI website and obtain an API KEY, then set it in config.py.
3. Issues related to network problems (network timeouts, proxy not working) are collected here: https://github.com/binary-husky/chatgpt_academic/issues/1
```
See also the [special network environment settings](https://github.com/binary-husky/gpt_academic/issues/1).

(P.S. When the program runs, it first checks whether there is a private configuration file named `config_private.py` and uses the settings in it to override the same-named settings in `config.py`. Therefore, if you understand our configuration-reading logic, we strongly recommend creating a new configuration file named `config_private.py` next to `config.py` and moving (copying) the settings from `config.py` into `config_private.py`. `config_private.py` is not tracked by git, which keeps your private information more secure. The project also supports configuring most options through `environment variables`; the format of the environment variables follows the `docker-compose` file. Reading priority: `environment variable` > `config_private.py` > `config.py`.)

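For illustration, a minimal `config_private.py` might look like the sketch below. This is an assumed example, not the project's full configuration template; the option names (`API_KEY`, `USE_PROXY`, `proxies`) follow the explanations above, and the key and proxy address are placeholders you must replace with your own.

```python
# config_private.py -- minimal sketch (assumed example, not the official template).
# Any option defined here overrides the option of the same name in config.py,
# and this file is ignored by git, so secrets stay out of version control.

# One key, or several comma-separated keys for load balancing (e.g. "key1,key2,key3").
API_KEY = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder

# Enable the proxy only if you need one to reach the OpenAI API.
USE_PROXY = True
proxies = {
    "http":  "socks5h://localhost:11284",   # placeholder proxy address
    "https": "socks5h://localhost:11284",
}
```
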
3. Install dependencies
```sh
# (Option I: if familiar with Python) (Python 3.9 or above; the newer the better). Note: use the official pip source or the aliyun pip source; temporary source-switching method: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
python -m pip install -r requirements.txt

# (Option II: if unfamiliar with Python) Use Anaconda; the steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr):
conda create -n gptac_venv python=3.11    # create an Anaconda environment
conda activate gptac_venv                 # activate the Anaconda environment
python -m pip install -r requirements.txt # this step is the same as the pip installation
```

<details><summary>If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, click here to expand</summary>
<p>

[Optional step] If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, you need to install additional dependencies (prerequisites: familiar with Python, have used PyTorch, and a reasonably powerful computer):
```sh
# [Optional step I] Support Tsinghua ChatGLM. Note: if you encounter the "Call ChatGLM fail, cannot load ChatGLM parameters normally" error, refer to the following: 1. The default installation above is the torch+cpu version; to use CUDA, uninstall torch and reinstall torch+cuda. 2. If the model cannot be loaded due to insufficient local hardware, you can lower the model precision in request_llm/bridge_chatglm.py: change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llm/requirements_chatglm.txt

# [Optional step II] Support Fudan MOSS
python -m pip install -r request_llm/requirements_moss.txt
git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss  # note: you must be in the project root path when executing this line

# [Optional step III] Make sure AVAIL_LLM_MODELS in the config.py configuration file contains the expected models. The currently supported models are as follows (the jittorllms series currently only supports the docker solution):
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
```

</p>
</details>


4. Run
```sh
python main.py
```

5. Test the function plugins
```
- Test Python project analysis
    In the input area, enter `./crazy_functions/test_project/python/dqn`, then click "Analyze the entire Python project"
- Test the project's self-reading of its own code
    Click "[Multi-threaded demo] Analyze this project itself (decode the source code)"
- Test LaTeX paper abstract summarization
    In the input area, enter `./crazy_functions/test_project/latex/attention`, then click "Read the LaTeX paper and write an abstract"
- Test the template function plugin (asks GPT to answer what happened in history on this day); you can use this function as a template to implement more complex functions
    Click "[Function plugin template demo] On this day in history"
- More function plugins are available in the drop-down menu at the bottom
```

## Installation - Method 2: Using Docker

1. ChatGPT only (recommended for most people)

``` sh
git clone https://github.com/binary-husky/gpt_academic.git  # download the project
cd gpt_academic                                             # enter the path
nano config.py                                              # edit config.py with any text editor to configure "Proxy", "API_KEY" and "WEB_PORT" (e.g. 50923)
docker build -t gpt-academic .                              # install

# (Last step, option 1) In a Linux environment, using `--net=host` is more convenient and faster
docker run --rm -it --net=host gpt-academic
# (Last step, option 2) In macOS/Windows environments, only the -p option can be used to expose the container's port (e.g. 50923) to a port on the host
docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
```

2. ChatGPT + ChatGLM + MOSS (requires familiarity with Docker)

``` sh
# Modify the Dockerfile
cd docs && nano Dockerfile+ChatGLM
# How to build (Dockerfile+ChatGLM is under the docs path; cd docs first)
docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM .
# How to run, if you want to enter the container and make some adjustments before launch:
docker run --rm -it --net=host --gpus=all gpt-academic bash

# Edit docker-compose.yml: delete solutions 1 and 3 and keep solution 2. Modify the configuration of solution 2 in docker-compose.yml, referring to the comments in the file
docker-compose up
```

3. ChatGPT + LLAMA + PanGu + RWKV (requires familiarity with Docker)
``` sh
# Edit docker-compose.yml: delete solutions 1 and 2 and keep solution 3. Modify the configuration of solution 3 in docker-compose.yml, referring to the comments in the file
docker-compose up
```

## Installation - Method 3: Other Deployment Methods

1. How to use a reverse proxy URL / Microsoft Azure API
Configure API_URL_REDIRECT according to the instructions in `config.py`.

2. Remote cloud server deployment (requires knowledge and experience of cloud servers)
Please visit [Deployment Wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)

3. Using WSL2 (Windows Subsystem for Linux)
Please visit [Deployment Wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)

4. How to run under a secondary URL (such as `http://localhost/subpath`)
Please visit the [FastAPI operating instructions](docs/WithFastapi.md) (an illustrative sketch follows at the end of this section)

5. Running with docker-compose
Please read docker-compose.yml and follow its prompts.

## Installation - Proxy configuration
### Method 1: The regular way
[Proxy configuration](https://github.com/binary-husky/chatgpt_academic/issues/1)

### Method 2: A beginner's guide
[Beginner's guide](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)

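For a rough idea of what running under a secondary URL involves, here is a generic, hedged sketch; it is not the recipe from docs/WithFastapi.md. It mounts a Gradio UI on a FastAPI application under `/subpath`; the blocks object `demo` and the port are placeholders.

```python
# Generic illustration of serving a Gradio UI under http://localhost:8000/subpath.
# This is an assumed sketch, not this project's docs/WithFastapi.md implementation.
import gradio as gr
import uvicorn
from fastapi import FastAPI

app = FastAPI()  # the outer FastAPI application

with gr.Blocks() as demo:  # placeholder UI standing in for the real interface
    gr.Markdown("The gpt_academic-style interface would be built here.")

# Mount the Gradio app under /subpath; other FastAPI routes remain available at the root.
app = gr.mount_gradio_app(app, demo, path="/subpath")

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
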
---
# Advanced Usage
## Customize new convenient buttons / custom function plugins

1. Customize new convenient buttons (academic shortcuts)
Open `core_functional.py` with any text editor, add an entry as follows, and then restart the program. (If the button has already been added successfully and is visible, both the prefix and the suffix can be hot-modified without restarting the program.)
For example:
```
"Super English to Chinese": {
    # Prefix, will be added before your input. For example, used to describe your request, such as translation, code explanation, polishing, etc.
    "Prefix": "Please translate the following content into Chinese, and then explain each proper noun that appears in the text with a markdown table:\n\n",

    # Suffix, will be added after your input. For example, combined with the prefix, you can enclose your input in quotation marks.
    "Suffix": "",
},
```

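To make the role of the two fields concrete, here is a small sketch of how a button entry's prefix and suffix wrap whatever you type before it is sent to the model. The `apply_entry` helper is a hypothetical name used for illustration, not a function of this project; the dict mirrors the example above.

```python
# Hypothetical illustration: a core_functional.py-style entry applied to user input.
entry = {
    # Added before your input (describes the request: translation, polishing, etc.)
    "Prefix": ("Please translate the following content into Chinese, and then explain "
               "each proper noun that appears in the text with a markdown table:\n\n"),
    # Added after your input (can be used together with Prefix to quote the input)
    "Suffix": "",
}

def apply_entry(entry: dict, user_input: str) -> str:
    """Wrap the user's input with the button's prefix and suffix."""
    return entry["Prefix"] + user_input + entry["Suffix"]

print(apply_entry(entry, "Attention is all you need."))
```
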
@@ -207,85 +200,79 @@ docker run --rm -it --net=host --gpus=all gpt-academic bash
<img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
</div>

2. Custom function plugins

Write powerful function plugins to perform any task you can and cannot imagine.
Writing and debugging plugins in this project is easy: as long as you have some knowledge of Python, you can implement your own plugin by imitating the template we provide.
For details, please refer to the [Function Plugin Guide](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).

---
# Latest Update
## New features

1. Conversation saving. Call "Save the current conversation" in the function plugin area to save the current conversation as a readable and restorable HTML file. In addition, call "Load conversation history archive" in the function plugin area (drop-down menu) to restore a previous session. Tip: clicking "Load conversation history archive" without specifying a file lets you browse the cached historical HTML archives; click "Delete all local conversation history records" to delete all HTML archive caches.

## Demonstration of some features

2. Report generation. Most plugins generate a work report after they finish running.

3. Modular function design: simple interfaces that nevertheless support powerful functionality.

### Image display:

4. This is an open-source project that can "translate itself".

<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/228737599-bf0a9d9c-1808-4f43-ae15-dfcc7af0f295.png" width="800" >
</div>

5. Translating other open-source projects is no problem either.

6. A small decorative feature from [live2d](https://github.com/fghrsh/live2d_demo) (disabled by default; requires modifying `config.py`).

### If the program can understand and analyze itself:

7. Support for the MOSS large language model.

<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="800" >
</div>

8. OpenAI image generation.

<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226936618-9b487e4b-ab5b-4b6e-84c6-16942102e917.png" width="800" >
</div>

9. OpenAI audio parsing and summarization.

10. LaTeX full-text proofreading.

### Analysis of other Python/Cpp projects:
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="800" >
</div>

## Versions:
- Version 3.5 (todo): Invoke all of the project's function plugins using natural language (high priority)
- Version 3.4 (todo): Improve multi-threading support for local large chat models
- Version 3.3: Added Internet information aggregation
- Version 3.2: Function plugins support more parameter interfaces (conversation saving, reading code in arbitrary languages, and querying arbitrary LLM combinations at the same time)
- Version 3.1: Support for querying multiple GPT models simultaneously! Support for api2d and balanced load distribution across multiple API keys
- Version 3.0: Support for chatglm and other small LLMs
- Version 2.6: Restructured the plugin architecture, improved interactivity, and added more plugins
- Version 2.5: Self-updating; solves the problem of long text and token overflow when summarizing large projects
- Version 2.4: (1) Added PDF full-text translation; (2) added input-area position switching; (3) added a vertical layout option; (4) optimized multi-threaded plugins
- Version 2.3: Improved multi-threaded interactivity
- Version 2.2: Function plugins support hot reloading
- Version 2.1: Collapsible layout
- Version 2.0: Introduced modular function plugins
- Version 1.0: Basic functionality

<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="800" >
</div>

gpt_academic developer QQ group-2: 610599535

### One-click generation of paper understanding and abstracts from LaTeX papers
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/227504406-86ab97cd-f208-41c3-8e4a-7000e51cf980.png" width="800" >
</div>

- Known issues
  - Some browser translation plugins interfere with the front end of this software
  - A gradio version that is too high or too low can cause many exceptions

### Automatic report generation
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
<img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
<img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
</div>

### Modular function design
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
<img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
</div>

### Translating source code into English
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/229720562-fe6c3508-6142-4635-a83d-21eb3669baee.png" height="400" >
</div>

## Todo and version planning:
- version 3.2+ (todo): Function plugins support more parameter interfaces
- version 3.1: Support for querying multiple GPT models simultaneously! Support for api2d and load balancing across multiple API keys
- version 3.0: Support for chatglm and other small LLMs
- version 2.6: Restructured the plugin architecture, improved interactivity, added more plugins
- version 2.5: Self-updating; solves the problem of overly long text and token overflow when translating the source code of a whole project
- version 2.4: (1) Added PDF full-text translation; (2) added input-area position switching; (3) added a vertical layout option; (4) optimized multi-threaded function plugins
- version 2.3: Improved multi-threaded interactivity
- version 2.2: Function plugins support hot reloading
- version 2.1: Collapsible layout
- version 2.0: Modular function plugin design
- version 1.0: Basic functionality

## References and Learning

```
The code draws on many good design ideas from other excellent projects, including:

# Project 1: Tsinghua ChatGLM-6B:
https://github.com/THUDM/ChatGLM-6B

# Project 2: Tsinghua JittorLLMs:
https://github.com/Jittor/JittorLLMs

# Project 3: Edge-GPT:
https://github.com/acheong08/EdgeGPT

# Project 4: ChuanhuChatGPT (many techniques were borrowed from it):
https://github.com/GaiZhenbiao/ChuanhuChatGPT

# Project 5: ChatPaper:
https://github.com/kaixindelele/ChatPaper

# More:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo
```

Some files were not shown because too many files have changed in this diff.