Compare commits
1211 Commits: version3.4 ... 0055ea2df7
[Commit table (Author / SHA1 / Date) omitted: extraction preserved only the SHA1 column, from 0055ea2df7 through 22f377e2fb; the author, date, and commit-message cells did not survive.]

16  .github/ISSUE_TEMPLATE/bug_report.yml (vendored)
@@ -11,6 +11,8 @@ body:
         - Please choose | 请选择
         - Pip Install (I ignored requirements.txt)
         - Pip Install (I used latest requirements.txt)
+        - OneKeyInstall (一键安装脚本-windows)
+        - OneKeyInstall (一键安装脚本-mac)
         - Anaconda (I ignored requirements.txt)
         - Anaconda (I used latest requirements.txt)
         - Docker(Windows/Mac)
@@ -32,7 +34,7 @@ body:
         - Others | 非最新版
     validations:
       required: true

   - type: dropdown
     id: os
     attributes:
@@ -45,7 +47,7 @@ body:
         - Docker
     validations:
       required: true

   - type: textarea
     id: describe
     attributes:
@@ -53,7 +55,7 @@ body:
       description: Describe the bug | 简述
     validations:
       required: true

   - type: textarea
     id: screenshot
     attributes:
@@ -61,15 +63,9 @@ body:
       description: Screen Shot | 有帮助的截图
     validations:
       required: true

   - type: textarea
     id: traceback
     attributes:
       label: Terminal Traceback & Material to Help Reproduce Bugs | 终端traceback(如有) + 帮助我们复现的测试材料样本(如有)
       description: Terminal Traceback & Material to Help Reproduce Bugs | 终端traceback(如有) + 帮助我们复现的测试材料样本(如有)

5  .github/ISSUE_TEMPLATE/feature_request.yml (vendored)
@@ -21,8 +21,3 @@ body:
     attributes:
       label: Feature Request | 功能请求
       description: Feature Request | 功能请求

.github/workflows/build-with-all-capacity.yml (vendored)

@@ -1,5 +1,5 @@
 # https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
-name: Create and publish a Docker image for ChatGLM support
+name: build-with-all-capacity

 on:
   push:
@@ -8,7 +8,7 @@ on:

 env:
   REGISTRY: ghcr.io
-  IMAGE_NAME: ${{ github.repository }}_jittorllms
+  IMAGE_NAME: ${{ github.repository }}_with_all_capacity

 jobs:
   build-and-push-image:
@@ -39,6 +39,6 @@ jobs:
         with:
           context: .
           push: true
-          file: docs/GithubAction+JittorLLMs
+          file: docs/GithubAction+AllCapacity
           tags: ${{ steps.meta.outputs.tags }}
           labels: ${{ steps.meta.outputs.labels }}

44  .github/workflows/build-with-audio-assistant.yml (vendored, new file)
@@ -0,0 +1,44 @@
+# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
+name: build-with-audio-assistant
+
+on:
+  push:
+    branches:
+      - 'master'
+
+env:
+  REGISTRY: ghcr.io
+  IMAGE_NAME: ${{ github.repository }}_audio_assistant
+
+jobs:
+  build-and-push-image:
+    runs-on: ubuntu-latest
+    permissions:
+      contents: read
+      packages: write
+
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@v3
+
+      - name: Log in to the Container registry
+        uses: docker/login-action@v2
+        with:
+          registry: ${{ env.REGISTRY }}
+          username: ${{ github.actor }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+
+      - name: Extract metadata (tags, labels) for Docker
+        id: meta
+        uses: docker/metadata-action@v4
+        with:
+          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
+
+      - name: Build and push Docker image
+        uses: docker/build-push-action@v4
+        with:
+          context: .
+          push: true
+          file: docs/GithubAction+NoLocal+AudioAssistant
+          tags: ${{ steps.meta.outputs.tags }}
+          labels: ${{ steps.meta.outputs.labels }}

2  .github/workflows/build-with-chatglm.yml (vendored)
@@ -1,5 +1,5 @@
 # https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
-name: Create and publish a Docker image for ChatGLM support
+name: build-with-chatglm

 on:
   push:

51  .github/workflows/build-with-latex-arm.yml (vendored, new file)
@@ -0,0 +1,51 @@
+# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
+name: build-with-latex-arm
+
+on:
+  push:
+    branches:
+      - "master"
+
+env:
+  REGISTRY: ghcr.io
+  IMAGE_NAME: ${{ github.repository }}_with_latex_arm
+
+jobs:
+  build-and-push-image:
+    runs-on: ubuntu-latest
+    permissions:
+      contents: read
+      packages: write
+
+    steps:
+      - name: Set up QEMU
+        uses: docker/setup-qemu-action@v3
+
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v3
+
+      - name: Checkout repository
+        uses: actions/checkout@v4
+
+      - name: Log in to the Container registry
+        uses: docker/login-action@v3
+        with:
+          registry: ${{ env.REGISTRY }}
+          username: ${{ github.actor }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+
+      - name: Extract metadata (tags, labels) for Docker
+        id: meta
+        uses: docker/metadata-action@v4
+        with:
+          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
+
+      - name: Build and push Docker image
+        uses: docker/build-push-action@v6
+        with:
+          context: .
+          push: true
+          platforms: linux/arm64
+          file: docs/GithubAction+NoLocal+Latex
+          tags: ${{ steps.meta.outputs.tags }}
+          labels: ${{ steps.meta.outputs.labels }}

2  .github/workflows/build-with-latex.yml (vendored)
@@ -1,5 +1,5 @@
 # https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
-name: Create and publish a Docker image for Latex support
+name: build-with-latex

 on:
   push:

.github/workflows/build-without-local-llms.yml (vendored)

@@ -1,5 +1,5 @@
 # https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
-name: Create and publish a Docker image
+name: build-without-local-llms

 on:
   push:

56  .github/workflows/conda-pack-windows.yml (vendored, new file)
@@ -0,0 +1,56 @@
+name: Create Conda Environment Package
+
+on:
+  workflow_dispatch:
+
+jobs:
+  build:
+    runs-on: windows-latest
+
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@v4
+
+      - name: Setup Miniconda
+        uses: conda-incubator/setup-miniconda@v3
+        with:
+          auto-activate-base: true
+          activate-environment: ""
+
+      - name: Create new Conda environment
+        shell: bash -l {0}
+        run: |
+          conda create -n gpt python=3.11 -y
+          conda activate gpt
+
+      - name: Install requirements
+        shell: bash -l {0}
+        run: |
+          conda activate gpt
+          pip install -r requirements.txt
+
+      - name: Install conda-pack
+        shell: bash -l {0}
+        run: |
+          conda activate gpt
+          conda install conda-pack -y
+
+      - name: Pack conda environment
+        shell: bash -l {0}
+        run: |
+          conda activate gpt
+          conda pack -n gpt -o gpt.tar.gz
+
+      - name: Create workspace zip
+        shell: pwsh
+        run: |
+          mkdir workspace
+          Get-ChildItem -Exclude "workspace" | Copy-Item -Destination workspace -Recurse
+          Remove-Item -Path workspace/.git* -Recurse -Force -ErrorAction SilentlyContinue
+          Copy-Item gpt.tar.gz workspace/ -Force
+
+      - name: Upload packed files
+        uses: actions/upload-artifact@v4
+        with:
+          name: gpt-academic-package
+          path: workspace

24  .github/workflows/stale.yml (vendored, new file)
@@ -0,0 +1,24 @@
+# This workflow warns and then closes issues and PRs that have had no activity for a specified amount of time.
+#
+# You can adjust the behavior by modifying this file.
+# For more information, see:
+# https://github.com/actions/stale
+
+name: 'Close stale issues and PRs'
+on:
+  schedule:
+    - cron: '*/30 * * * *'
+
+jobs:
+  stale:
+    runs-on: ubuntu-latest
+    permissions:
+      issues: write
+      pull-requests: read
+
+    steps:
+      - uses: actions/stale@v8
+        with:
+          stale-issue-message: 'This issue is stale because it has been open 100 days with no activity. Remove stale label or comment or this will be closed in 7 days.'
+          days-before-stale: 100
+          days-before-close: 7

17  .gitignore (vendored)
@@ -131,6 +131,9 @@ dmypy.json
 # Pyre type checker
 .pyre/

+# macOS files
+.DS_Store
+
 .vscode
 .idea

@@ -146,7 +149,17 @@ debug*
 private*
 crazy_functions/test_project/pdf_and_word
 crazy_functions/test_samples
-request_llm/jittorllms
+request_llms/jittorllms
 multi-language
-request_llm/moss
+request_llms/moss
 media
+flagged
+request_llms/ChatGLM-6b-onnx-u8s8
+.pre-commit-config.yaml
+test.*
+temp.*
+objdump*
+*.min.*.js
+TODO
+experimental_mods
+search_results

35  Dockerfile
@@ -1,28 +1,41 @@
-# This Dockerfile builds a "no local model" environment; if you need local models such as chatglm, see docs/Dockerfile+ChatGLM
-# To build: edit `config.py` first, then: docker build -t gpt-academic .
-# To run: docker run --rm -it --net=host gpt-academic
+# This Dockerfile builds the minimal "no local model" runtime environment
+# If you need local models such as chatglm, or the latex runtime dependencies, see docker-compose.yml
+# - To build: edit `config.py` first, then `docker build -t gpt-academic . `
+# - To run (on Linux): `docker run --rm -it --net=host gpt-academic `
+# - To run (other operating systems; pick any fixed port, e.g. 50923): `docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic `
 FROM python:3.11

 # Optional step: switch the pip source (the following three lines can be removed)
 RUN echo '[global]' > /etc/pip.conf && \
     echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \
     echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf

+# Voice output feature (of the following two lines, the first switches to the Aliyun apt source and the second installs ffmpeg; both can be removed)
+RUN UBUNTU_VERSION=$(awk -F= '/^VERSION_CODENAME=/{print $2}' /etc/os-release); echo "deb https://mirrors.aliyun.com/debian/ $UBUNTU_VERSION main non-free contrib" > /etc/apt/sources.list; apt-get update
+RUN apt-get install ffmpeg -y
+RUN apt-get clean

 # Enter the working directory (required)
 WORKDIR /gpt

-# Install dependencies
+# Install most dependencies first, so Docker layer caching speeds up later builds (the following two lines can be removed)
 COPY requirements.txt ./
+COPY ./docs/gradio-3.32.2-py3-none-any.whl ./docs/gradio-3.32.2-py3-none-any.whl
 RUN pip3 install -r requirements.txt

-# Load the project files
+# Load the project files and install the remaining dependencies (required)
 COPY . .
 RUN pip3 install -r requirements.txt

-# Optional step: warm up the modules
+# Optional step: warm up the modules (can be removed)
 RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
+RUN python3 -m pip cache purge

-# Launch
+# Launch (required)
 CMD ["python3", "-u", "main.py"]

430  README.md
@@ -1,66 +1,104 @@
> **Note**
> [!IMPORTANT]
> Latest on the `master` branch (2025.3.2): fixed a large number of code typos / the web-access component now supports Jina's api / added deepseek-r1 support
> Latest on the `frontier` development branch (2024.12.9): updated the conversation-timeline feature, improved xelatex paper translation
> Latest in the `wiki` docs (2024.12.5): updated the ollama integration guide
>
> 2023.5.27: The Gradio dependency has been adjusted; we forked the official Gradio and fixed several of its bugs. Please **update the code** promptly and reinstall the pip dependencies. When installing dependencies, strictly use the versions **pinned** in `requirements.txt`:
>
> `pip install -r requirements.txt`
> 2025.2.2: Connect the strongest qwen2.5-max in three minutes [video](https://www.bilibili.com/video/BV1LeFuerEG4)
> 2025.2.1: Custom fonts are now supported
> 2024.10.10: After a sudden power outage, the file server providing the [whl packages](https://drive.google.com/drive/folders/14kR-3V-lIbvGxri4AHc8TpiA1fqsw7SK?usp=sharing) was restored on an emergency basis
> 2024.5.1: Added Doc2x for translating PDF papers, [details](https://github.com/binary-husky/gpt_academic/wiki/Doc2x)
> 2024.3.11: Full support for Chinese large language models such as Qwen, GLM, and DeepseekCoder! SoVits voice-cloning module, [details](https://www.bilibili.com/video/BV1Rp421S7tF/)
> 2024.1.17: When installing dependencies, use the versions **pinned** in `requirements.txt`. Install command: `pip install -r requirements.txt`.

<br>

<div align=center>
<h1 align="center">
<img src="docs/logo.png" width="40"> GPT 学术优化 (GPT Academic)
</h1>

[![Github][Github-image]][Github-url]
[![License][License-image]][License-url]
[![Releases][Releases-image]][Releases-url]
[![Installation][Installation-image]][Installation-url]
[![Wiki][Wiki-image]][Wiki-url]
[![PR][PRs-image]][PRs-url]

[Github-image]: https://img.shields.io/badge/github-12100E.svg?style=flat-square
[License-image]: https://img.shields.io/github/license/binary-husky/gpt_academic?label=License&style=flat-square&color=orange
[Releases-image]: https://img.shields.io/github/release/binary-husky/gpt_academic?label=Release&style=flat-square&color=blue
[Installation-image]: https://img.shields.io/badge/dynamic/json?color=blue&url=https://raw.githubusercontent.com/binary-husky/gpt_academic/master/version&query=$.version&label=Installation&style=flat-square
[Wiki-image]: https://img.shields.io/badge/wiki-项目文档-black?style=flat-square
[PRs-image]: https://img.shields.io/badge/PRs-welcome-pink?style=flat-square

[Github-url]: https://github.com/binary-husky/gpt_academic
[License-url]: https://github.com/binary-husky/gpt_academic/blob/master/LICENSE
[Releases-url]: https://github.com/binary-husky/gpt_academic/releases
[Installation-url]: https://github.com/binary-husky/gpt_academic#installation
[Wiki-url]: https://github.com/binary-husky/gpt_academic/wiki
[PRs-url]: https://github.com/binary-husky/gpt_academic/pulls

</div>
<br>

**If you like this project, please give it a Star; if you have invented handy shortcut keys or plugins, pull requests are welcome!**

If you like this project, please give it a Star.
Read this in [English](docs/README.English.md) | [日本語](docs/README.Japanese.md) | [한국어](docs/README.Korean.md) | [Русский](docs/README.Russian.md) | [Français](docs/README.French.md). All translations have been provided by the project itself. To translate this project to arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
<br>

> [!NOTE]
> 1. The function of every file in this project is documented in detail in the [self-analysis report](https://github.com/binary-husky/gpt_academic/wiki/GPT‐Academic项目自译解报告) `self_analysis.md`. As versions iterate, you can also click the relevant function plugin at any time to have GPT regenerate the project's self-analysis report. For FAQs, see the wiki.
> [Installation](#installation) [Releases](https://github.com/binary-husky/gpt_academic/releases) [Configuration guide](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明) [Wiki](https://github.com/binary-husky/gpt_academic/wiki)
>
> 2. This project is compatible with, and encourages trying, Chinese foundation large language models such as Tongyi Qianwen and Zhipu GLM. Multiple api-keys can coexist; in the config file, write e.g. `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. To swap in a temporary `API_KEY`, type it into the input area and press Enter to submit; it takes effect immediately.

# <img src="docs/logo.png" width="40" > GPT 学术优化 (GPT Academic)

**If you like this project, please give it a Star; if you invent better shortcut keys or function plugins, pull requests are welcome**

If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a README in [English|](docs/README_EN.md)[日本語|](docs/README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md) translated by this project itself.
To translate this project to arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).

> **Note**
>
> 1. Note that only function plugins (buttons) marked in **red** can read files, and some plugins live in the **drop-down menu** of the plugin area. We also welcome and handle PRs for any new plugin with **top priority**!
>
> 2. The function of every file in this project is documented in detail in the self-analysis report [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). As versions iterate, you can also click the relevant function plugin at any time to have GPT regenerate the project's self-analysis report. FAQs are collected in the [`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installation](#installation).
>
> 3. This project is compatible with, and encourages trying, Chinese large language models such as chatglm, RWKV, PanGu, and so on. Multiple api-keys can coexist; in the config file, write e.g. `API_KEY="openai-key1,openai-key2,api2d-key3"`. To swap in a temporary `API_KEY`, type it into the input area and press Enter to submit; it takes effect immediately.
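
As a rough illustration of the comma-separated multi-key mechanism mentioned in both notes above, here is a minimal, hypothetical sketch (the function `select_api_key` and its random selection policy are illustrative only, not the project's actual implementation):

```python
import random

# The document's own example of several coexisting keys in one string:
API_KEY = "openai-key1,openai-key2,api2d-key3"

def select_api_key(api_key_string: str, prefix: str = "") -> str:
    # Split the comma-separated string into individual candidate keys.
    keys = [k.strip() for k in api_key_string.split(",") if k.strip()]
    # Optionally keep only keys for one provider, falling back to all keys.
    candidates = [k for k in keys if k.startswith(prefix)] or keys
    # Pick one at random so load is spread across the coexisting keys.
    return random.choice(candidates)

print(select_api_key(API_KEY, prefix="openai-"))
```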
<br><br>

<div align="center">

Feature | Description
Feature (⭐ = recently added) | Description
--- | ---
One-click polishing | One-click polishing and grammar checking of papers
One-click Chinese-English translation | One-click Chinese-English translation
One-click code explanation | Show, explain, generate, and comment code
⭐[Connect new models](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B) | Baidu [Qianfan](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu) and Wenxin Yiyan, Tongyi [Qwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary), Shanghai AI-Lab [InternLM](https://github.com/InternLM/InternLM), iFlytek [Spark](https://xinghuo.xfyun.cn/), [LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), [Zhipu GLM4](https://open.bigmodel.cn/), DALLE3, [DeepseekCoder](https://coder.deepseek.com/)
⭐Mermaid rendering | Lets GPT generate [flowcharts](https://www.bilibili.com/video/BV18c41147H9/), state diagrams, Gantt charts, pie charts, GitGraph, and more (version 3.7)
⭐Fine-grained arxiv paper translation ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [Plugin] One-click [very-high-quality translation of arxiv papers](https://www.bilibili.com/video/BV1dz4y1v77A/), currently the best paper-translation tool
⭐[Real-time voice conversation input](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [Plugin] Asynchronously [listens to audio](https://www.bilibili.com/video/BV1AV4y187Uy/), segments sentences automatically, and picks the right moment to answer
⭐AutoGen multi-agent plugin | [Plugin] Explore the possibilities of emergent multi-agent intelligence with Microsoft AutoGen!
⭐Void-terminal plugin | [Plugin] Drive this project's other plugins directly in natural language
Polishing, translation, code explanation | One-click polishing, translation, paper grammar checking, code explanation
[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Custom shortcut keys are supported
Modular design | Supports powerful custom [function plugins](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions); plugins support [hot reloading](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
[Self-analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] [One-click read-through](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of this project's own source code
[Project analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] One-click analysis of other Python/C/C++/Java/Lua/... project trees
Read and [translate](https://www.bilibili.com/video/BV1KT411x7Wn) papers | [Function plugin] One-click interpretation of full latex/pdf papers plus abstract generation
Full-text latex [translation](https://www.bilibili.com/video/BV1nk4y1Y7Js/) and [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Function plugin] One-click translation or polishing of latex papers
Batch comment generation | [Function plugin] One-click batch generation of function comments
Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plugin] Did you see the [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) in the 5 languages above?
Chat analysis report generation | [Function plugin] Automatically generates a summary report after a run
[Full-text PDF paper translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plugin] Extracts title & abstract from PDF papers and translates the full text (multi-threaded)
[Arxiv assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plugin] Enter an arxiv article url to translate the abstract and download the PDF in one click
[Google Scholar integration assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function plugin] Given any Google Scholar search-page URL, let gpt [write your related works](https://www.bilibili.com/video/BV1GP411U7Az/)
Internet information aggregation + GPT | [Function plugin] One click to [let GPT fetch information from the internet first](https://www.bilibili.com/video/BV1om4y127ck) and then answer, so information never goes stale
⭐Fine-grained arxiv paper translation | [Function plugin] One-click [very-high-quality translation of arxiv papers](https://www.bilibili.com/video/BV1dz4y1v77A/), the best paper-translation tool to date⭐
Modular design | Supports powerful custom [plugins](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions); plugins support [hot reloading](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
[Project analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin] One-click analysis of Python/C/C++/Java/Lua/... project trees, or [self-analysis](https://www.bilibili.com/video/BV1cj411A7VW)
Read and [translate](https://www.bilibili.com/video/BV1KT411x7Wn) papers | [Plugin] One-click interpretation of full latex/pdf papers plus abstract generation
Full-text latex [translation](https://www.bilibili.com/video/BV1nk4y1Y7Js/) and [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Plugin] One-click translation or polishing of latex papers
Batch comment generation | [Plugin] One-click batch generation of function comments
Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Plugin] Did you see the [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README.English.md) in the 5 languages above? It was written by this very plugin
[Full-text PDF paper translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugin] Extracts title & abstract from PDF papers and translates the full text (multi-threaded)
[Arxiv assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Plugin] Enter an arxiv article url to translate the abstract and download the PDF in one click
One-click latex proofreading | [Plugin] Grammarly-style grammar and spelling correction of latex papers, with a side-by-side PDF
[Google Scholar integration assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Plugin] Given any Google Scholar search-page URL, let gpt [write your related works](https://www.bilibili.com/video/BV1GP411U7Az/)
Internet information aggregation + GPT | [Plugin] One click to [let GPT fetch information from the internet](https://www.bilibili.com/video/BV1om4y127ck) and answer, so information never goes stale
Formula/image/table display | Shows formulas in both [tex source and rendered form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png); supports formula and code highlighting
Multi-threaded plugin support | Supports multi-threaded chatgpt calls; one-click processing of [huge volumes of text](https://www.bilibili.com/video/BV1FT411H7c5/) or programs
Dark gradio [theme](https://github.com/binary-husky/gpt_academic/issues/173) at startup | Append ```/?__theme=dark``` to the browser url to switch to the dark theme
[Multi-LLM](https://www.bilibili.com/video/BV1wT411p7yf) support | Being attended by GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) at the same time must feel great, right?
More LLMs, with [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) support | Added the Newbing interface (New Bing), and Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs) supporting [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV), and [PanGu-α](https://openi.org.cn/pangu/)
More new feature demos (image generation etc.) …… | See the end of this document ……

Dark [theme](https://github.com/binary-husky/gpt_academic/issues/173) at startup | Append ```/?__theme=dark``` to the browser url to switch to the dark theme
[Multi-LLM](https://www.bilibili.com/video/BV1wT411p7yf) support | Being attended by GPT3.5, GPT4, [Tsinghua ChatGLM2](https://github.com/THUDM/ChatGLM2-6B), and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) at the same time must feel great, right?
More LLMs, with [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) support | Added the Newbing interface (New Bing), and Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs) supporting [LLaMA](https://github.com/facebookresearch/llama) and [PanGu-α](https://openi.org.cn/pangu/)
⭐[void-terminal](https://github.com/binary-husky/void-terminal) pip package | Call all of this project's function plugins directly from Python, without the GUI (in development)
More new feature demos (image generation etc.) …… | See the end of this document ……
</div>

- New UI (switch between the "left-right" and "top-bottom" layouts via the LAYOUT option in `config.py`)
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
<img src="https://user-images.githubusercontent.com/96192199/279702205-d81137c3-affd-4cd1-bb5e-b15610389762.gif" width="700" >
</div>

<div align="center">
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/70ff1ec5-e589-4561-a29e-b831079b37fb.gif" width="700" >
</div>

- All buttons are generated dynamically by reading functional.py, so custom functions can be added freely, freeing up the clipboard
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
</div>
@@ -70,64 +108,105 @@ Chat analysis report generation | [Function plugin] Automatically generates a summary report after a run
<img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
</div>

- If the output contains formulas, they are shown in both tex source and rendered form at the same time, which makes them easy to copy and read
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
</div>

- Too lazy to read the project code? Just feed the whole project straight into ChatGPT's mouth
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
</div>

- Mixed calls to multiple large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
- Mixed calls to multiple large language models (ChatGLM + OpenAI-GPT3.5 + GPT4)
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
</div>

---
# Installation
## Installation method 1: run directly (Windows, Linux or MacOS)
<br><br>

1. Download the project
```sh
git clone https://github.com/binary-husky/gpt_academic.git
cd gpt_academic
```

# Installation

```mermaid
flowchart TD
    A{"Installation methods"} --> W1("I 🔑 Run directly (Windows, Linux or MacOS)")
    W1 --> W11["1 Manage dependencies with Python pip"]
    W1 --> W12["2 Manage dependencies with Anaconda (recommended ⭐)"]

    A --> W2["II 🐳 Use Docker (Windows, Linux or MacOS)"]

    W2 --> k1["1 Large image with all of the project's capabilities (recommended ⭐)"]
    W2 --> k2["2 Online-models-only (GPT, GLM4, etc.) image"]
    W2 --> k3["3 Large image with online models + Latex"]

    A --> W4["IV 🚀 Other deployment methods"]
    W4 --> C1["1 Windows/MacOS one-click install-and-run script (recommended ⭐)"]
    W4 --> C2["2 Remote deployment on Huggingface, Sealos"]
    W4 --> C4["3 Others ..."]
```

2. Configure API_KEY
### Installation method I: run directly (Windows, Linux or MacOS)

In `config.py`, configure the API KEY and other settings; [click here for special network environment setups](https://github.com/binary-husky/gpt_academic/issues/1).
1. Download the project

(P.S. At startup the program first checks for a private configuration file named `config_private.py` and uses its settings to override the ones with the same names in `config.py`. If you understand this read logic, we strongly recommend creating a new configuration file named `config_private.py` next to `config.py` and moving (copying) the settings from `config.py` into it. `config_private.py` is not tracked by git, which keeps your private information safer. P.S. The project also supports configuring most options via `environment variables`; see the `docker-compose` file for the environment-variable format. Read priority: `environment variables` > `config_private.py` > `config.py`)
```sh
git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
cd gpt_academic
```

2. Configure API_KEY and other variables

In `config.py`, configure the API KEY and other variables. [Special network environment setups](https://github.com/binary-husky/gpt_academic/issues/1), [Wiki: project configuration guide](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明).

「 The program first checks for a private configuration file named `config_private.py` and uses its settings to override the ones with the same names in `config.py`. If you understand this read logic, we strongly recommend creating a new configuration file named `config_private.py` in the same directory as `config.py` and configuring the project through `config_private.py`, so that your configuration is not lost on automatic updates. 」

「 The project supports configuration via `environment variables`; see the `docker-compose.yml` file or our [Wiki page](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明) for the environment-variable format. Configuration read priority: `environment variables` > `config_private.py` > `config.py` 」.
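
The override-and-priority rule described above (`environment variables` > `config_private.py` > `config.py`) can be illustrated with a minimal sketch. The helper name `read_single_conf` below is hypothetical; the project's actual configuration reader is more elaborate:

```python
# Minimal sketch of the documented read priority:
# environment variable > config_private.py > config.py
import importlib
import os

def read_single_conf(name: str, default=None):
    # 1. Environment variables take top priority.
    if name in os.environ:
        return os.environ[name]
    # 2. Next, the private, git-ignored config_private.py (if present).
    try:
        private = importlib.import_module("config_private")
        if hasattr(private, name):
            return getattr(private, name)
    except ImportError:
        pass
    # 3. Finally, fall back to the tracked config.py defaults.
    public = importlib.import_module("config")
    return getattr(public, name, default)

# Usage: API_KEY = read_single_conf("API_KEY")
```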
3. Install dependencies
```sh
# (Option I: if you are familiar with python; recommended python version 3.9 ~ 3.11) Note: use the official pip source or the Aliyun pip source; to switch sources temporarily: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
python -m pip install -r requirements.txt

# (Option II: use Anaconda) The steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr):
conda create -n gptac_venv python=3.11     # create the anaconda environment
conda activate gptac_venv                  # activate the anaconda environment
python -m pip install -r requirements.txt  # same step as the pip installation
```

<details><summary>Click to expand if you need the Tsinghua ChatGLM series / Fudan MOSS / RWKV as backends</summary>
<p>

[Optional] If you need the Tsinghua ChatGLM series / Fudan MOSS as backends, additional dependencies must be installed (prerequisites: familiar with Python + have used Pytorch + a strong enough machine):

```sh
# [Optional step I] Support Tsinghua ChatGLM3. Note: if you hit the "Call ChatGLM fail: cannot load ChatGLM parameters" error, see the following: 1. The default install above is the torch+cpu version; to use cuda, uninstall torch and reinstall torch+cuda; 2. If the model cannot be loaded because your machine is not powerful enough, change the model precision in request_llm/bridge_chatglm.py, replacing every AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) with AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llms/requirements_chatglm.txt

# [Optional step II] Support Tsinghua ChatGLM4. Note: this model needs at least 24G of GPU memory
python -m pip install -r request_llms/requirements_chatglm4.txt
# The ChatGLM4 model can be downloaded with modelscope
# pip install modelscope
# modelscope download --model ZhipuAI/glm-4-9b-chat --local_dir ./THUDM/glm-4-9b-chat

# [Optional step III] Support Fudan MOSS
python -m pip install -r request_llms/requirements_moss.txt
git clone --depth=1 https://github.com/OpenLMLab/MOSS.git request_llms/moss  # note: this line must be executed from the project root

# [Optional step IV] Support RWKV Runner
# See the wiki: https://github.com/binary-husky/gpt_academic/wiki/%E9%80%82%E9%85%8DRWKV-Runner

# [Optional step V] Make sure AVAIL_LLM_MODELS in the config.py configuration file contains the desired models; all currently supported models are listed below (the jittorllms series is currently only supported via the docker solution):
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]

# [Optional step VI] Support INT8/INT4 quantization of local models (the models referred to here are not themselves quantized versions; currently deepseek-coder is supported, and more quantization options will be added after testing)
pip install bitsandbytes
# windows users installing bitsandbytes need to use the bitsandbytes-windows-webui below
python -m pip install bitsandbytes --prefer-binary --extra-index-url=https://jllllll.github.io/bitsandbytes-windows-webui
pip install -U git+https://github.com/huggingface/transformers.git
pip install -U git+https://github.com/huggingface/accelerate.git
pip install peft
```

</p>
</details>
@@ -136,93 +215,85 @@ AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-
|
||||
|
||||
|
||||
4. 运行
|
||||
```sh
|
||||
python main.py
|
||||
```
|
||||
```sh
|
||||
python main.py
|
||||
```
|
||||
|
||||
## 安装-方法2:使用Docker
|
||||
### 安装方法II:使用Docker
|
||||
|
||||
1. 仅ChatGPT(推荐大多数人选择,等价于docker-compose方案1)
|
||||
0. 部署项目的全部能力(这个是包含cuda和latex的大型镜像。但如果您网速慢、硬盘小,则不推荐该方法部署完整项目)
|
||||
[](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-all-capacity.yml)
|
||||
|
||||
``` sh
|
||||
git clone https://github.com/binary-husky/gpt_academic.git # 下载项目
|
||||
cd gpt_academic # 进入路径
|
||||
nano config.py # 用任意文本编辑器编辑config.py, 配置 “Proxy”, “API_KEY” 以及 “WEB_PORT” (例如50923) 等
|
||||
docker build -t gpt-academic . # 安装
|
||||
``` sh
|
||||
# 修改docker-compose.yml,保留方案0并删除其他方案。然后运行:
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
#(最后一步-选择1)在Linux环境下,用`--net=host`更方便快捷
|
||||
docker run --rm -it --net=host gpt-academic
|
||||
#(最后一步-选择2)在macOS/windows环境下,只能用-p选项将容器上的端口(例如50923)暴露给主机上的端口
|
||||
docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
|
||||
```
|
||||
P.S. 如果需要依赖Latex的插件功能,请见Wiki。另外,您也可以直接使用docker-compose获取Latex功能(修改docker-compose.yml,保留方案4并删除其他方案)。
|
||||
1. 仅ChatGPT + GLM4 + 文心一言+spark等在线模型(推荐大多数人选择)
|
||||
[](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml)
|
||||
[](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml)
|
||||
[](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml)
|
||||
|
||||
2. ChatGPT + ChatGLM + MOSS(需要熟悉Docker)
|
||||
``` sh
|
||||
# 修改docker-compose.yml,保留方案1并删除其他方案。然后运行:
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
``` sh
|
||||
# 修改docker-compose.yml,保留方案2并删除其他方案。修改docker-compose.yml中方案2的配置,参考其中注释即可
|
||||
docker-compose up
|
||||
```
|
||||
P.S. 如果需要依赖Latex的插件功能,请见Wiki。另外,您也可以直接使用方案4或者方案0获取Latex功能。
|
||||
|
||||
3. ChatGPT + LLAMA + 盘古 + RWKV(需要熟悉Docker)
|
||||
``` sh
|
||||
# 修改docker-compose.yml,保留方案3并删除其他方案。修改docker-compose.yml中方案3的配置,参考其中注释即可
|
||||
docker-compose up
|
||||
```
|
||||
2. ChatGPT + GLM3 + MOSS + LLAMA2 + 通义千问(需要熟悉[Nvidia Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian)运行时)
|
||||
[](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-chatglm.yml)
|
||||
|
||||
``` sh
|
||||
# 修改docker-compose.yml,保留方案2并删除其他方案。然后运行:
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
|
||||
## 安装-方法3:其他部署姿势
|
||||
1. 一键运行脚本。
|
||||
完全不熟悉python环境的Windows用户可以下载[Release](https://github.com/binary-husky/gpt_academic/releases)中发布的一键运行脚本安装无本地模型的版本。
|
||||
脚本的贡献来源是[oobabooga](https://github.com/oobabooga/one-click-installers)。
|
||||
### 安装方法III:其他部署方法
|
||||
1. **Windows一键运行脚本**。
|
||||
完全不熟悉python环境的Windows用户可以下载[Release](https://github.com/binary-husky/gpt_academic/releases)中发布的一键运行脚本安装无本地模型的版本。脚本贡献来源:[oobabooga](https://github.com/oobabooga/one-click-installers)。
|
||||
|
||||
2. 使用docker-compose运行。
|
||||
请阅读docker-compose.yml后,按照其中的提示操作即可
|
||||
2. 使用第三方API、Azure等、文心一言、星火等,见[Wiki页面](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明)
|
||||
|
||||
3. 如何使用反代URL
|
||||
按照`config.py`中的说明配置API_URL_REDIRECT即可。
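
   一个最小示意(其中反代URL为占位符,请替换为你自己的地址):

```python
# config.py 中的反代重定向示意(高危设置:会把API-KEY与对话内容暴露给所设定的中间人)
API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions":
                    "https://your-reverse-proxy.example.com/v1/chat/completions"}
```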
|
||||
3. 云服务器远程部署避坑指南。
|
||||
请访问[云服务器远程部署wiki](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
|
||||
|
||||
4. 微软云AzureAPI
|
||||
按照`config.py`中的说明配置即可(AZURE_ENDPOINT等四个配置)
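
   这四项配置在`config.py`中的形态示意(取值均为占位符,详见 docs\use_azure.md):

```python
# Azure接入所需的四个配置项(示意)
AZURE_ENDPOINT = "https://你的api名称.openai.azure.com/"
AZURE_API_KEY = "<azure openai api 密钥>"
AZURE_API_VERSION = "<api版本>"
AZURE_ENGINE = "<部署名>"
```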
|
||||
4. 在其他平台部署&二级网址部署
|
||||
- 使用Sealos[一键部署](https://github.com/binary-husky/gpt_academic/issues/993)。
|
||||
- 使用WSL2(Windows Subsystem for Linux 子系统)。请访问[部署wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
|
||||
- 如何在二级网址(如`http://localhost/subpath`)下运行。请访问[FastAPI运行说明](docs/WithFastapi.md)
|
||||
|
||||
5. 远程云服务器部署(需要云服务器知识与经验)。
|
||||
请访问[部署wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
|
||||
<br><br>
|
||||
|
||||
6. 使用WSL2(Windows Subsystem for Linux 子系统)。
|
||||
请访问[部署wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
|
||||
|
||||
7. 如何在二级网址(如`http://localhost/subpath`)下运行。
|
||||
请访问[FastAPI运行说明](docs/WithFastapi.md)
|
||||
|
||||
---
|
||||
# Advanced Usage
|
||||
## 自定义新的便捷按钮 / 自定义函数插件
|
||||
### I:自定义新的便捷按钮(学术快捷键)
|
||||
|
||||
1. 自定义新的便捷按钮(学术快捷键)
|
||||
任意文本编辑器打开`core_functional.py`,添加条目如下,然后重启程序即可。(如果按钮已经添加成功并可见,那么前缀、后缀都支持热修改,无需重启程序即可生效。)
|
||||
|
||||
现在已可以通过UI中的`界面外观`菜单中的`自定义菜单`添加新的便捷按钮。如果需要在代码中定义,请使用任意文本编辑器打开`core_functional.py`,添加如下条目即可:
|
||||
|
||||
```python
|
||||
"超级英译中": {
|
||||
# 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
|
||||
"Prefix": "请翻译把下面一段内容成中文,然后用一个markdown表格逐一解释文中出现的专有名词:\n\n",
|
||||
|
||||
"Prefix": "请翻译把下面一段内容成中文,然后用一个markdown表格逐一解释文中出现的专有名词:\n\n",
|
||||
|
||||
# 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来。
|
||||
"Suffix": "",
|
||||
},
|
||||
```
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
|
||||
</div>
|
||||
|
||||
2. 自定义函数插件
|
||||
|
||||
### II:自定义函数插件
|
||||
编写强大的函数插件来执行任何你想得到的和想不到的任务。
|
||||
本项目的插件编写、调试难度很低,只要您具备一定的python基础知识,就可以仿照我们提供的模板实现自己的插件功能。
|
||||
详情请参考[函数插件指南](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)。
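
下面是一个函数插件的最小骨架示意(函数名为虚构,参数签名以仓库`crazy_functions`目录中的模板插件为准):

```python
# 最小函数插件骨架(示意):接收输入文本txt,向聊天窗口追加一条消息并刷新前端
from toolbox import update_ui  # 项目提供的UI刷新工具

def 示例插件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    chatbot.append((txt, "插件已收到输入,正在处理……"))  # 以 (用户消息, 回复) 成对追加
    yield from update_ui(chatbot=chatbot, history=history)  # 通过生成器协议刷新界面
```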
|
||||
|
||||
---
|
||||
# Latest Update
|
||||
## 新功能动态
|
||||
<br><br>
|
||||
|
||||
# Updates
|
||||
### I:动态
|
||||
|
||||
1. 对话保存功能。在函数插件区调用 `保存当前的对话` 即可将当前对话保存为可读+可复原的html文件,
|
||||
另外在函数插件区(下拉菜单)调用 `载入对话历史存档` ,即可还原之前的会话。
|
||||
@@ -237,10 +308,13 @@ Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史h
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/9fdcc391-f823-464f-9322-f8719677043b" height="250" >
|
||||
</div>
|
||||
|
||||
3. 生成报告。大部分插件都会在执行结束后,生成工作报告
|
||||
3. 虚空终端(从自然语言输入中,理解用户意图+自动调用其他插件)
|
||||
|
||||
- 步骤一:输入 “ 请调用插件翻译PDF论文,地址为https://openreview.net/pdf?id=rJl0r3R9KX ”
|
||||
- 步骤二:点击“虚空终端”
|
||||
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="250" >
|
||||
<img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="250" >
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/66f1b044-e9ff-4eed-9126-5d4f3668f1ed" width="500" >
|
||||
</div>
|
||||
|
||||
4. 模块化功能设计,简单的接口却能支持强大的功能
|
||||
@@ -260,31 +334,44 @@ Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史h
|
||||
<img src="https://user-images.githubusercontent.com/96192199/236432361-67739153-73e8-43fe-8111-b61296edabd9.png" width="500" >
|
||||
</div>
|
||||
|
||||
7. 新增MOSS大语言模型支持
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/96192199/236639178-92836f37-13af-4fdd-984d-b4450fe30336.png" width="500" >
|
||||
</div>
|
||||
|
||||
8. OpenAI图像生成
|
||||
7. OpenAI图像生成
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/bc7ab234-ad90-48a0-8d62-f703d9e74665" width="500" >
|
||||
</div>
|
||||
|
||||
9. OpenAI音频解析与总结
|
||||
8. 基于mermaid的流图、脑图绘制
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/709ccf95-3aee-498a-934a-e1c22d3d5d5b" width="500" >
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/c518b82f-bd53-46e2-baf5-ad1b081c1da4" width="500" >
|
||||
</div>
|
||||
|
||||
10. Latex全文校对纠错
|
||||
9. Latex全文校对纠错
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/651ccd98-02c9-4464-91e1-77a6b7d1b033" height="200" > ===>
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/476f66d9-7716-4537-b5c1-735372c25adb" height="200">
|
||||
</div>
|
||||
|
||||
10. 语言、主题切换
|
||||
<div align="center">
|
||||
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/b6799499-b6fb-4f0c-9c8e-1b441872f4e8" width="500" >
|
||||
</div>
|
||||
|
||||
|
||||
## 版本:
|
||||
- version 3.5(Todo): 使用自然语言调用本项目的所有函数插件(高优先级)
|
||||
|
||||
### II:版本:
|
||||
- version 3.80(TODO): 优化AutoGen插件主题并设计一系列衍生插件
|
||||
- version 3.70: 引入Mermaid绘图,实现GPT画脑图等功能
|
||||
- version 3.60: 引入AutoGen作为新一代插件的基石
|
||||
- version 3.57: 支持GLM3,星火v3,文心一言v4,修复本地模型的并发BUG
|
||||
- version 3.56: 支持动态追加基础功能按钮,新增汇报PDF汇总页面
|
||||
- version 3.55: 重构前端界面,引入悬浮窗口与菜单栏
|
||||
- version 3.54: 新增动态代码解释器(Code Interpreter)(待完善)
|
||||
- version 3.53: 支持动态选择不同界面主题,提高稳定性&解决多用户冲突问题
|
||||
- version 3.50: 使用自然语言调用本项目的所有函数插件(虚空终端),支持插件分类,改进UI,设计新主题
|
||||
- version 3.49: 支持百度千帆平台和文心一言
|
||||
- version 3.48: 支持阿里达摩院通义千问,上海AI-Lab书生,讯飞星火
|
||||
- version 3.46: 支持完全脱手操作的实时语音对话
|
||||
- version 3.45: 支持自定义ChatGLM2微调模型
|
||||
- version 3.44: 正式支持Azure,优化界面易用性
|
||||
- version 3.4: +arxiv论文翻译、latex论文批改功能
|
||||
- version 3.3: +互联网信息综合功能
|
||||
- version 3.2: 函数插件支持更多参数接口 (保存对话功能, 解读任意语言代码+同时询问任意的LLM组合)
|
||||
@@ -292,26 +379,63 @@ Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史h
|
||||
- version 3.0: 对chatglm和其他小型llm的支持
|
||||
- version 2.6: 重构了插件结构,提高了交互性,加入更多插件
|
||||
- version 2.5: 自更新,解决总结大工程源代码时文本过长、token溢出的问题
|
||||
- version 2.4: (1)新增PDF全文翻译功能; (2)新增输入区切换位置的功能; (3)新增垂直布局选项; (4)多线程函数插件优化。
|
||||
- version 2.4: 新增PDF全文翻译功能; 新增输入区切换位置的功能
|
||||
- version 2.3: 增强多线程交互性
|
||||
- version 2.2: 函数插件支持热重载
|
||||
- version 2.1: 可折叠式布局
|
||||
- version 2.0: 引入模块化函数插件
|
||||
- version 1.0: 基础功能
|
||||
|
||||
gpt_academic开发者QQ群-2:610599535
|
||||
GPT Academic开发者QQ群:`610599535`
|
||||
|
||||
- 已知问题
|
||||
- 某些浏览器翻译插件干扰此软件前端的运行
|
||||
- 官方Gradio目前有很多兼容性Bug,请务必使用`requirements.txt`安装Gradio
|
||||
- 官方Gradio目前有很多兼容性问题,请**务必使用`requirements.txt`安装Gradio**
|
||||
|
||||
## 参考与学习
|
||||
```mermaid
|
||||
timeline
|
||||
title GPT-Academic项目发展历程
|
||||
section 2.x
|
||||
1.0~2.2: 基础功能: 引入模块化函数插件: 可折叠式布局: 函数插件支持热重载
|
||||
2.3~2.5: 增强多线程交互性: 新增PDF全文翻译功能: 新增输入区切换位置的功能: 自更新
|
||||
2.6: 重构了插件结构: 提高了交互性: 加入更多插件
|
||||
section 3.x
|
||||
3.0~3.1: 对chatglm支持: 对其他小型llm支持: 支持同时问询多个gpt模型: 支持多个apikey负载均衡
|
||||
3.2~3.3: 函数插件支持更多参数接口: 保存对话功能: 解读任意语言代码: 同时询问任意的LLM组合: 互联网信息综合功能
|
||||
3.4: 加入arxiv论文翻译: 加入latex论文批改功能
|
||||
3.44: 正式支持Azure: 优化界面易用性
|
||||
3.46: 自定义ChatGLM2微调模型: 实时语音对话
|
||||
3.49: 支持阿里达摩院通义千问: 上海AI-Lab书生: 讯飞星火: 支持百度千帆平台 & 文心一言
|
||||
3.50: 虚空终端: 支持插件分类: 改进UI: 设计新主题
|
||||
3.53: 动态选择不同界面主题: 提高稳定性: 解决多用户冲突问题
|
||||
3.55: 动态代码解释器: 重构前端界面: 引入悬浮窗口与菜单栏
|
||||
3.56: 动态追加基础功能按钮: 新增汇报PDF汇总页面
|
||||
3.57: GLM3, 星火v3: 支持文心一言v4: 修复本地模型的并发BUG
|
||||
3.60: 引入AutoGen
|
||||
3.70: 引入Mermaid绘图: 实现GPT画脑图等功能
|
||||
3.80(TODO): 优化AutoGen插件主题: 设计衍生插件
|
||||
|
||||
```
|
||||
|
||||
|
||||
### III:主题
|
||||
可以通过修改`THEME`选项(config.py)变更主题
|
||||
1. `Chuanhu-Small-and-Beautiful` [网址](https://github.com/GaiZhenbiao/ChuanhuChatGPT/)
|
||||
|
||||
|
||||
### IV:本项目的开发分支
|
||||
|
||||
1. `master` 分支: 主分支,稳定版
|
||||
2. `frontier` 分支: 开发分支,测试版
|
||||
3. 如何[接入其他大模型](request_llms/README.md)
|
||||
|
||||
### V:参考与学习
|
||||
|
||||
```
|
||||
代码中参考了很多其他优秀项目中的设计,顺序不分先后:
|
||||
|
||||
# 清华ChatGLM-6B:
|
||||
https://github.com/THUDM/ChatGLM-6B
|
||||
# 清华ChatGLM2-6B:
|
||||
https://github.com/THUDM/ChatGLM2-6B
|
||||
|
||||
# 清华JittorLLMs:
|
||||
https://github.com/Jittor/JittorLLMs
|
||||
|
||||
check_proxy.py
@@ -1,28 +1,77 @@
|
||||
from loguru import logger
|
||||
|
||||
def check_proxy(proxies):
|
||||
def check_proxy(proxies, return_ip=False):
|
||||
"""
|
||||
检查代理配置并返回结果。
|
||||
|
||||
Args:
|
||||
proxies (dict): 包含http和https代理配置的字典。
|
||||
return_ip (bool, optional): 是否返回代理的IP地址。默认为False。
|
||||
|
||||
Returns:
|
||||
str or None: 检查的结果信息或代理的IP地址(如果`return_ip`为True)。
|
||||
"""
|
||||
import requests
|
||||
proxies_https = proxies['https'] if proxies is not None else '无'
|
||||
ip = None
|
||||
try:
|
||||
response = requests.get("https://ipapi.co/json/",
|
||||
proxies=proxies, timeout=4)
|
||||
response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4) # ⭐ 执行GET请求以获取代理信息
|
||||
data = response.json()
|
||||
print(f'查询代理的地理位置,返回的结果是{data}')
|
||||
if 'country_name' in data:
|
||||
country = data['country_name']
|
||||
result = f"代理配置 {proxies_https}, 代理所在地:{country}"
|
||||
if 'ip' in data:
|
||||
ip = data['ip']
|
||||
elif 'error' in data:
|
||||
result = f"代理配置 {proxies_https}, 代理所在地:未知,IP查询频率受限"
|
||||
print(result)
|
||||
return result
|
||||
alternative, ip = _check_with_backup_source(proxies) # ⭐ 调用备用方法检查代理配置
|
||||
if alternative is None:
|
||||
result = f"代理配置 {proxies_https}, 代理所在地:未知,IP查询频率受限"
|
||||
else:
|
||||
result = f"代理配置 {proxies_https}, 代理所在地:{alternative}"
|
||||
else:
|
||||
result = f"代理配置 {proxies_https}, 代理数据解析失败:{data}"
|
||||
|
||||
if not return_ip:
|
||||
logger.warning(result)
|
||||
return result
|
||||
else:
|
||||
return ip
|
||||
except:
|
||||
result = f"代理配置 {proxies_https}, 代理所在地查询超时,代理可能无效"
|
||||
print(result)
|
||||
return result
|
||||
if not return_ip:
|
||||
logger.warning(result)
|
||||
return result
|
||||
else:
|
||||
return ip
|
||||
|
||||
def _check_with_backup_source(proxies):
|
||||
"""
|
||||
通过备份源检查代理,并获取相应信息。
|
||||
|
||||
Args:
|
||||
proxies (dict): 包含代理信息的字典。
|
||||
|
||||
Returns:
|
||||
tuple: 代理信息(geo)和IP地址(ip)的元组。
|
||||
"""
|
||||
import random, string, requests
|
||||
random_string = ''.join(random.choices(string.ascii_letters + string.digits, k=32))
|
||||
try:
|
||||
res_json = requests.get(f"http://{random_string}.edns.ip-api.com/json", proxies=proxies, timeout=4).json() # ⭐ 执行代理检查和备份源请求
|
||||
return res_json['dns']['geo'], res_json['dns']['ip']
|
||||
except:
|
||||
return None, None
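
以下为 `return_ip` 参数的用法示意(非本文件原有代码,proxies 取值为占位):

```python
# 获取代理出口IP的用法示意
from check_proxy import check_proxy

proxies = {"http": "socks5h://localhost:11284", "https": "socks5h://localhost:11284"}
ip = check_proxy(proxies, return_ip=True)  # 成功时返回IP字符串,查询失败时返回 None
```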
|
||||
|
||||
def backup_and_download(current_version, remote_version):
|
||||
"""
|
||||
一键更新协议:备份和下载
|
||||
一键更新协议:备份当前版本,下载远程版本并解压缩。
|
||||
|
||||
Args:
|
||||
current_version (str): 当前版本号。
|
||||
remote_version (str): 远程版本号。
|
||||
|
||||
Returns:
|
||||
str: 新版本目录的路径。
|
||||
"""
|
||||
from toolbox import get_conf
|
||||
import shutil
|
||||
@@ -36,10 +85,10 @@ def backup_and_download(current_version, remote_version):
|
||||
return new_version_dir
|
||||
os.makedirs(new_version_dir)
|
||||
shutil.copytree('./', backup_dir, ignore=lambda x, y: ['history'])
|
||||
proxies, = get_conf('proxies')
|
||||
r = requests.get(
|
||||
'https://github.com/binary-husky/chatgpt_academic/archive/refs/heads/master.zip', proxies=proxies, stream=True)
|
||||
zip_file_path = backup_dir+'/master.zip'
|
||||
proxies = get_conf('proxies')
|
||||
try: r = requests.get('https://github.com/binary-husky/chatgpt_academic/archive/refs/heads/master.zip', proxies=proxies, stream=True)
|
||||
except: r = requests.get('https://public.agent-matrix.com/publish/master.zip', proxies=proxies, stream=True)
|
||||
zip_file_path = backup_dir+'/master.zip' # ⭐ 保存备份文件的路径
|
||||
with open(zip_file_path, 'wb+') as f:
|
||||
f.write(r.content)
|
||||
dst_path = new_version_dir
|
||||
@@ -55,6 +104,17 @@ def backup_and_download(current_version, remote_version):
|
||||
def patch_and_restart(path):
|
||||
"""
|
||||
一键更新协议:覆盖和重启
|
||||
|
||||
Args:
|
||||
path (str): 新版本代码所在的路径
|
||||
|
||||
注意事项:
|
||||
如果您的程序没有使用config_private.py私密配置文件,则会将config.py重命名为config_private.py以避免配置丢失。
|
||||
|
||||
更新流程:
|
||||
- 复制最新版本代码到当前目录
|
||||
- 更新pip包依赖
|
||||
- 如果更新失败,则提示手动安装依赖库并重启
|
||||
"""
|
||||
from distutils import dir_util
|
||||
import shutil
|
||||
@@ -62,33 +122,44 @@ def patch_and_restart(path):
|
||||
import sys
|
||||
import time
|
||||
import glob
|
||||
from colorful import print亮黄, print亮绿, print亮红
|
||||
# if not using config_private, move origin config.py as config_private.py
|
||||
from shared_utils.colorful import log亮黄, log亮绿, log亮红
|
||||
|
||||
if not os.path.exists('config_private.py'):
|
||||
print亮黄('由于您没有设置config_private.py私密配置,现将您的现有配置移动至config_private.py以防止配置丢失,',
|
||||
log亮黄('由于您没有设置config_private.py私密配置,现将您的现有配置移动至config_private.py以防止配置丢失,',
|
||||
'另外您可以随时在history子文件夹下找回旧版的程序。')
|
||||
shutil.copyfile('config.py', 'config_private.py')
|
||||
|
||||
path_new_version = glob.glob(path + '/*-master')[0]
|
||||
dir_util.copy_tree(path_new_version, './')
|
||||
print亮绿('代码已经更新,即将更新pip包依赖……')
|
||||
for i in reversed(range(5)): time.sleep(1); print(i)
|
||||
try:
|
||||
dir_util.copy_tree(path_new_version, './') # ⭐ 将最新版本代码复制到当前目录
|
||||
|
||||
log亮绿('代码已经更新,即将更新pip包依赖……')
|
||||
for i in reversed(range(5)): time.sleep(1); log亮绿(i)
|
||||
|
||||
try:
|
||||
import subprocess
|
||||
subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-r', 'requirements.txt'])
|
||||
except:
|
||||
print亮红('pip包依赖安装出现问题,需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。')
|
||||
print亮绿('更新完成,您可以随时在history子文件夹下找回旧版的程序,5s之后重启')
|
||||
print亮红('假如重启失败,您可能需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。')
|
||||
print(' ------------------------------ -----------------------------------')
|
||||
for i in reversed(range(8)): time.sleep(1); print(i)
|
||||
os.execl(sys.executable, sys.executable, *sys.argv)
|
||||
log亮红('pip包依赖安装出现问题,需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。')
|
||||
|
||||
log亮绿('更新完成,您可以随时在history子文件夹下找回旧版的程序,5s之后重启')
|
||||
log亮红('假如重启失败,您可能需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。')
|
||||
log亮绿(' ------------------------------ -----------------------------------')
|
||||
|
||||
for i in reversed(range(8)): time.sleep(1); log亮绿(i)
|
||||
os.execl(sys.executable, sys.executable, *sys.argv) # 重启程序
|
||||
|
||||
|
||||
def get_current_version():
|
||||
"""
|
||||
获取当前的版本号。
|
||||
|
||||
Returns:
|
||||
str: 当前的版本号。如果无法获取版本号,则返回空字符串。
|
||||
"""
|
||||
import json
|
||||
try:
|
||||
with open('./version', 'r', encoding='utf8') as f:
|
||||
current_version = json.loads(f.read())['version']
|
||||
current_version = json.loads(f.read())['version'] # ⭐ 从读取的json数据中提取版本号
|
||||
except:
|
||||
current_version = ""
|
||||
return current_version
|
||||
@@ -97,15 +168,20 @@ def get_current_version():
|
||||
def auto_update(raise_error=False):
|
||||
"""
|
||||
一键更新协议:查询版本和用户意见
|
||||
|
||||
Args:
|
||||
raise_error (bool, optional): 是否在出错时抛出错误。默认为 False。
|
||||
|
||||
Returns:
|
||||
None
|
||||
"""
|
||||
try:
|
||||
from toolbox import get_conf
|
||||
import requests
|
||||
import time
|
||||
import json
|
||||
proxies, = get_conf('proxies')
|
||||
response = requests.get(
|
||||
"https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/version", proxies=proxies, timeout=5)
|
||||
proxies = get_conf('proxies')
|
||||
try: response = requests.get("https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/version", proxies=proxies, timeout=5)
|
||||
except: response = requests.get("https://public.agent-matrix.com/publish/version", proxies=proxies, timeout=5)
|
||||
remote_json_data = json.loads(response.text)
|
||||
remote_version = remote_json_data['version']
|
||||
if remote_json_data["show_feature"]:
|
||||
@@ -115,45 +191,67 @@ def auto_update(raise_error=False):
|
||||
with open('./version', 'r', encoding='utf8') as f:
|
||||
current_version = f.read()
|
||||
current_version = json.loads(current_version)['version']
|
||||
if (remote_version - current_version) >= 0.01:
|
||||
from colorful import print亮黄
|
||||
print亮黄(
|
||||
f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}。{new_feature}')
|
||||
print('(1)Github更新地址:\nhttps://github.com/binary-husky/chatgpt_academic\n')
|
||||
if (remote_version - current_version) >= 0.01-1e-5:
|
||||
from shared_utils.colorful import log亮黄
|
||||
log亮黄(f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}。{new_feature}') # ⭐ 在控制台打印新版本信息
|
||||
logger.info('(1)Github更新地址:\nhttps://github.com/binary-husky/chatgpt_academic\n')
|
||||
user_instruction = input('(2)是否一键更新代码(Y+回车=确认,输入其他/无输入+回车=不更新)?')
|
||||
if user_instruction in ['Y', 'y']:
|
||||
path = backup_and_download(current_version, remote_version)
|
||||
path = backup_and_download(current_version, remote_version) # ⭐ 备份并下载文件
|
||||
try:
|
||||
patch_and_restart(path)
|
||||
patch_and_restart(path) # ⭐ 执行覆盖并重启操作
|
||||
except:
|
||||
msg = '更新失败。'
|
||||
if raise_error:
|
||||
from toolbox import trimmed_format_exc
|
||||
msg += trimmed_format_exc()
|
||||
print(msg)
|
||||
logger.warning(msg)
|
||||
else:
|
||||
print('自动更新程序:已禁用')
|
||||
logger.info('自动更新程序:已禁用')
|
||||
return
|
||||
else:
|
||||
return
|
||||
except:
|
||||
msg = '自动更新程序:已禁用'
|
||||
msg = '自动更新程序:已禁用。建议排查:代理网络配置。'
|
||||
if raise_error:
|
||||
from toolbox import trimmed_format_exc
|
||||
msg += trimmed_format_exc()
|
||||
print(msg)
|
||||
logger.info(msg)
|
||||
|
||||
def warm_up_modules():
|
||||
print('正在执行一些模块的预热...')
|
||||
from request_llm.bridge_all import model_info
|
||||
enc = model_info["gpt-3.5-turbo"]['tokenizer']
|
||||
enc.encode("模块预热", disallowed_special=())
|
||||
enc = model_info["gpt-4"]['tokenizer']
|
||||
enc.encode("模块预热", disallowed_special=())
|
||||
"""
|
||||
预热模块,加载特定模块并执行预热操作。
|
||||
"""
|
||||
logger.info('正在执行一些模块的预热 ...')
|
||||
from toolbox import ProxyNetworkActivate
|
||||
from request_llms.bridge_all import model_info
|
||||
with ProxyNetworkActivate("Warmup_Modules"):
|
||||
enc = model_info["gpt-3.5-turbo"]['tokenizer']
|
||||
enc.encode("模块预热", disallowed_special=())
|
||||
enc = model_info["gpt-4"]['tokenizer']
|
||||
enc.encode("模块预热", disallowed_special=())
|
||||
|
||||
def warm_up_vectordb():
|
||||
"""
|
||||
执行一些模块的预热操作。
|
||||
|
||||
本函数主要用于执行一些模块的预热操作,确保在后续的流程中能够顺利运行。
|
||||
|
||||
⭐ 关键作用:预热模块
|
||||
|
||||
Returns:
|
||||
None
|
||||
"""
|
||||
logger.info('正在执行一些模块的预热 ...')
|
||||
from toolbox import ProxyNetworkActivate
|
||||
with ProxyNetworkActivate("Warmup_Modules"):
|
||||
import nltk
|
||||
with ProxyNetworkActivate("Warmup_Modules"): nltk.download("punkt")
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
import os
|
||||
os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染
|
||||
from toolbox import get_conf
|
||||
proxies, = get_conf('proxies')
|
||||
check_proxy(proxies)
|
||||
proxies = get_conf('proxies')
|
||||
check_proxy(proxies)
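
上面 `auto_update` 将版本比较条件从 `>= 0.01` 改为 `>= 0.01-1e-5`,是为了规避二进制浮点无法精确表示十进制小数造成的漏报(例如 3.57 - 3.56 的浮点结果可能略小于 0.01)。一个最小演示(仅为示意):

```python
# 浮点容差比较示意:减去微小容差eps,抵消浮点舍入误差
def has_new_version(remote: float, current: float, eps: float = 1e-5) -> bool:
    return (remote - current) >= 0.01 - eps

print(has_new_version(3.57, 3.56))  # True:即使浮点差值略偏离0.01也能正确识别新版本
```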
|
||||
config.py
@@ -1,17 +1,32 @@
|
||||
# [step 1]>> 例如: API_KEY = "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" (此key无效)
|
||||
API_KEY = "sk-此处填API密钥" # 可同时填写多个API-KEY,用英文逗号分割,例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey1,fkxxxx-api2dkey2"
|
||||
"""
|
||||
以下所有配置也都支持利用环境变量覆写,环境变量配置格式见docker-compose.yml。
|
||||
读取优先级:环境变量 > config_private.py > config.py
|
||||
--- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
|
||||
All the following configurations also support using environment variables to override,
|
||||
and the environment variable configuration format can be seen in docker-compose.yml.
|
||||
Configuration reading priority: environment variable > config_private.py > config.py
|
||||
"""
|
||||
|
||||
# [step 1-1]>> ( 接入GPT等模型 ) API_KEY = "sk-123456789xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx123456789"。极少数情况下,还需要填写组织(格式如org-123456789abcdefghijklmno的),请向下翻,找 API_ORG 设置项
|
||||
API_KEY = "在此处填写APIKEY" # 可同时填写多个API-KEY,用英文逗号分割,例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey3,azure-apikey4"
|
||||
|
||||
# [step 2]>> 改为True应用代理,如果直接在海外服务器部署,此处不修改
|
||||
# [step 1-2]>> ( 接入通义 qwen-max ) 接入通义千问在线大模型,api-key获取地址 https://dashscope.console.aliyun.com/
|
||||
DASHSCOPE_API_KEY = "" # 阿里灵积云API_KEY
|
||||
|
||||
# [step 1-3]>> ( 接入 deepseek-reasoner, 即 deepseek-r1 ) 深度求索(DeepSeek) API KEY,默认请求地址为"https://api.deepseek.com/v1/chat/completions"
|
||||
DEEPSEEK_API_KEY = ""
|
||||
|
||||
# [step 2]>> 改为True应用代理。如果使用本地或无地域限制的大模型时,此处不修改;如果直接在海外服务器部署,此处不修改
|
||||
USE_PROXY = False
|
||||
if USE_PROXY:
|
||||
# 填写格式是 [协议]:// [地址] :[端口],填写之前不要忘记把USE_PROXY改成True,如果直接在海外服务器部署,此处不修改
|
||||
# 例如 "socks5h://localhost:11284"
|
||||
# [协议] 常见协议无非socks5h/http; 例如 v2**y 和 ss* 的默认本地协议是socks5h; 而cl**h 的默认本地协议是http
|
||||
# [地址] 懂的都懂,不懂就填localhost或者127.0.0.1肯定错不了(localhost意思是代理软件安装在本机上)
|
||||
# [端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都应该在最显眼的位置上
|
||||
|
||||
# 代理网络的地址,打开你的*学*网软件查看代理的协议(socks5/http)、地址(localhost)和端口(11284)
|
||||
"""
|
||||
代理网络的地址,打开你的代理软件查看代理协议(socks5h / http)、地址(localhost)和端口(11284)
|
||||
填写格式是 [协议]:// [地址] :[端口],填写之前不要忘记把USE_PROXY改成True,如果直接在海外服务器部署,此处不修改
|
||||
<配置教程&视频教程> https://github.com/binary-husky/gpt_academic/issues/1>
|
||||
[协议] 常见协议无非socks5h/http; 例如 v2**y 和 ss* 的默认本地协议是socks5h; 而cl**h 的默认本地协议是http
|
||||
[地址] 填localhost或者127.0.0.1(localhost意思是代理软件安装在本机上)
|
||||
[端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都应该在最显眼的位置上
|
||||
"""
|
||||
proxies = {
|
||||
# [协议]:// [地址] :[端口]
|
||||
"http": "socks5h://localhost:11284", # 再例如 "http": "http://127.0.0.1:7890",
|
||||
@@ -20,72 +35,420 @@ if USE_PROXY:
|
||||
else:
|
||||
proxies = None
|
||||
|
||||
# [step 3]>> 多线程函数插件中,默认允许多少路线程同时访问OpenAI。Free trial users的限制是每分钟3次,Pay-as-you-go users的限制是每分钟3500次
|
||||
# 一言以蔽之:免费用户填3,OpenAI绑了信用卡的用户可以填 16 或者更高。提高限制请查询:https://platform.openai.com/docs/guides/rate-limits/overview
|
||||
DEFAULT_WORKER_NUM = 3
|
||||
# [step 3]>> 模型选择 (注意: LLM_MODEL是默认选中的模型, 它*必须*被包含在AVAIL_LLM_MODELS列表中 )
|
||||
LLM_MODEL = "gpt-3.5-turbo-16k" # 可选 ↓↓↓
|
||||
AVAIL_LLM_MODELS = ["qwen-max", "o1-mini", "o1-mini-2024-09-12", "o1", "o1-2024-12-17", "o1-preview", "o1-preview-2024-09-12",
|
||||
"gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-preview",
|
||||
"gpt-4o", "gpt-4o-mini", "gpt-4-turbo", "gpt-4-turbo-2024-04-09",
|
||||
"gpt-3.5-turbo-1106", "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
|
||||
"gpt-4", "gpt-4-32k", "azure-gpt-4", "glm-4", "glm-4v", "glm-3-turbo",
|
||||
"gemini-1.5-pro", "chatglm3", "chatglm4",
|
||||
"deepseek-chat", "deepseek-coder", "deepseek-reasoner"
|
||||
]
|
||||
|
||||
EMBEDDING_MODEL = "text-embedding-3-small"
|
||||
|
||||
# --- --- --- ---
|
||||
# P.S. 其他可用的模型还包括
|
||||
# AVAIL_LLM_MODELS = [
|
||||
# "glm-4-0520", "glm-4-air", "glm-4-airx", "glm-4-flash",
|
||||
# "qianfan", "deepseekcoder",
|
||||
# "spark", "sparkv2", "sparkv3", "sparkv3.5", "sparkv4",
|
||||
# "qwen-turbo", "qwen-plus", "qwen-local",
|
||||
# "moonshot-v1-128k", "moonshot-v1-32k", "moonshot-v1-8k",
|
||||
# "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-0125", "gpt-4o-2024-05-13"
|
||||
# "claude-3-haiku-20240307","claude-3-sonnet-20240229","claude-3-opus-20240229", "claude-2.1", "claude-instant-1.2",
|
||||
# "moss", "llama2", "chatglm_onnx", "internlm", "jittorllms_pangualpha", "jittorllms_llama",
|
||||
# "deepseek-chat" ,"deepseek-coder",
|
||||
# "gemini-1.5-flash",
|
||||
# "yi-34b-chat-0205","yi-34b-chat-200k","yi-large","yi-medium","yi-spark","yi-large-turbo","yi-large-preview",
|
||||
# "grok-beta",
|
||||
# ]
|
||||
# --- --- --- ---
|
||||
# 此外,您还可以在接入one-api/vllm/ollama/Openroute时,
|
||||
# 使用"one-api-*","vllm-*","ollama-*","openrouter-*"前缀直接使用非标准方式接入的模型,例如
|
||||
# AVAIL_LLM_MODELS = ["one-api-claude-3-sonnet-20240229(max_token=100000)", "ollama-phi3(max_token=4096)","openrouter-openai/gpt-4o-mini","openrouter-openai/chatgpt-4o-latest"]
|
||||
# --- --- --- ---
|
||||
|
||||
|
||||
# [step 4]>> 以下配置可以优化体验,但大部分场合下并不需要修改
|
||||
# 对话窗的高度
|
||||
# --------------- 以下配置可以优化体验 ---------------
|
||||
|
||||
# 重新URL重新定向,实现更换API_URL的作用(高危设置! 常规情况下不要修改! 通过修改此设置,您将把您的API-KEY和对话隐私完全暴露给您设定的中间人!)
|
||||
# 格式: API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "在这里填写重定向的api.openai.com的URL"}
|
||||
# 举例: API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "https://reverse-proxy-url/v1/chat/completions", "http://localhost:11434/api/chat": "在这里填写您ollama的URL"}
|
||||
API_URL_REDIRECT = {}
|
||||
|
||||
|
||||
# 多线程函数插件中,默认允许多少路线程同时访问OpenAI。Free trial users的限制是每分钟3次,Pay-as-you-go users的限制是每分钟3500次
|
||||
# 一言以蔽之:免费(5刀)用户填3,OpenAI绑了信用卡的用户可以填 16 或者更高。提高限制请查询:https://platform.openai.com/docs/guides/rate-limits/overview
|
||||
DEFAULT_WORKER_NUM = 8
|
||||
|
||||
|
||||
# 色彩主题, 可选 ["Default", "Chuanhu-Small-and-Beautiful", "High-Contrast"]
|
||||
# 更多主题, 请查阅Gradio主题商店: https://huggingface.co/spaces/gradio/theme-gallery 可选 ["Gstaff/Xkcd", "NoCrypt/Miku", ...]
|
||||
THEME = "Default"
|
||||
AVAIL_THEMES = ["Default", "Chuanhu-Small-and-Beautiful", "High-Contrast", "Gstaff/Xkcd", "NoCrypt/Miku"]
|
||||
|
||||
FONT = "Theme-Default-Font"
|
||||
AVAIL_FONTS = [
|
||||
"默认值(Theme-Default-Font)",
|
||||
"宋体(SimSun)",
|
||||
"黑体(SimHei)",
|
||||
"楷体(KaiTi)",
|
||||
"仿宋(FangSong)",
|
||||
"华文细黑(STHeiti Light)",
|
||||
"华文楷体(STKaiti)",
|
||||
"华文仿宋(STFangsong)",
|
||||
"华文宋体(STSong)",
|
||||
"华文中宋(STZhongsong)",
|
||||
"华文新魏(STXinwei)",
|
||||
"华文隶书(STLiti)",
|
||||
# 备注:以下字体需要网络支持,您可以自定义任意您喜欢的字体,如下所示,需要满足的格式为 "字体昵称(字体英文真名@字体css下载链接)"
|
||||
"思源宋体(Source Han Serif CN VF@https://chinese-fonts-cdn.deno.dev/packages/syst/dist/SourceHanSerifCN/result.css)",
|
||||
"月星楷(Moon Stars Kai HW@https://chinese-fonts-cdn.deno.dev/packages/moon-stars-kai/dist/MoonStarsKaiHW-Regular/result.css)",
|
||||
"珠圆体(MaokenZhuyuanTi@https://chinese-fonts-cdn.deno.dev/packages/mkzyt/dist/猫啃珠圆体/result.css)",
|
||||
"平方萌萌哒(PING FANG MENG MNEG DA@https://chinese-fonts-cdn.deno.dev/packages/pfmmd/dist/平方萌萌哒/result.css)",
|
||||
"Helvetica",
|
||||
"ui-sans-serif",
|
||||
"sans-serif",
|
||||
"system-ui"
|
||||
]
|
||||
|
||||
|
||||
# 默认的系统提示词(system prompt)
|
||||
INIT_SYS_PROMPT = "Serve me as a writing and programming assistant."
|
||||
|
||||
|
||||
# 对话窗的高度 (仅在LAYOUT="TOP-DOWN"时生效)
|
||||
CHATBOT_HEIGHT = 1115
|
||||
|
||||
|
||||
# 代码高亮
|
||||
CODE_HIGHLIGHT = True
|
||||
|
||||
|
||||
# 窗口布局
|
||||
LAYOUT = "LEFT-RIGHT" # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局)
|
||||
DARK_MODE = True # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局)
|
||||
LAYOUT = "LEFT-RIGHT" # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局)
|
||||
|
||||
|
||||
# 暗色模式 / 亮色模式
|
||||
DARK_MODE = True
|
||||
|
||||
|
||||
# 发送请求到OpenAI后,等待多久判定为超时
|
||||
TIMEOUT_SECONDS = 30
|
||||
|
||||
|
||||
# 网页的端口, -1代表随机端口
|
||||
WEB_PORT = -1
|
||||
|
||||
|
||||
# 是否自动打开浏览器页面
|
||||
AUTO_OPEN_BROWSER = True
|
||||
|
||||
|
||||
# 如果OpenAI不响应(网络卡顿、代理失败、KEY失效),重试的次数限制
|
||||
MAX_RETRY = 2
|
||||
|
||||
# 模型选择 (注意: LLM_MODEL是默认选中的模型, 同时它必须被包含在AVAIL_LLM_MODELS切换列表中 )
|
||||
LLM_MODEL = "gpt-3.5-turbo" # 可选 ↓↓↓
|
||||
AVAIL_LLM_MODELS = ["gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt35", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "moss", "newbing", "newbing-free", "stack-claude"]
|
||||
# P.S. 其他可用的模型还包括 ["gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "newbing-free", "jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
|
||||
|
||||
# 插件分类默认选项
|
||||
DEFAULT_FN_GROUPS = ['对话', '编程', '学术', '智能体']
|
||||
|
||||
|
||||
# 定义界面上“询问多个GPT模型”插件应该使用哪些模型,请从AVAIL_LLM_MODELS中选择,并在不同模型之间用`&`间隔,例如"gpt-3.5-turbo&chatglm3&azure-gpt-4"
|
||||
MULTI_QUERY_LLM_MODELS = "gpt-3.5-turbo&chatglm3"
|
||||
|
||||
|
||||
# 选择本地模型变体(只有当AVAIL_LLM_MODELS包含了对应本地模型时,才会起作用)
|
||||
# 如果你选择Qwen系列的模型,那么请在下面的QWEN_MODEL_SELECTION中指定具体的模型
|
||||
# 也可以是具体的模型路径
|
||||
QWEN_LOCAL_MODEL_SELECTION = "Qwen/Qwen-1_8B-Chat-Int8"
|
||||
|
||||
|
||||
# 百度千帆(LLM_MODEL="qianfan")
|
||||
BAIDU_CLOUD_API_KEY = ''
|
||||
BAIDU_CLOUD_SECRET_KEY = ''
|
||||
BAIDU_CLOUD_QIANFAN_MODEL = 'ERNIE-Bot' # 可选 "ERNIE-Bot-4"(文心大模型4.0), "ERNIE-Bot"(文心一言), "ERNIE-Bot-turbo", "BLOOMZ-7B", "Llama-2-70B-Chat", "Llama-2-13B-Chat", "Llama-2-7B-Chat", "ERNIE-Speed-128K", "ERNIE-Speed-8K", "ERNIE-Lite-8K"
|
||||
|
||||
|
||||
# 如果使用ChatGLM3或ChatGLM4本地模型,请把 LLM_MODEL="chatglm3" 或LLM_MODEL="chatglm4",并在此处指定模型路径
|
||||
CHATGLM_LOCAL_MODEL_PATH = "THUDM/glm-4-9b-chat" # 例如"/home/hmp/ChatGLM3-6B/"
|
||||
|
||||
# 如果使用ChatGLM2微调模型,请把 LLM_MODEL="chatglmft",并在此处指定模型路径
|
||||
CHATGLM_PTUNING_CHECKPOINT = "" # 例如"/home/hmp/ChatGLM2-6B/ptuning/output/6b-pt-128-1e-2/checkpoint-100"
|
||||
|
||||
|
||||
# 本地LLM模型如ChatGLM的执行方式 CPU/GPU
|
||||
LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda"
|
||||
LOCAL_MODEL_QUANT = "FP16" # 默认 "FP16" "INT4" 启用量化INT4版本 "INT8" 启用量化INT8版本
|
||||
|
||||
|
||||
# 设置gradio的并行线程数(不需要修改)
|
||||
CONCURRENT_COUNT = 100
|
||||
|
||||
|
||||
# 是否在提交时自动清空输入框
|
||||
AUTO_CLEAR_TXT = False
|
||||
|
||||
|
||||
# 加一个live2d装饰
|
||||
ADD_WAIFU = False
|
||||
|
||||
|
||||
# 设置用户名和密码(不需要修改)(相关功能不稳定,与gradio版本和网络都相关,如果本地使用不建议加这个)
|
||||
# [("username", "password"), ("username2", "password2"), ...]
|
||||
AUTHENTICATION = []
|
||||
|
||||
# 重新URL重新定向,实现更换API_URL的作用(常规情况下,不要修改!!)
|
||||
# (高危设置!通过修改此设置,您将把您的API-KEY和对话隐私完全暴露给您设定的中间人!)
|
||||
# 格式 {"https://api.openai.com/v1/chat/completions": "在这里填写重定向的api.openai.com的URL"}
|
||||
# 例如 API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "https://ai.open.com/api/conversation"}
|
||||
API_URL_REDIRECT = {}
|
||||
|
||||
# 如果需要在二级路径下运行(常规情况下,不要修改!!)(需要配合修改main.py才能生效!)
|
||||
# 如果需要在二级路径下运行(常规情况下,不要修改!!)
|
||||
# (举例 CUSTOM_PATH = "/gpt_academic",可以让软件运行在 http://ip:port/gpt_academic/ 下。)
|
||||
CUSTOM_PATH = "/"
|
||||
|
||||
# 如果需要使用newbing,把newbing的长长的cookie放到这里
|
||||
NEWBING_STYLE = "creative" # ["creative", "balanced", "precise"]
|
||||
# 从现在起,如果您调用"newbing-free"模型,则无需填写NEWBING_COOKIES
|
||||
NEWBING_COOKIES = """
|
||||
your bing cookies here
|
||||
"""
|
||||
|
||||
# 如果需要使用Slack Claude,使用教程详情见 request_llm/README.md
|
||||
SLACK_CLAUDE_BOT_ID = ''
|
||||
# HTTPS 秘钥和证书(不需要修改)
|
||||
SSL_KEYFILE = ""
|
||||
SSL_CERTFILE = ""
|
||||
|
||||
|
||||
# 极少数情况下,openai的官方KEY需要伴随组织编码(格式如org-xxxxxxxxxxxxxxxxxxxxxxxx)使用
|
||||
API_ORG = ""
|
||||
|
||||
|
||||
# 如果需要使用Slack Claude,使用教程详情见 request_llms/README.md
|
||||
SLACK_CLAUDE_BOT_ID = ''
|
||||
SLACK_CLAUDE_USER_TOKEN = ''
|
||||
|
||||
|
||||
# 如果需要使用AZURE 详情请见额外文档 docs\use_azure.md
|
||||
AZURE_ENDPOINT = "https://你的api名称.openai.azure.com/"
|
||||
AZURE_API_KEY = "填入azure openai api的密钥"
|
||||
AZURE_API_VERSION = "填入api版本"
|
||||
AZURE_ENGINE = "填入ENGINE"
|
||||
# 如果需要使用AZURE(方法一:单个azure模型部署)详情请见额外文档 docs\use_azure.md
|
||||
AZURE_ENDPOINT = "https://你亲手写的api名称.openai.azure.com/"
|
||||
AZURE_API_KEY = "填入azure openai api的密钥" # 建议直接在API_KEY处填写,该选项即将被弃用
|
||||
AZURE_ENGINE = "填入你亲手写的部署名" # 读 docs\use_azure.md
|
||||
|
||||
|
||||
# 如果需要使用AZURE(方法二:多个azure模型部署+动态切换)详情请见额外文档 docs\use_azure.md
|
||||
AZURE_CFG_ARRAY = {}
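
# AZURE_CFG_ARRAY 的结构示意(键名与细节以 docs\use_azure.md 为准,以下取值均为占位):
# AZURE_CFG_ARRAY = {
#     "azure-gpt-3.5": {
#         "AZURE_ENDPOINT": "https://你的api名称1.openai.azure.com/",
#         "AZURE_API_KEY": "<密钥1>",
#         "AZURE_ENGINE": "<部署名1>",
#         "AZURE_MODEL_MAX_TOKEN": 4096,
#     },
# }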
|
||||
|
||||
|
||||
# 阿里云实时语音识别 配置难度较高
|
||||
# 参考 https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md
|
||||
ENABLE_AUDIO = False
|
||||
ALIYUN_TOKEN="" # 例如 f37f30e0f9934c34a992f6f64f7eba4f
|
||||
ALIYUN_APPKEY="" # 例如 RoPlZrM88DnAFkZK
|
||||
ALIYUN_ACCESSKEY="" # (无需填写)
|
||||
ALIYUN_SECRET="" # (无需填写)
|
||||
|
||||
|
||||
# GPT-SOVITS 文本转语音服务的运行地址(将语言模型的生成文本朗读出来)
|
||||
TTS_TYPE = "EDGE_TTS" # EDGE_TTS / LOCAL_SOVITS_API / DISABLE
|
||||
GPT_SOVITS_URL = ""
|
||||
EDGE_TTS_VOICE = "zh-CN-XiaoxiaoNeural"
|
||||
|
||||
|
||||
# 接入讯飞星火大模型 https://console.xfyun.cn/services/iat
|
||||
XFYUN_APPID = "00000000"
|
||||
XFYUN_API_SECRET = "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
|
||||
XFYUN_API_KEY = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
|
||||
|
||||
|
||||
# 接入智谱大模型
|
||||
ZHIPUAI_API_KEY = ""
|
||||
ZHIPUAI_MODEL = "" # 此选项已废弃,不再需要填写
|
||||
|
||||
|
||||
# Claude API KEY
|
||||
ANTHROPIC_API_KEY = ""
|
||||
|
||||
|
||||
# 月之暗面 API KEY
|
||||
MOONSHOT_API_KEY = ""
|
||||
|
||||
|
||||
# 零一万物(Yi Model) API KEY
|
||||
YIMODEL_API_KEY = ""
|
||||
|
||||
|
||||
# 紫东太初大模型 https://ai-maas.wair.ac.cn
|
||||
TAICHU_API_KEY = ""
|
||||
|
||||
# Grok API KEY
|
||||
GROK_API_KEY = ""
|
||||
|
||||
# Mathpix 可对PDF执行OCR,但需要注册账号
|
||||
MATHPIX_APPID = ""
|
||||
MATHPIX_APPKEY = ""
|
||||
|
||||
|
||||
# DOC2X的PDF解析服务,注册账号并获取API KEY: https://doc2x.noedgeai.com/login
|
||||
DOC2X_API_KEY = ""
|
||||
|
||||
|
||||
# 自定义API KEY格式
|
||||
CUSTOM_API_KEY_PATTERN = ""
|
||||
|
||||
|
||||
# Google Gemini API-Key
|
||||
GEMINI_API_KEY = ''
|
||||
|
||||
|
||||
# HUGGINGFACE的TOKEN,下载LLAMA时起作用 https://huggingface.co/docs/hub/security-tokens
|
||||
HUGGINGFACE_ACCESS_TOKEN = "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"
|
||||
|
||||
|
||||
# GROBID服务器地址(填写多个可以均衡负载),用于高质量地读取PDF文档
|
||||
# 获取方法:复制以下空间https://huggingface.co/spaces/qingxu98/grobid,设为public,然后GROBID_URL = "https://(你的hf用户名如qingxu98)-(你的填写的空间名如grobid).hf.space"
|
||||
GROBID_URLS = [
|
||||
"https://qingxu98-grobid.hf.space","https://qingxu98-grobid2.hf.space","https://qingxu98-grobid3.hf.space",
|
||||
"https://qingxu98-grobid4.hf.space","https://qingxu98-grobid5.hf.space", "https://qingxu98-grobid6.hf.space",
|
||||
"https://qingxu98-grobid7.hf.space", "https://qingxu98-grobid8.hf.space",
|
||||
]
|
||||
|
||||
|
||||
# Searxng互联网检索服务(这是一个huggingface空间,请前往huggingface复制该空间,然后把自己新的空间地址填在这里)
|
||||
SEARXNG_URLS = [ f"https://kaletianlre-beardvs{i}dd.hf.space/" for i in range(1,5) ]
|
||||
|
||||
|
||||
# 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性,默认关闭
|
||||
ALLOW_RESET_CONFIG = False
|
||||
|
||||
|
||||
# 在使用AutoGen插件时,是否使用Docker容器运行代码
|
||||
AUTOGEN_USE_DOCKER = False
|
||||
|
||||
|
||||
# 临时的上传文件夹位置,请尽量不要修改
|
||||
PATH_PRIVATE_UPLOAD = "private_upload"
|
||||
|
||||
|
||||
# 日志文件夹的位置,请尽量不要修改
|
||||
PATH_LOGGING = "gpt_log"
|
||||
|
||||
|
||||
# 存储翻译好的arxiv论文的路径,请尽量不要修改
|
||||
ARXIV_CACHE_DIR = "gpt_log/arxiv_cache"
|
||||
|
||||
|
||||
# 除了连接OpenAI之外,还有哪些场合允许使用代理,请尽量不要修改
|
||||
WHEN_TO_USE_PROXY = ["Connect_OpenAI", "Download_LLM", "Download_Gradio_Theme", "Connect_Grobid",
|
||||
"Warmup_Modules", "Nougat_Download", "AutoGen", "Connect_OpenAI_Embedding"]
|
||||
|
||||
|
||||
# 启用插件热加载
|
||||
PLUGIN_HOT_RELOAD = False
|
||||
|
||||
|
||||
# 自定义按钮的最大数量限制
|
||||
NUM_CUSTOM_BASIC_BTN = 4
|
||||
|
||||
|
||||
# 媒体智能体的服务地址(这是一个huggingface空间,请前往huggingface复制该空间,然后把自己新的空间地址填在这里)
|
||||
DAAS_SERVER_URLS = [ f"https://niuziniu-biligpt{i}.hf.space/stream" for i in range(1,5) ]
|
||||
|
||||
|
||||
# 在互联网搜索组件中,负责将搜索结果整理成干净的Markdown
|
||||
JINA_API_KEY = ""
|
||||
|
||||
"""
|
||||
--------------- 配置关联关系说明 ---------------
|
||||
|
||||
在线大模型配置关联关系示意图
|
||||
│
|
||||
├── "gpt-3.5-turbo" 等openai模型
|
||||
│ ├── API_KEY
|
||||
│ ├── CUSTOM_API_KEY_PATTERN(不常用)
|
||||
│ ├── API_ORG(不常用)
|
||||
│ └── API_URL_REDIRECT(不常用)
|
||||
│
|
||||
├── "azure-gpt-3.5" 等azure模型(单个azure模型,不需要动态切换)
|
||||
│ ├── API_KEY
|
||||
│ ├── AZURE_ENDPOINT
|
||||
│ ├── AZURE_API_KEY
|
||||
│ ├── AZURE_ENGINE
|
||||
│ └── API_URL_REDIRECT
|
||||
│
|
||||
├── "azure-gpt-3.5" 等azure模型(多个azure模型,需要动态切换,高优先级)
|
||||
│ └── AZURE_CFG_ARRAY
|
||||
│
|
||||
├── "spark" 星火认知大模型 spark & sparkv2
|
||||
│ ├── XFYUN_APPID
|
||||
│ ├── XFYUN_API_SECRET
|
||||
│ └── XFYUN_API_KEY
|
||||
│
|
||||
├── "claude-3-opus-20240229" 等claude模型
|
||||
│ └── ANTHROPIC_API_KEY
|
||||
│
|
||||
├── "stack-claude"
|
||||
│ ├── SLACK_CLAUDE_BOT_ID
|
||||
│ └── SLACK_CLAUDE_USER_TOKEN
|
||||
│
|
||||
├── "qianfan" 百度千帆大模型库
|
||||
│ ├── BAIDU_CLOUD_QIANFAN_MODEL
|
||||
│ ├── BAIDU_CLOUD_API_KEY
|
||||
│ └── BAIDU_CLOUD_SECRET_KEY
|
||||
│
|
||||
├── "glm-4", "glm-3-turbo", "zhipuai" 智谱AI大模型
|
||||
│ └── ZHIPUAI_API_KEY
|
||||
│
|
||||
├── "yi-34b-chat-0205", "yi-34b-chat-200k" 等零一万物(Yi Model)大模型
|
||||
│ └── YIMODEL_API_KEY
|
||||
│
|
||||
├── "qwen-turbo" 等通义千问大模型
|
||||
│ └── DASHSCOPE_API_KEY
|
||||
│
|
||||
├── "Gemini"
|
||||
│ └── GEMINI_API_KEY
|
||||
│
|
||||
└── "one-api-...(max_token=...)" 用一种更方便的方式接入one-api多模型管理界面
|
||||
├── AVAIL_LLM_MODELS
|
||||
├── API_KEY
|
||||
└── API_URL_REDIRECT
|
||||
|
||||
|
||||
本地大模型示意图
|
||||
│
|
||||
├── "chatglm4"
|
||||
├── "chatglm3"
|
||||
├── "chatglm"
|
||||
├── "chatglm_onnx"
|
||||
├── "chatglmft"
|
||||
├── "internlm"
|
||||
├── "moss"
|
||||
├── "jittorllms_pangualpha"
|
||||
├── "jittorllms_llama"
|
||||
├── "deepseekcoder"
|
||||
├── "qwen-local"
|
||||
├── RWKV的支持见Wiki
|
||||
└── "llama2"
|
||||
|
||||
|
||||
用户图形界面布局依赖关系示意图
|
||||
│
|
||||
├── CHATBOT_HEIGHT 对话窗的高度
|
||||
├── CODE_HIGHLIGHT 代码高亮
|
||||
├── LAYOUT 窗口布局
|
||||
├── DARK_MODE 暗色模式 / 亮色模式
|
||||
├── DEFAULT_FN_GROUPS 插件分类默认选项
|
||||
├── THEME 色彩主题
|
||||
├── AUTO_CLEAR_TXT 是否在提交时自动清空输入框
|
||||
├── ADD_WAIFU 加一个live2d装饰
|
||||
└── ALLOW_RESET_CONFIG 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性
|
||||
|
||||
|
||||
插件在线服务配置依赖关系示意图
|
||||
│
|
||||
├── 互联网检索
|
||||
│ └── SEARXNG_URLS
|
||||
│
|
||||
├── 语音功能
|
||||
│ ├── ENABLE_AUDIO
|
||||
│ ├── ALIYUN_TOKEN
|
||||
│ ├── ALIYUN_APPKEY
|
||||
│ ├── ALIYUN_ACCESSKEY
|
||||
│ └── ALIYUN_SECRET
|
||||
│
|
||||
└── PDF文档精准解析
|
||||
├── GROBID_URLS
|
||||
├── MATHPIX_APPID
|
||||
└── MATHPIX_APPKEY
|
||||
|
||||
|
||||
"""
|
||||
|
||||
config_private.py(新增文件)
@@ -0,0 +1,444 @@
|
||||
"""
|
||||
以下所有配置也都支持利用环境变量覆写,环境变量配置格式见docker-compose.yml。
|
||||
读取优先级:环境变量 > config_private.py > config.py
|
||||
--- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
|
||||
All the following configurations also support using environment variables to override,
|
||||
and the environment variable configuration format can be seen in docker-compose.yml.
|
||||
Configuration reading priority: environment variable > config_private.py > config.py
|
||||
"""
|
||||
|
||||
# [step 1-1]>> ( 接入GPT等模型 ) API_KEY = "sk-123456789xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx123456789"。极少数情况下,还需要填写组织(格式如org-123456789abcdefghijklmno的),请向下翻,找 API_ORG 设置项
|
||||
API_KEY = "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" # 可同时填写多个API-KEY,用英文逗号分割,例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey1,fkxxxx-api2dkey2"
|
||||
|
||||
# [step 1-2]>> ( 接入通义 qwen-max ) 接入通义千问在线大模型,api-key获取地址 https://dashscope.console.aliyun.com/
|
||||
DASHSCOPE_API_KEY = "" # 阿里灵积云API_KEY
|
||||
|
||||
# [step 1-3]>> ( 接入 deepseek-reasoner, 即 deepseek-r1 ) 深度求索(DeepSeek) API KEY,默认请求地址为"https://api.deepseek.com/v1/chat/completions"
|
||||
DEEPSEEK_API_KEY = "sk-d99b8cc6b7414cc88a5d950a3ff7585e"
|
||||
|
||||
# [step 2]>> 改为True应用代理。如果使用本地或无地域限制的大模型时,此处不修改;如果直接在海外服务器部署,此处不修改
|
||||
USE_PROXY = True
|
||||
if USE_PROXY:
|
||||
proxies = {
|
||||
"http":"socks5h://192.168.8.9:1070", # 再例如 "http": "http://127.0.0.1:7890",
|
||||
"https":"socks5h://192.168.8.9:1070", # 再例如 "https": "http://127.0.0.1:7890",
|
||||
}
|
||||
else:
|
||||
proxies = None
|
||||
DEFAULT_WORKER_NUM = 256
|
||||
|
||||
# [step 3]>> 模型选择 (注意: LLM_MODEL是默认选中的模型, 它*必须*被包含在AVAIL_LLM_MODELS列表中 )
|
||||
LLM_MODEL = "gpt-4-32k" # 可选 ↓↓↓
|
||||
AVAIL_LLM_MODELS = ["deepseek-chat", "deepseek-coder", "deepseek-reasoner",
|
||||
"gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-preview",
|
||||
"gpt-4o", "gpt-4o-mini", "gpt-4-turbo", "gpt-4-turbo-2024-04-09",
|
||||
"gpt-3.5-turbo-1106", "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
|
||||
"gpt-4", "gpt-4-32k", "azure-gpt-4", "glm-4", "glm-4v", "glm-3-turbo",
|
||||
"gemini-1.5-pro", "chatglm3", "chatglm4",
|
||||
]
|
||||
|
||||
EMBEDDING_MODEL = "text-embedding-3-small"
|
||||
|
||||
# --- --- --- ---
|
||||
# P.S. 其他可用的模型还包括
|
||||
# AVAIL_LLM_MODELS = [
|
||||
# "glm-4-0520", "glm-4-air", "glm-4-airx", "glm-4-flash",
|
||||
# "qianfan", "deepseekcoder",
|
||||
# "spark", "sparkv2", "sparkv3", "sparkv3.5", "sparkv4",
|
||||
# "qwen-turbo", "qwen-plus", "qwen-local",
|
||||
# "moonshot-v1-128k", "moonshot-v1-32k", "moonshot-v1-8k",
|
||||
# "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-0125", "gpt-4o-2024-05-13"
|
||||
# "claude-3-haiku-20240307","claude-3-sonnet-20240229","claude-3-opus-20240229", "claude-2.1", "claude-instant-1.2",
|
||||
# "moss", "llama2", "chatglm_onnx", "internlm", "jittorllms_pangualpha", "jittorllms_llama",
|
||||
# "deepseek-chat" ,"deepseek-coder",
|
||||
# "gemini-1.5-flash",
|
||||
# "yi-34b-chat-0205","yi-34b-chat-200k","yi-large","yi-medium","yi-spark","yi-large-turbo","yi-large-preview",
|
||||
# "grok-beta",
|
||||
# ]
|
||||
# --- --- --- ---
|
||||
# 此外,您还可以在接入one-api/vllm/ollama/Openroute时,
|
||||
# 使用"one-api-*","vllm-*","ollama-*","openrouter-*"前缀直接使用非标准方式接入的模型,例如
|
||||
# AVAIL_LLM_MODELS = ["one-api-claude-3-sonnet-20240229(max_token=100000)", "ollama-phi3(max_token=4096)","openrouter-openai/gpt-4o-mini","openrouter-openai/chatgpt-4o-latest"]
|
||||
# --- --- --- ---
|
||||
|
||||
|
||||
# --------------- 以下配置可以优化体验 ---------------
|
||||
|
||||
# 重新URL重新定向,实现更换API_URL的作用(高危设置! 常规情况下不要修改! 通过修改此设置,您将把您的API-KEY和对话隐私完全暴露给您设定的中间人!)
|
||||
# 格式: API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "在这里填写重定向的api.openai.com的URL"}
|
||||
# 举例: API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "https://reverse-proxy-url/v1/chat/completions", "http://localhost:11434/api/chat": "在这里填写您ollama的URL"}
|
||||
API_URL_REDIRECT = {}
|
||||
|
||||
|
||||
# 多线程函数插件中,默认允许多少路线程同时访问OpenAI。Free trial users的限制是每分钟3次,Pay-as-you-go users的限制是每分钟3500次
|
||||
# 一言以蔽之:免费(5刀)用户填3,OpenAI绑了信用卡的用户可以填 16 或者更高。提高限制请查询:https://platform.openai.com/docs/guides/rate-limits/overview
|
||||
DEFAULT_WORKER_NUM = 64
|
||||
|
||||
|
||||
# 色彩主题, 可选 ["Default", "Chuanhu-Small-and-Beautiful", "High-Contrast"]
|
||||
# 更多主题, 请查阅Gradio主题商店: https://huggingface.co/spaces/gradio/theme-gallery 可选 ["Gstaff/Xkcd", "NoCrypt/Miku", ...]
|
||||
THEME = "Default"
|
||||
AVAIL_THEMES = ["Default", "Chuanhu-Small-and-Beautiful", "High-Contrast", "Gstaff/Xkcd", "NoCrypt/Miku"]
|
||||
|
||||
FONT = "Theme-Default-Font"
|
||||
AVAIL_FONTS = [
|
||||
"默认值(Theme-Default-Font)",
|
||||
"宋体(SimSun)",
|
||||
"黑体(SimHei)",
|
||||
"楷体(KaiTi)",
|
||||
"仿宋(FangSong)",
|
||||
"华文细黑(STHeiti Light)",
|
||||
"华文楷体(STKaiti)",
|
||||
"华文仿宋(STFangsong)",
|
||||
"华文宋体(STSong)",
|
||||
"华文中宋(STZhongsong)",
|
||||
"华文新魏(STXinwei)",
|
||||
"华文隶书(STLiti)",
|
||||
"思源宋体(Source Han Serif CN VF@https://chinese-fonts-cdn.deno.dev/packages/syst/dist/SourceHanSerifCN/result.css)",
|
||||
"月星楷(Moon Stars Kai HW@https://chinese-fonts-cdn.deno.dev/packages/moon-stars-kai/dist/MoonStarsKaiHW-Regular/result.css)",
|
||||
"珠圆体(MaokenZhuyuanTi@https://chinese-fonts-cdn.deno.dev/packages/mkzyt/dist/猫啃珠圆体/result.css)",
|
||||
"平方萌萌哒(PING FANG MENG MNEG DA@https://chinese-fonts-cdn.deno.dev/packages/pfmmd/dist/平方萌萌哒/result.css)",
|
||||
"Helvetica",
|
||||
"ui-sans-serif",
|
||||
"sans-serif",
|
||||
"system-ui"
|
||||
]
|
||||
|
||||
|
||||
# 默认的系统提示词(system prompt)
|
||||
INIT_SYS_PROMPT = "Serve me as a writing and programming assistant."
|
||||
|
||||
|
||||
# 对话窗的高度 (仅在LAYOUT="TOP-DOWN"时生效)
|
||||
CHATBOT_HEIGHT = 1115
|
||||
|
||||
|
||||
# 代码高亮
|
||||
CODE_HIGHLIGHT = True
|
||||
|
||||
|
||||
# 窗口布局
|
||||
LAYOUT = "LEFT-RIGHT" # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局)
|
||||
|
||||
|
||||
# 暗色模式 / 亮色模式
|
||||
DARK_MODE = True
|
||||
|
||||
|
||||
# 发送请求到OpenAI后,等待多久判定为超时
|
||||
TIMEOUT_SECONDS = 60
|
||||
|
||||
|
||||
# 网页的端口, -1代表随机端口
|
||||
WEB_PORT = 19998
|
||||
|
||||
# 是否自动打开浏览器页面
|
||||
AUTO_OPEN_BROWSER = True
|
||||
|
||||
|
||||
# 如果OpenAI不响应(网络卡顿、代理失败、KEY失效),重试的次数限制
|
||||
MAX_RETRY = 5
|
||||
|
||||
|
||||
# 插件分类默认选项
|
||||
DEFAULT_FN_GROUPS = ['对话', '编程', '学术', '智能体']
|
||||
|
||||
|
||||
# 定义界面上“询问多个GPT模型”插件应该使用哪些模型,请从AVAIL_LLM_MODELS中选择,并在不同模型之间用`&`间隔,例如"gpt-3.5-turbo&chatglm3&azure-gpt-4"
|
||||
MULTI_QUERY_LLM_MODELS = "gpt-3.5-turbo&chatglm3"
|
||||
|
||||
|
||||
# 选择本地模型变体(只有当AVAIL_LLM_MODELS包含了对应本地模型时,才会起作用)
|
||||
# 如果你选择Qwen系列的模型,那么请在下面的QWEN_MODEL_SELECTION中指定具体的模型
|
||||
# 也可以是具体的模型路径
|
||||
QWEN_LOCAL_MODEL_SELECTION = "Qwen/Qwen-1_8B-Chat-Int8"
|
||||
|
||||
|
||||
# 百度千帆(LLM_MODEL="qianfan")
|
||||
BAIDU_CLOUD_API_KEY = ''
|
||||
BAIDU_CLOUD_SECRET_KEY = ''
|
||||
BAIDU_CLOUD_QIANFAN_MODEL = 'ERNIE-Bot' # 可选 "ERNIE-Bot-4"(文心大模型4.0), "ERNIE-Bot"(文心一言), "ERNIE-Bot-turbo", "BLOOMZ-7B", "Llama-2-70B-Chat", "Llama-2-13B-Chat", "Llama-2-7B-Chat", "ERNIE-Speed-128K", "ERNIE-Speed-8K", "ERNIE-Lite-8K"
|
||||
|
||||
|
||||
# 如果使用ChatGLM3或ChatGLM4本地模型,请把 LLM_MODEL="chatglm3" 或LLM_MODEL="chatglm4",并在此处指定模型路径
|
||||
CHATGLM_LOCAL_MODEL_PATH = "THUDM/glm-4-9b-chat" # 例如"/home/hmp/ChatGLM3-6B/"
|
||||
|
||||
# 如果使用ChatGLM2微调模型,请把 LLM_MODEL="chatglmft",并在此处指定模型路径
|
||||
CHATGLM_PTUNING_CHECKPOINT = "" # 例如"/home/hmp/ChatGLM2-6B/ptuning/output/6b-pt-128-1e-2/checkpoint-100"
|
||||
|
||||
|
||||
# 本地LLM模型如ChatGLM的执行方式 CPU/GPU
|
||||
LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda"
|
||||
LOCAL_MODEL_QUANT = "FP16" # 默认 "FP16" "INT4" 启用量化INT4版本 "INT8" 启用量化INT8版本
|
||||
|
||||
|
||||
# 设置gradio的并行线程数(不需要修改)
|
||||
CONCURRENT_COUNT = 100
|
||||
|
||||
|
||||
# 是否在提交时自动清空输入框
|
||||
AUTO_CLEAR_TXT = False
|
||||
|
||||
|
||||
# 加一个live2d装饰
|
||||
ADD_WAIFU = False
|
||||
|
||||
|
||||
# 设置用户名和密码(不需要修改)(相关功能不稳定,与gradio版本和网络都相关,如果本地使用不建议加这个)
|
||||
# [("username", "password"), ("username2", "password2"), ...]
|
||||
AUTHENTICATION = [("van", "L807878712"),("林", "L807878712"),("源", "L807878712"),("欣", "L807878712"),("z", "czh123456789")]
|
||||
|
||||
|
||||
# 如果需要在二级路径下运行(常规情况下,不要修改!!)
|
||||
# (举例 CUSTOM_PATH = "/gpt_academic",可以让软件运行在 http://ip:port/gpt_academic/ 下。)
|
||||
CUSTOM_PATH = "/"
|
||||
|
||||
|
||||
# HTTPS 秘钥和证书(不需要修改)
|
||||
SSL_KEYFILE = ""
|
||||
SSL_CERTFILE = ""
|
||||
|
||||
|
||||
# 极少数情况下,openai的官方KEY需要伴随组织编码(格式如org-xxxxxxxxxxxxxxxxxxxxxxxx)使用
|
||||
API_ORG = ""
|
||||
|
||||
|
||||
# 如果需要使用Slack Claude,使用教程详情见 request_llms/README.md
|
||||
SLACK_CLAUDE_BOT_ID = ''
|
||||
SLACK_CLAUDE_USER_TOKEN = ''
|
||||
|
||||
|
||||
# 如果需要使用AZURE(方法一:单个azure模型部署)详情请见额外文档 docs\use_azure.md
|
||||
AZURE_ENDPOINT = "https://你亲手写的api名称.openai.azure.com/"
|
||||
AZURE_API_KEY = "填入azure openai api的密钥" # 建议直接在API_KEY处填写,该选项即将被弃用
|
||||
AZURE_ENGINE = "填入你亲手写的部署名" # 读 docs\use_azure.md
|
||||
|
||||
|
||||
# 如果需要使用AZURE(方法二:多个azure模型部署+动态切换)详情请见额外文档 docs\use_azure.md
|
||||
AZURE_CFG_ARRAY = {}
|
||||
|
||||
|
||||
# 阿里云实时语音识别 配置难度较高
|
||||
# 参考 https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md
|
||||
ENABLE_AUDIO = False
|
||||
ALIYUN_TOKEN="" # 例如 f37f30e0f9934c34a992f6f64f7eba4f
|
||||
ALIYUN_APPKEY="" # 例如 RoPlZrM88DnAFkZK
|
||||
ALIYUN_ACCESSKEY="" # (无需填写)
|
||||
ALIYUN_SECRET="" # (无需填写)
|
||||
|
||||
|
||||
# GPT-SOVITS 文本转语音服务的运行地址(将语言模型的生成文本朗读出来)
|
||||
TTS_TYPE = "DISABLE" # EDGE_TTS / LOCAL_SOVITS_API / DISABLE
|
||||
GPT_SOVITS_URL = ""
|
||||
EDGE_TTS_VOICE = "zh-CN-XiaoxiaoNeural"
|
||||
|
||||
|
||||
# 接入讯飞星火大模型 https://console.xfyun.cn/services/iat
|
||||
XFYUN_APPID = "00000000"
|
||||
XFYUN_API_SECRET = "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
|
||||
XFYUN_API_KEY = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
|
||||
|
||||
|
||||
# 接入智谱大模型
|
||||
ZHIPUAI_API_KEY = ""
|
||||
ZHIPUAI_MODEL = "" # 此选项已废弃,不再需要填写
|
||||
|
||||
|
||||
# Claude API KEY
|
||||
ANTHROPIC_API_KEY = ""
|
||||
|
||||
|
||||
# 月之暗面 API KEY
|
||||
MOONSHOT_API_KEY = ""
|
||||
|
||||
|
||||
# 零一万物(Yi Model) API KEY
|
||||
YIMODEL_API_KEY = ""
|
||||
|
||||
|
||||
# 紫东太初大模型 https://ai-maas.wair.ac.cn
|
||||
TAICHU_API_KEY = ""
|
||||
|
||||
# Grok API KEY
|
||||
GROK_API_KEY = ""
|
||||
|
||||
# Mathpix 可对PDF执行OCR,但需要注册账号
|
||||
MATHPIX_APPID = ""
|
||||
MATHPIX_APPKEY = ""
|
||||
|
||||
|
||||
# DOC2X的PDF解析服务,注册账号并获取API KEY: https://doc2x.noedgeai.com/login
|
||||
DOC2X_API_KEY = ""
|
||||
|
||||
|
||||
# 自定义API KEY格式
|
||||
CUSTOM_API_KEY_PATTERN = ""
|
||||
|
||||
|
||||
# Google Gemini API-Key
|
||||
GEMINI_API_KEY = ''
|
||||
|
||||
|
||||
# HUGGINGFACE的TOKEN,下载LLAMA时起作用 https://huggingface.co/docs/hub/security-tokens
|
||||
HUGGINGFACE_ACCESS_TOKEN = "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"
|
||||
|
||||
|
||||
# GROBID服务器地址(填写多个可以均衡负载),用于高质量地读取PDF文档
|
||||
# 获取方法:复制以下空间https://huggingface.co/spaces/qingxu98/grobid,设为public,然后GROBID_URL = "https://(你的hf用户名如qingxu98)-(你的填写的空间名如grobid).hf.space"
|
||||
GROBID_URLS = [
|
||||
"https://qingxu98-grobid.hf.space","https://qingxu98-grobid2.hf.space","https://qingxu98-grobid3.hf.space",
|
||||
"https://qingxu98-grobid4.hf.space","https://qingxu98-grobid5.hf.space", "https://qingxu98-grobid6.hf.space",
|
||||
"https://qingxu98-grobid7.hf.space", "https://qingxu98-grobid8.hf.space",
|
||||
]
|
||||
|
||||
|
||||
# Searxng互联网检索服务(这是一个huggingface空间,请前往huggingface复制该空间,然后把自己新的空间地址填在这里)
|
||||
SEARXNG_URLS = [ f"https://kaletianlre-beardvs{i}dd.hf.space/" for i in range(1,5) ]
|
||||
|
||||
|
||||
# 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性,默认关闭
|
||||
ALLOW_RESET_CONFIG = False
|
||||
|
||||
|
||||
# 在使用AutoGen插件时,是否使用Docker容器运行代码
|
||||
AUTOGEN_USE_DOCKER = False
|
||||
|
||||
|
||||
# 临时的上传文件夹位置,请尽量不要修改
|
||||
PATH_PRIVATE_UPLOAD = "private_upload"
|
||||
|
||||
|
||||
# 日志文件夹的位置,请尽量不要修改
|
||||
PATH_LOGGING = "gpt_log"
|
||||
|
||||
|
||||
# 存储翻译好的arxiv论文的路径,请尽量不要修改
|
||||
ARXIV_CACHE_DIR = "gpt_log/arxiv_cache"
|
||||
|
||||
|
||||
# 除了连接OpenAI之外,还有哪些场合允许使用代理,请尽量不要修改
|
||||
WHEN_TO_USE_PROXY = ["Connect_OpenAI", "Download_LLM", "Download_Gradio_Theme", "Connect_Grobid",
|
||||
"Warmup_Modules", "Nougat_Download", "AutoGen", "Connect_OpenAI_Embedding"]
|
||||
|
||||
|
||||
# 启用插件热加载
|
||||
PLUGIN_HOT_RELOAD = False
|
||||
|
||||
|
||||
# 自定义按钮的最大数量限制
|
||||
NUM_CUSTOM_BASIC_BTN = 4
|
||||
|
||||
|
||||
# 媒体智能体的服务地址(这是一个huggingface空间,请前往huggingface复制该空间,然后把自己新的空间地址填在这里)
|
||||
DAAS_SERVER_URLS = [ f"https://niuziniu-biligpt{i}.hf.space/stream" for i in range(1,5) ]
|
||||
|
||||
|
||||
|
||||
"""
|
||||
--------------- 配置关联关系说明 ---------------
|
||||
|
||||
在线大模型配置关联关系示意图
|
||||
│
|
||||
├── "gpt-3.5-turbo" 等openai模型
|
||||
│ ├── API_KEY
|
||||
│ ├── CUSTOM_API_KEY_PATTERN(不常用)
|
||||
│ ├── API_ORG(不常用)
|
||||
│ └── API_URL_REDIRECT(不常用)
|
||||
│
|
||||
├── "azure-gpt-3.5" 等azure模型(单个azure模型,不需要动态切换)
|
||||
│ ├── API_KEY
|
||||
│ ├── AZURE_ENDPOINT
|
||||
│ ├── AZURE_API_KEY
|
||||
│ ├── AZURE_ENGINE
|
||||
│ └── API_URL_REDIRECT
|
||||
│
|
||||
├── "azure-gpt-3.5" 等azure模型(多个azure模型,需要动态切换,高优先级)
|
||||
│ └── AZURE_CFG_ARRAY
|
||||
│
|
||||
├── "spark" 星火认知大模型 spark & sparkv2
|
||||
│ ├── XFYUN_APPID
|
||||
│ ├── XFYUN_API_SECRET
|
||||
│ └── XFYUN_API_KEY
|
||||
│
|
||||
├── "claude-3-opus-20240229" 等claude模型
|
||||
│ └── ANTHROPIC_API_KEY
|
||||
│
|
||||
├── "stack-claude"
|
||||
│ ├── SLACK_CLAUDE_BOT_ID
|
||||
│ └── SLACK_CLAUDE_USER_TOKEN
|
||||
│
|
||||
├── "qianfan" 百度千帆大模型库
|
||||
│ ├── BAIDU_CLOUD_QIANFAN_MODEL
|
||||
│ ├── BAIDU_CLOUD_API_KEY
|
||||
│ └── BAIDU_CLOUD_SECRET_KEY
|
||||
│
|
||||
├── "glm-4", "glm-3-turbo", "zhipuai" 智谱AI大模型
|
||||
│ └── ZHIPUAI_API_KEY
|
||||
│
|
||||
├── "yi-34b-chat-0205", "yi-34b-chat-200k" 等零一万物(Yi Model)大模型
|
||||
│ └── YIMODEL_API_KEY
|
||||
│
|
||||
├── "qwen-turbo" 等通义千问大模型
|
||||
│ └── DASHSCOPE_API_KEY
|
||||
│
|
||||
├── "Gemini"
|
||||
│ └── GEMINI_API_KEY
|
||||
│
|
||||
└── "one-api-...(max_token=...)" 用一种更方便的方式接入one-api多模型管理界面
|
||||
├── AVAIL_LLM_MODELS
|
||||
├── API_KEY
|
||||
└── API_URL_REDIRECT
|
||||
|
||||
|
||||
本地大模型示意图
|
||||
│
|
||||
├── "chatglm4"
|
||||
├── "chatglm3"
|
||||
├── "chatglm"
|
||||
├── "chatglm_onnx"
|
||||
├── "chatglmft"
|
||||
├── "internlm"
|
||||
├── "moss"
|
||||
├── "jittorllms_pangualpha"
|
||||
├── "jittorllms_llama"
|
||||
├── "deepseekcoder"
|
||||
├── "qwen-local"
|
||||
├── RWKV的支持见Wiki
|
||||
└── "llama2"
|
||||
|
||||
|
||||
用户图形界面布局依赖关系示意图
|
||||
│
|
||||
├── CHATBOT_HEIGHT 对话窗的高度
|
||||
├── CODE_HIGHLIGHT 代码高亮
|
||||
├── LAYOUT 窗口布局
|
||||
├── DARK_MODE 暗色模式 / 亮色模式
|
||||
├── DEFAULT_FN_GROUPS 插件分类默认选项
|
||||
├── THEME 色彩主题
|
||||
├── AUTO_CLEAR_TXT 是否在提交时自动清空输入框
|
||||
├── ADD_WAIFU 加一个live2d装饰
|
||||
└── ALLOW_RESET_CONFIG 是否允许通过自然语言描述修改本页的配置,该功能具有一定的危险性
|
||||
|
||||
|
||||
插件在线服务配置依赖关系示意图
|
||||
│
|
||||
├── 互联网检索
|
||||
│ └── SEARXNG_URLS
|
||||
│
|
||||
├── 语音功能
|
||||
│ ├── ENABLE_AUDIO
|
||||
│ ├── ALIYUN_TOKEN
|
||||
│ ├── ALIYUN_APPKEY
|
||||
│ ├── ALIYUN_ACCESSKEY
|
||||
│ └── ALIYUN_SECRET
|
||||
│
|
||||
└── PDF文档精准解析
|
||||
├── GROBID_URLS
|
||||
├── MATHPIX_APPID
|
||||
└── MATHPIX_APPKEY
|
||||
|
||||
|
||||
"""
|
||||
|
||||
|
||||
|
||||
@@ -1,78 +1,175 @@
# 'primary' corresponds to primary_hue in theme.py
# 'secondary' corresponds to neutral_hue in theme.py
# 'stop' corresponds to color_er in theme.py
# The default button color is secondary
import importlib
from toolbox import clear_line_break
from toolbox import apply_gpt_academic_string_mask_langbased
from toolbox import build_gpt_academic_masked_string_langbased
from textwrap import dedent


def get_core_functions():
    return {
        "英语学术润色": {
            # Preamble
            "Prefix":   r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, " +
                        r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. " +
                        r"Furthermore, list all modifications and explain the reasons to do so in a markdown table." + "\n\n",
            # Postscript
            "Suffix":   r"",
        },
        "学术语料润色": {
            # [1*] Prefix string, prepended to your input; used to state the request, e.g. translate, explain code, polish.
            # A single prompt string would suffice; it is a bit more involved here in order to distinguish Chinese and English scenarios.
            "Prefix":   build_gpt_academic_masked_string_langbased(
                            text_show_english=
                                r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, "
                                r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. "
                                r"Firstly, you should provide the polished paragraph (in English). "
                                r"Secondly, you should list all your modifications and explain the reasons to do so in a markdown table.",
                            text_show_chinese=
                                r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性,"
                                r"同时分解长句,减少重复,并提供改进建议。请先提供文本的更正版本,然后在markdown表格中列出修改的内容,并给出修改的理由:"
                        ) + "\n\n",
            # [2*] Suffix string, appended to your input; together with the prefix it can, e.g., wrap your input in quotes.
            "Suffix":   r"",
            # [3] Button color (optional; default: secondary)
            "Color":    r"secondary",
            # [4] Whether the button is visible (optional; default: True, i.e. visible)
            "Visible":  True,
            # [5] Whether to clear the dialog history when triggered (optional; default: False, i.e. keep the previous history)
            "AutoClearHistory": False,
            # [6] Text preprocessing (optional; default: None; e.g. a function that removes all line breaks)
            "PreProcess": None,
            # [7] Model selection (optional; if unset, the current global model is used; if set, it overrides the global model.)
            # "ModelOverride": "gpt-3.5-turbo",  # main use: force this button to use the specified model
        },
        "中文学术润色": {
            "Prefix":   r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性," +
                        r"同时分解长句,减少重复,并提供改进建议。请只提供文本的更正版本,避免包括解释。请编辑以下文本" + "\n\n",
            "Suffix":   r"",
        },
        "总结绘制脑图": {
            # Prefix, prepended to your input; used to state the request, e.g. translate, explain code, polish
            "Prefix":   '''"""\n\n''',
            # Suffix, appended to your input; together with the prefix it can wrap your input in quotes
            "Suffix":
                # dedent() strips the common indentation from a multi-line string
                dedent("\n\n" + r'''
                    """

                    使用mermaid flowchart对以上文本进行总结,概括上述段落的内容以及内在逻辑关系,例如:

                    以下是对以上文本的总结,以mermaid flowchart的形式展示:
                    ```mermaid
                    flowchart LR
                        A["节点名1"] --> B("节点名2")
                        B --> C{"节点名3"}
                        C --> D["节点名4"]
                        C --> |"箭头名1"| E["节点名5"]
                        C --> |"箭头名2"| F["节点名6"]
                    ```

                    注意:
                    (1)使用中文
                    (2)节点名字使用引号包裹,如["Laptop"]
                    (3)`|` 和 `"`之间不要存在空格
                    (4)根据情况选择flowchart LR(从左到右)或者flowchart TD(从上到下)
                '''),
        },
        "查找语法错误": {
            "Prefix":   r"Help me ensure that the grammar and the spelling is correct. "
                        r"Do not try to polish the text; if no mistake is found, tell me that this paragraph is good. "
                        r"If you find grammar or spelling mistakes, please list the mistakes you find in a two-column markdown table: "
                        r"put the original text in the first column, "
                        r"put the corrected text in the second column, and highlight the key words you fixed. "
                        r"Finally, please provide the proofread text.""\n\n"
                        r"Example:""\n"
                        r"Paragraph: How is you? Do you knows what is it?""\n"
                        r"| Original sentence | Corrected sentence |""\n"
                        r"| :--- | :--- |""\n"
                        r"| How **is** you? | How **are** you? |""\n"
                        r"| Do you **knows** what **is** **it**? | Do you **know** what **it** **is** ? |""\n\n"
                        r"Below is a paragraph from an academic paper. "
                        r"You need to report all grammar and spelling mistakes as in the example above."
                        + "\n\n",
            "Suffix":   r"",
            "PreProcess": clear_line_break,  # preprocessing: remove line breaks
        },
        "中译英": {
            "Prefix":   r"Please translate the following sentence to English:" + "\n\n",
            "Suffix":   r"",
        },
        "学术中英互译": {
            "Prefix":   r"I want you to act as a scientific English-Chinese translator; " +
                        r"I will provide you with some paragraphs in one language " +
                        r"and your task is to accurately and academically translate the paragraphs only into the other language. " +
                        r"Do not repeat the original provided paragraphs after translation. " +
                        r"You should use artificial intelligence tools, " +
                        r"such as natural language processing, and rhetorical knowledge " +
                        r"and experience about effective writing techniques to reply. " +
                        r"I'll give you my paragraphs as follows: tell me what language they are written in, and then translate:" + "\n\n",
            "Suffix":   "",
            "Color":    "secondary",
        },
        "学术英中互译": {
            "Prefix":   build_gpt_academic_masked_string_langbased(
                            text_show_chinese=
                                r"I want you to act as a scientific English-Chinese translator; "
                                r"I will provide you with some paragraphs in one language "
                                r"and your task is to accurately and academically translate the paragraphs only into the other language. "
                                r"Do not repeat the original provided paragraphs after translation. "
                                r"You should use artificial intelligence tools, "
                                r"such as natural language processing, and rhetorical knowledge "
                                r"and experience about effective writing techniques to reply. "
                                r"I'll give you my paragraphs as follows: tell me what language they are written in, and then translate:",
                            text_show_english=
                                r"你是经验丰富的翻译,请把以下学术文章段落翻译成中文,"
                                r"并同时充分考虑中文的语法、清晰、简洁和整体可读性,"
                                r"必要时,你可以修改整个句子的顺序以确保翻译后的段落符合中文的语言习惯。"
                                r"你需要翻译的文本如下:"
                        ) + "\n\n",
            "Suffix":   r"",
        },
        "英译中": {
            "Prefix":   r"翻译成地道的中文:" + "\n\n",
            "Suffix":   r"",
            "Visible":  False,
        },
        "找图片": {
            "Prefix":   r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL,"
                        r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片:" + "\n\n",
            "Suffix":   r"",
            "Visible":  False,
        },
        "解释代码": {
            "Prefix":   r"请解释以下代码:" + "\n```\n",
            "Suffix":   "\n```\n",
        },
        "参考文献转Bib": {
            "Prefix":   r"Here are some bibliography items, please transform them into bibtex style. "
                        r"Note that reference styles may be of more than one kind; you should transform each item correctly. "
                        r"Items need to be transformed:" + "\n\n",
            "Suffix":   r"",
            "Visible":  False,
        }
    }


def handle_core_functionality(additional_fn, inputs, history, chatbot):
    import core_functional
    importlib.reload(core_functional)    # hot-reload the prompts
    core_functional = core_functional.get_core_functions()
    addition = chatbot._cookies['customize_fn_overwrite']
    if additional_fn in addition:
        # user-customized function
        inputs = addition[additional_fn]["Prefix"] + inputs + addition[additional_fn]["Suffix"]
        return inputs, history
    else:
        # built-in function
        if "PreProcess" in core_functional[additional_fn]:
            if core_functional[additional_fn]["PreProcess"] is not None:
                inputs = core_functional[additional_fn]["PreProcess"](inputs)  # apply the preprocessing function, if any
        # Attach the prefix and suffix defined above to the string.
        inputs = apply_gpt_academic_string_mask_langbased(
            string=core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"],
            lang_reference=inputs,
        )
        if core_functional[additional_fn].get("AutoClearHistory", False):
            history = []
        return inputs, history


if __name__ == "__main__":
    t = get_core_functions()["总结绘制脑图"]
    print(t["Prefix"] + t["Suffix"])
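
To make the button semantics above concrete: when a core-function button is clicked, `handle_core_functionality` wraps the user's text with the entry's `Prefix` and `Suffix`, after optional preprocessing. A minimal sketch exercising one entry directly (the chatbot cookie lookup and the language-based masking are left out):

```python
# Sketch: how a core-function entry rewrites the user's input.
# Mirrors handle_core_functionality above, minus the cookie machinery.
from core_functional import get_core_functions

fn = get_core_functions()["查找语法错误"]
user_text = "How is you? Do you knows what is it?"

if fn.get("PreProcess") is not None:   # here: clear_line_break
    user_text = fn["PreProcess"](user_text)
final_prompt = fn["Prefix"] + user_text + fn["Suffix"]
print(final_prompt)
```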
File diff suppressed because it is too large

220  crazy_functions/Conversation_To_File.py  Normal file
@@ -0,0 +1,220 @@
from toolbox import CatchException, update_ui, promote_file_to_downloadzone, get_log_folder, get_user
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty
import re

f_prefix = 'GPT-Academic对话存档'


def write_chat_to_file(chatbot, history=None, file_name=None):
    """
    Write the dialog record `history` to an HTML file. If no file name is given, one is generated from the current time.
    """
    import os
    import time
    from themes.theme import advanced_css

    if file_name is None:
        file_name = f_prefix + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.html'
    fp = os.path.join(get_log_folder(get_user(chatbot), plugin_name='chat_history'), file_name)

    with open(fp, 'w', encoding='utf8') as f:
        from textwrap import dedent
        form = dedent("""
            <!DOCTYPE html><head><meta charset="utf-8"><title>对话存档</title><style>{CSS}</style></head>
            <body>
            <div class="test_temp1" style="width:10%; height: 500px; float:left;"></div>
            <div class="test_temp2" style="width:80%;padding: 40px;float:left;padding-left: 20px;padding-right: 20px;box-shadow: rgba(0, 0, 0, 0.2) 0px 0px 8px 8px;border-radius: 10px;">
                <div class="chat-body" style="display: flex;justify-content: center;flex-direction: column;align-items: center;flex-wrap: nowrap;">
                    {CHAT_PREVIEW}
                    <div></div>
                    <div></div>
                    <div style="text-align: center;width:80%;padding: 0px;float:left;padding-left:20px;padding-right:20px;box-shadow: rgba(0, 0, 0, 0.05) 0px 0px 1px 2px;border-radius: 1px;">对话(原始数据)</div>
                    {HISTORY_PREVIEW}
                </div>
            </div>
            <div class="test_temp3" style="width:10%; height: 500px; float:left;"></div>
            </body>
        """)

        qa_from = dedent("""
            <div class="QaBox" style="width:80%;padding: 20px;margin-bottom: 20px;box-shadow: rgb(0 255 159 / 50%) 0px 0px 1px 2px;border-radius: 4px;">
                <div class="Question" style="border-radius: 2px;">{QUESTION}</div>
                <hr color="blue" style="border-top: dotted 2px #ccc;">
                <div class="Answer" style="border-radius: 2px;">{ANSWER}</div>
            </div>
        """)

        history_from = dedent("""
            <div class="historyBox" style="width:80%;padding: 0px;float:left;padding-left:20px;padding-right:20px;box-shadow: rgba(0, 0, 0, 0.05) 0px 0px 1px 2px;border-radius: 1px;">
                <div class="entry" style="border-radius: 2px;">{ENTRY}</div>
            </div>
        """)

        CHAT_PREVIEW_BUF = ""
        for i, contents in enumerate(chatbot):
            question, answer = contents[0], contents[1]
            if question is None: question = ""
            try: question = str(question)
            except: question = ""
            if answer is None: answer = ""
            try: answer = str(answer)
            except: answer = ""
            CHAT_PREVIEW_BUF += qa_from.format(QUESTION=question, ANSWER=answer)

        HISTORY_PREVIEW_BUF = ""
        for h in history:
            HISTORY_PREVIEW_BUF += history_from.format(ENTRY=h)
        html_content = form.format(CHAT_PREVIEW=CHAT_PREVIEW_BUF, HISTORY_PREVIEW=HISTORY_PREVIEW_BUF, CSS=advanced_css)
        f.write(html_content)

    promote_file_to_downloadzone(fp, rename_file=file_name, chatbot=chatbot)
    return '对话历史写入:' + fp


def gen_file_preview(file_name):
    try:
        with open(file_name, 'r', encoding='utf8') as f:
            file_content = f.read()
        # pattern to match the text between <head> and </head>
        pattern = re.compile(r'<head>.*?</head>', flags=re.DOTALL)
        file_content = re.sub(pattern, '', file_content)
        html, history = file_content.split('<hr color="blue"> \n\n 对话数据 (无渲染):\n')
        history = history.strip('<code>')
        history = history.strip('</code>')
        history = history.split("\n>>>")
        return list(filter(lambda x: x != "", history))[0][:100]
    except:
        return ""


def read_file_to_chat(chatbot, history, file_name):
    with open(file_name, 'r', encoding='utf8') as f:
        file_content = f.read()
    from bs4 import BeautifulSoup
    soup = BeautifulSoup(file_content, 'lxml')
    # extract the QaBox entries
    chatbot.clear()
    qa_box_list = []
    qa_boxes = soup.find_all("div", class_="QaBox")
    for box in qa_boxes:
        question = box.find("div", class_="Question").get_text(strip=False)
        answer = box.find("div", class_="Answer").get_text(strip=False)
        qa_box_list.append({"Question": question, "Answer": answer})
        chatbot.append([question, answer])
    # extract the historyBox entries
    history_box_list = []
    history_boxes = soup.find_all("div", class_="historyBox")
    for box in history_boxes:
        entry = box.find("div", class_="entry").get_text(strip=False)
        history_box_list.append(entry)
    history = history_box_list
    chatbot.append([None, f"[Local Message] 载入对话{len(qa_box_list)}条,上下文{len(history)}条。"])
    return chatbot, history


@CatchException
def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt             Text entered by the user in the input box, e.g. a paragraph to be translated, or a path containing files to be processed
    llm_kwargs      GPT model parameters such as temperature and top_p; usually passed through unchanged
    plugin_kwargs   Plugin parameters; currently unused
    chatbot         Handle of the chat display box, used to show output to the user
    history         Chat history (prior context)
    system_prompt   Silent system prompt given to the GPT
    user_request    Information about the current user's request (IP address, etc.)
    """
    file_name = plugin_kwargs.get("file_name", None)
    if (file_name is not None) and (file_name != ""):
        if not file_name.endswith('.html'): file_name += '.html'
    else:
        file_name = None

    chatbot.append((None, f"[Local Message] {write_chat_to_file(chatbot, history, file_name)},您可以调用下拉菜单中的“载入对话历史存档”还原当下的对话。"))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI; requesting the GPT takes a while, so update promptly


class Conversation_To_File_Wrap(GptAcademicPluginTemplate):
    def __init__(self):
        """
        Note that `execute` runs in a different thread, so be very careful when defining and using class variables!
        """
        pass

    def define_arg_selection_menu(self):
        """
        Define the plugin's secondary option menu.

        The first argument is named `file_name`; its `type` declares a text box, `title` is shown above the box, `description` inside it, and `default_value` is the default.
        """
        gui_definition = {
            "file_name": ArgProperty(title="保存文件名", description="输入对话存档文件名,留空则使用时间作为文件名", default_value="", type="string").model_dump_json(),  # main input, synced automatically from the input box
        }
        return gui_definition

    def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        """
        Run the plugin
        """
        yield from 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)


def hide_cwd(str):
    import os
    current_path = os.getcwd()
    replace_path = "."
    return str.replace(current_path, replace_path)


@CatchException
def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt             Text entered by the user in the input box, e.g. a paragraph to be translated, or a path containing files to be processed
    llm_kwargs      GPT model parameters such as temperature and top_p; usually passed through unchanged
    plugin_kwargs   Plugin parameters; currently unused
    chatbot         Handle of the chat display box, used to show output to the user
    history         Chat history (prior context)
    system_prompt   Silent system prompt given to the GPT
    user_request    Information about the current user's request (IP address, etc.)
    """
    from crazy_functions.crazy_utils import get_files_from_everything
    success, file_manifest, _ = get_files_from_everything(txt, type='.html')

    if not success:
        if txt == "": txt = '空空如也的输入栏'
        import glob
        local_history = "<br/>".join([
            "`" + hide_cwd(f) + f" ({gen_file_preview(f)})" + "`"
            for f in glob.glob(
                f'{get_log_folder(get_user(chatbot), plugin_name="chat_history")}/**/{f_prefix}*.html',
                recursive=True
            )])
        chatbot.append([f"正在查找对话历史文件(html格式): {txt}", f"找不到任何html文件: {txt}。但本地存储了以下历史文件,您可以将任意一个文件路径粘贴到输入区,然后重试:<br/>{local_history}"])
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
        return

    try:
        chatbot, history = read_file_to_chat(chatbot, history, file_manifest[0])
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
    except:
        chatbot.append([f"载入对话历史文件", f"对话历史文件损坏!"])
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
        return


@CatchException
def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt             Text entered by the user in the input box, e.g. a paragraph to be translated, or a path containing files to be processed
    llm_kwargs      GPT model parameters such as temperature and top_p; usually passed through unchanged
    plugin_kwargs   Plugin parameters; currently unused
    chatbot         Handle of the chat display box, used to show output to the user
    history         Chat history (prior context)
    system_prompt   Silent system prompt given to the GPT
    user_request    Information about the current user's request (IP address, etc.)
    """
    import glob, os
    local_history = "<br/>".join([
        "`" + hide_cwd(f) + "`"
        for f in glob.glob(
            f'{get_log_folder(get_user(chatbot), plugin_name="chat_history")}/**/{f_prefix}*.html', recursive=True
        )])
    for f in glob.glob(f'{get_log_folder(get_user(chatbot), plugin_name="chat_history")}/**/{f_prefix}*.html', recursive=True):
        os.remove(f)
    chatbot.append([f"删除所有历史对话文件", f"已删除<br/>{local_history}"])
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
    return
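
The archive round-trip above hinges on fixed CSS class names: `write_chat_to_file` emits `QaBox`/`Question`/`Answer` and `historyBox`/`entry` divs, and `read_file_to_chat` recovers the conversation by querying exactly those classes. A self-contained sketch of the parsing half (same bs4/lxml dependencies as the plugin):

```python
# Sketch: recovering a conversation from an archive fragment,
# mirroring read_file_to_chat above.
from bs4 import BeautifulSoup

html = '''
<div class="QaBox"><div class="Question">Hi</div>
<hr><div class="Answer">Hello!</div></div>
<div class="historyBox"><div class="entry">Hi</div></div>
'''
soup = BeautifulSoup(html, 'lxml')
chat = [(box.find("div", class_="Question").get_text(strip=False),
         box.find("div", class_="Answer").get_text(strip=False))
        for box in soup.find_all("div", class_="QaBox")]
history = [box.find("div", class_="entry").get_text(strip=False)
           for box in soup.find_all("div", class_="historyBox")]
print(chat, history)
```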
276  crazy_functions/Image_Generate.py  Normal file
@@ -0,0 +1,276 @@
from toolbox import CatchException, update_ui, get_conf, select_api_key, get_log_folder
from crazy_functions.multi_stage.multi_stage_utils import GptAcademicState


def gen_image(llm_kwargs, prompt, resolution="1024x1024", model="dall-e-2", quality=None, style=None):
    import requests, json, time, os
    from request_llms.bridge_all import model_info

    proxies = get_conf('proxies')
    # Set up OpenAI API key and model
    api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model'])
    chat_endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
    # 'https://api.openai.com/v1/chat/completions'
    img_endpoint = chat_endpoint.replace('chat/completions', 'images/generations')
    # Generate the image
    url = img_endpoint
    headers = {
        'Authorization': f"Bearer {api_key}",
        'Content-Type': 'application/json'
    }
    data = {
        'prompt': prompt,
        'n': 1,
        'size': resolution,
        'model': model,
        'response_format': 'url'
    }
    if quality is not None:
        data['quality'] = quality
    if style is not None:
        data['style'] = style
    response = requests.post(url, headers=headers, json=data, proxies=proxies)
    # logger.info(response.content)
    try:
        image_url = json.loads(response.content.decode('utf8'))['data'][0]['url']
    except:
        raise RuntimeError(response.content.decode())
    # save the file locally
    r = requests.get(image_url, proxies=proxies)
    file_path = f'{get_log_folder()}/image_gen/'
    os.makedirs(file_path, exist_ok=True)
    file_name = 'Image' + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.png'
    with open(file_path + file_name, 'wb+') as f: f.write(r.content)

    return image_url, file_path + file_name


def edit_image(llm_kwargs, prompt, image_path, resolution="1024x1024", model="dall-e-2"):
    import requests, json, time, os
    from request_llms.bridge_all import model_info

    proxies = get_conf('proxies')
    api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model'])
    chat_endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
    # 'https://api.openai.com/v1/chat/completions'
    img_endpoint = chat_endpoint.replace('chat/completions', 'images/edits')
    # Generate the image
    url = img_endpoint
    n = 1
    headers = {
        'Authorization': f"Bearer {api_key}",
    }
    make_transparent(image_path, image_path + '.tsp.png')
    make_square_image(image_path + '.tsp.png', image_path + '.tspsq.png')
    resize_image(image_path + '.tspsq.png', image_path + '.ready.png', max_size=1024)
    image_path = image_path + '.ready.png'
    with open(image_path, 'rb') as f:
        file_content = f.read()
    files = {
        'image': (os.path.basename(image_path), file_content),
        # 'mask': ('mask.png', open('mask.png', 'rb'))
        'prompt': (None, prompt),
        "n": (None, str(n)),
        'size': (None, resolution),
    }

    response = requests.post(url, headers=headers, files=files, proxies=proxies)
    # logger.info(response.content)
    try:
        image_url = json.loads(response.content.decode('utf8'))['data'][0]['url']
    except:
        raise RuntimeError(response.content.decode())
    # save the file locally
    r = requests.get(image_url, proxies=proxies)
    file_path = f'{get_log_folder()}/image_gen/'
    os.makedirs(file_path, exist_ok=True)
    file_name = 'Image' + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.png'
    with open(file_path + file_name, 'wb+') as f: f.write(r.content)

    return image_url, file_path + file_name


@CatchException
def 图片生成_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    prompt          Text entered by the user in the input box, here the image description
    llm_kwargs      GPT model parameters such as temperature and top_p; usually passed through unchanged
    plugin_kwargs   Plugin parameters
    chatbot         Handle of the chat display box, used to show output to the user
    history         Chat history (prior context)
    system_prompt   Silent system prompt given to the GPT
    user_request    Information about the current user's request (IP address, etc.)
    """
    history = []  # clear the history to avoid input overflow
    if prompt.strip() == "":
        chatbot.append((prompt, "[Local Message] 图像生成提示为空白,请在“输入区”输入图像生成提示。"))
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
        return
    chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 使用前请切换模型到GPT系列。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI; requesting the GPT takes a while, so update promptly
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    resolution = plugin_kwargs.get("advanced_arg", '1024x1024')
    image_url, image_path = gen_image(llm_kwargs, prompt, resolution)
    chatbot.append([prompt,
                    f'图像中转网址: <br/>`{image_url}`<br/>' +
                    f'中转网址预览: <br/><div align="center"><img src="{image_url}"></div>'
                    f'本地文件地址: <br/>`{image_path}`<br/>' +
                    f'本地文件预览: <br/><div align="center"><img src="file={image_path}"></div>'
                    ])
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI


@CatchException
def 图片生成_DALLE3(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    history = []  # clear the history to avoid input overflow
    if prompt.strip() == "":
        chatbot.append((prompt, "[Local Message] 图像生成提示为空白,请在“输入区”输入图像生成提示。"))
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
        return
    chatbot.append(("您正在调用“图像生成”插件。", "[Local Message] 生成图像, 使用前请切换模型到GPT系列。如果中文Prompt效果不理想, 请尝试英文Prompt。正在处理中 ....."))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI; requesting the GPT takes a while, so update promptly
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    resolution_arg = plugin_kwargs.get("advanced_arg", '1024x1024-standard-vivid').lower()
    parts = resolution_arg.split('-')
    resolution = parts[0]   # parse the resolution
    quality = 'standard'    # default quality and style
    style = 'vivid'
    # check for extra parameters
    for part in parts[1:]:
        if part in ['hd', 'standard']:
            quality = part
        elif part in ['vivid', 'natural']:
            style = part
    image_url, image_path = gen_image(llm_kwargs, prompt, resolution, model="dall-e-3", quality=quality, style=style)
    chatbot.append([prompt,
                    f'图像中转网址: <br/>`{image_url}`<br/>' +
                    f'中转网址预览: <br/><div align="center"><img src="{image_url}"></div>'
                    f'本地文件地址: <br/>`{image_path}`<br/>' +
                    f'本地文件预览: <br/><div align="center"><img src="file={image_path}"></div>'
                    ])
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI


class ImageEditState(GptAcademicState):
    # work in progress
    def get_image_file(self, x):
        import os, glob
        if len(x) == 0: return False, None
        if not os.path.exists(x): return False, None
        if x.endswith('.png'): return True, x
        file_manifest = [f for f in glob.glob(f'{x}/**/*.png', recursive=True)]
        confirm = (len(file_manifest) >= 1 and file_manifest[0].endswith('.png') and os.path.exists(file_manifest[0]))
        file = None if not confirm else file_manifest[0]
        return confirm, file

    def lock_plugin(self, chatbot):
        chatbot._cookies['lock_plugin'] = 'crazy_functions.Image_Generate->图片修改_DALLE2'
        self.dump_state(chatbot)

    def unlock_plugin(self, chatbot):
        self.reset()
        chatbot._cookies['lock_plugin'] = None
        self.dump_state(chatbot)

    def get_resolution(self, x):
        return (x in ['256x256', '512x512', '1024x1024']), x

    def get_prompt(self, x):
        confirm = (len(x) >= 5) and (not self.get_resolution(x)[0]) and (not self.get_image_file(x)[0])
        return confirm, x

    def reset(self):
        self.req = [
            {'value': None, 'description': '请先上传图像(必须是.png格式), 然后再次点击本插件', 'verify_fn': self.get_image_file},
            {'value': None, 'description': '请输入分辨率,可选:256x256, 512x512 或 1024x1024, 然后再次点击本插件', 'verify_fn': self.get_resolution},
            {'value': None, 'description': '请输入修改需求,建议您使用英文提示词, 然后再次点击本插件', 'verify_fn': self.get_prompt},
        ]
        self.info = ""

    def feed(self, prompt, chatbot):
        for r in self.req:
            if r['value'] is None:
                confirm, res = r['verify_fn'](prompt)
                if confirm:
                    r['value'] = res
                    self.dump_state(chatbot)
                    break
        return self

    def next_req(self):
        for r in self.req:
            if r['value'] is None:
                return r['description']
        return "已经收集到所有信息"

    def already_obtained_all_materials(self):
        return all([x['value'] is not None for x in self.req])


@CatchException
def 图片修改_DALLE2(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # work in progress
    history = []  # clear the history
    state = ImageEditState.get_state(chatbot, ImageEditState)
    state = state.feed(prompt, chatbot)
    state.lock_plugin(chatbot)
    if not state.already_obtained_all_materials():
        chatbot.append(["图片修改\n\n1. 上传图片(图片中需要修改的位置用橡皮擦擦除为纯白色,即RGB=255,255,255)\n2. 输入分辨率 \n3. 输入修改需求", state.next_req()])
        yield from update_ui(chatbot=chatbot, history=history)
        return

    image_path = state.req[0]['value']
    resolution = state.req[1]['value']
    prompt = state.req[2]['value']
    chatbot.append(["图片修改, 执行中", f"图片:`{image_path}`<br/>分辨率:`{resolution}`<br/>修改需求:`{prompt}`"])
    yield from update_ui(chatbot=chatbot, history=history)
    image_url, image_path = edit_image(llm_kwargs, prompt, image_path, resolution)
    chatbot.append([prompt,
                    f'图像中转网址: <br/>`{image_url}`<br/>' +
                    f'中转网址预览: <br/><div align="center"><img src="{image_url}"></div>'
                    f'本地文件地址: <br/>`{image_path}`<br/>' +
                    f'本地文件预览: <br/><div align="center"><img src="file={image_path}"></div>'
                    ])
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
    state.unlock_plugin(chatbot)


def make_transparent(input_image_path, output_image_path):
    from PIL import Image
    image = Image.open(input_image_path)
    image = image.convert("RGBA")
    data = image.getdata()
    new_data = []
    for item in data:
        if item[0] == 255 and item[1] == 255 and item[2] == 255:
            new_data.append((255, 255, 255, 0))
        else:
            new_data.append(item)
    image.putdata(new_data)
    image.save(output_image_path, "PNG")


def resize_image(input_path, output_path, max_size=1024):
    from PIL import Image
    with Image.open(input_path) as img:
        width, height = img.size
        if width > max_size or height > max_size:
            if width >= height:
                new_width = max_size
                new_height = int((max_size / width) * height)
            else:
                new_height = max_size
                new_width = int((max_size / height) * width)

            resized_img = img.resize(size=(new_width, new_height))
            resized_img.save(output_path)
        else:
            img.save(output_path)


def make_square_image(input_path, output_path):
    from PIL import Image
    with Image.open(input_path) as img:
        width, height = img.size
        size = max(width, height)
        new_img = Image.new("RGBA", (size, size), color="black")
        new_img.paste(img, ((size - width) // 2, (size - height) // 2))
        new_img.save(output_path)
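
For orientation, this is the request shape `gen_image` above ends up sending: the image endpoint is derived from the configured chat endpoint by string replacement, and the body is plain JSON. A hedged sketch (OPENAI_API_KEY is a placeholder for whatever `select_api_key` returns; proxy handling is omitted):

```python
# Sketch of the DALL-E generation request built by gen_image above.
# OPENAI_API_KEY is a placeholder, not a value from this repository.
import requests

chat_endpoint = "https://api.openai.com/v1/chat/completions"
img_endpoint = chat_endpoint.replace('chat/completions', 'images/generations')

response = requests.post(
    img_endpoint,
    headers={'Authorization': "Bearer OPENAI_API_KEY",
             'Content-Type': 'application/json'},
    json={'prompt': "a watercolor fox", 'n': 1, 'size': "1024x1024",
          'model': "dall-e-2", 'response_format': 'url'},
)
print(response.json()['data'][0]['url'])
```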
56  crazy_functions/Image_Generate_Wrap.py  Normal file
@@ -0,0 +1,56 @@
from toolbox import get_conf, update_ui
from crazy_functions.Image_Generate import 图片生成_DALLE2, 图片生成_DALLE3, 图片修改_DALLE2
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty


class ImageGen_Wrap(GptAcademicPluginTemplate):
    def __init__(self):
        """
        Note that `execute` runs in a different thread, so be very careful when defining and using class variables!
        """
        pass

    def define_arg_selection_menu(self):
        """
        Define the plugin's secondary option menu.

        The first argument is named `main_input`; its `type` declares a text box, `title` is shown above the box, `description` inside it, and `default_value` is the default;
        the second argument is named `advanced_arg` and follows the same text-box convention.
        """
        gui_definition = {
            "main_input":
                ArgProperty(title="输入图片描述", description="需要生成图像的文本描述,尽量使用英文", default_value="", type="string").model_dump_json(),  # main input, synced automatically from the input box
            "model_name":
                ArgProperty(title="模型", options=["DALLE2", "DALLE3"], default_value="DALLE3", description="无", type="dropdown").model_dump_json(),
            "resolution":
                ArgProperty(title="分辨率", options=["256x256(限DALLE2)", "512x512(限DALLE2)", "1024x1024", "1792x1024(限DALLE3)", "1024x1792(限DALLE3)"], default_value="1024x1024", description="无", type="dropdown").model_dump_json(),
            "quality (仅DALLE3生效)":
                ArgProperty(title="质量", options=["standard", "hd"], default_value="standard", description="无", type="dropdown").model_dump_json(),
            "style (仅DALLE3生效)":
                ArgProperty(title="风格", options=["vivid", "natural"], default_value="vivid", description="无", type="dropdown").model_dump_json(),
        }
        return gui_definition

    def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        """
        Run the plugin
        """
        # resolution
        resolution = plugin_kwargs["resolution"].replace("(限DALLE2)", "").replace("(限DALLE3)", "")

        if plugin_kwargs["model_name"] == "DALLE2":
            plugin_kwargs["advanced_arg"] = resolution
            yield from 图片生成_DALLE2(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)

        elif plugin_kwargs["model_name"] == "DALLE3":
            quality = plugin_kwargs["quality (仅DALLE3生效)"]
            style = plugin_kwargs["style (仅DALLE3生效)"]
            plugin_kwargs["advanced_arg"] = f"{resolution}-{quality}-{style}"
            yield from 图片生成_DALLE3(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)

        else:
            chatbot.append([None, "抱歉,找不到该模型"])
            yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
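
The wrapper and the plugin communicate through a single packed string: the wrapper joins resolution, quality, and style with dashes into `advanced_arg`, and 图片生成_DALLE3 splits it back apart. A sketch of that round trip:

```python
# Sketch: packing and unpacking of advanced_arg, mirroring the two hunks above.
resolution, quality, style = "1024x1792", "hd", "natural"
advanced_arg = f"{resolution}-{quality}-{style}"       # packed by ImageGen_Wrap

parts = advanced_arg.lower().split('-')                # unpacked by 图片生成_DALLE3
parsed_resolution, parsed_quality, parsed_style = parts[0], 'standard', 'vivid'
for part in parts[1:]:
    if part in ['hd', 'standard']:
        parsed_quality = part
    elif part in ['vivid', 'natural']:
        parsed_style = part
print(parsed_resolution, parsed_quality, parsed_style)  # 1024x1792 hd natural
```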
365  crazy_functions/Internet_GPT.py  Normal file
@@ -0,0 +1,365 @@
import requests
import random
import time
import re
import json
from bs4 import BeautifulSoup
from functools import lru_cache
from itertools import zip_longest
from check_proxy import check_proxy
from toolbox import CatchException, update_ui, get_conf, update_ui_latest_msg
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, input_clipping
from request_llms.bridge_all import model_info
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.prompts.internet import SearchOptimizerPrompt, SearchAcademicOptimizerPrompt


def search_optimizer(
    query,
    proxies,
    history,
    llm_kwargs,
    optimizer=1,
    categories="general",
    searxng_url=None,
    engines=None,
):
    # ------------- < Step 1: attempt to optimize the search query > -------------
    # * Enhanced optimization folds the chat history into the query rewrite
    if optimizer == 2:
        his = " "
        if len(history) == 0:
            pass
        else:
            for i, h in enumerate(history):
                if i % 2 == 0:
                    his += f"Q: {h}\n"
                else:
                    his += f"A: {h}\n"
        if categories == "general":
            sys_prompt = SearchOptimizerPrompt.format(query=query, history=his, num=4)
        elif categories == "science":
            sys_prompt = SearchAcademicOptimizerPrompt.format(query=query, history=his, num=4)
    else:
        his = " "
        if categories == "general":
            sys_prompt = SearchOptimizerPrompt.format(query=query, history=his, num=3)
        elif categories == "science":
            sys_prompt = SearchAcademicOptimizerPrompt.format(query=query, history=his, num=3)

    mutable = ["", time.time(), ""]
    llm_kwargs["temperature"] = 0.8
    try:
        query_json = predict_no_ui_long_connection(
            inputs=query,
            llm_kwargs=llm_kwargs,
            history=[],
            sys_prompt=sys_prompt,
            observe_window=mutable,
        )
    except Exception:
        query_json = "null"
    # * Try to decode the optimized search queries
    query_json = re.sub(r"```json|```", "", query_json)
    try:
        queries = json.loads(query_json)
    except Exception:
        # * If decoding fails, lower the temperature and retry once
        try:
            llm_kwargs["temperature"] = 0.4
            query_json = predict_no_ui_long_connection(
                inputs=query,
                llm_kwargs=llm_kwargs,
                history=[],
                sys_prompt=sys_prompt,
                observe_window=mutable,
            )
            query_json = re.sub(r"```json|```", "", query_json)
            queries = json.loads(query_json)
        except Exception:
            # * If it fails again, fall back to the original question
            queries = [query]
    links = []
    success = 0
    exceptions = ""
    for q in queries:
        try:
            link = searxng_request(q, proxies, categories, searxng_url, engines=engines)
            if len(link) > 0:
                links.append(link[:-5])
                success += 1
        except Exception as e:
            exceptions = e
    if success == 0:
        raise ValueError(f"在线搜索失败!\n{exceptions}")
    # * Clean the results: interleave each group's first, then second result, and drop duplicate links
    seen_links = set()
    result = []
    for group in zip_longest(*links, fillvalue=None):
        for item in group:
            if item is not None:
                link = item["link"]
                if link not in seen_links:
                    seen_links.add(link)
                    result.append(item)
    return result


@lru_cache
def get_auth_ip():
    ip = check_proxy(None, return_ip=True)
    if ip is None:
        return '114.114.114.' + str(random.randint(1, 10))
    return ip


def searxng_request(query, proxies, categories='general', searxng_url=None, engines=None):
    if searxng_url is None:
        urls = get_conf("SEARXNG_URLS")
        url = random.choice(urls)
    else:
        url = searxng_url

    if engines == "Mixed":
        engines = None

    if categories == 'general':
        params = {
            'q': query,         # the search query
            'format': 'json',   # output format: JSON
            'language': 'zh',   # search language
            'engines': engines,
        }
    elif categories == 'science':
        params = {
            'q': query,         # the search query
            'format': 'json',   # output format: JSON
            'language': 'zh',   # search language
            'categories': 'science'
        }
    else:
        raise ValueError('不支持的检索类型')

    headers = {
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
        'X-Forwarded-For': get_auth_ip(),
        'X-Real-IP': get_auth_ip()
    }
    results = []
    response = requests.post(url, params=params, headers=headers, proxies=proxies, timeout=30)
    if response.status_code == 200:
        json_result = response.json()
        for result in json_result['results']:
            item = {
                "title": result.get("title", ""),
                "source": result.get("engines", "unknown"),
                "content": result.get("content", ""),
                "link": result["url"],
            }
            results.append(item)
        return results
    else:
        if response.status_code == 429:
            raise ValueError("Searxng(在线搜索服务)当前使用人数太多,请稍后。")
        else:
            raise ValueError("在线搜索失败,状态码: " + str(response.status_code) + '\t' + response.content.decode('utf-8'))


def scrape_text(url, proxies) -> str:
    """Scrape text from a webpage

    Args:
        url (str): The URL to scrape text from

    Returns:
        str: The scraped text
    """
    from loguru import logger
    headers = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36',
        'Content-Type': 'text/plain',
    }

    # Try Jina first for text extraction
    if get_conf("JINA_API_KEY"):
        try: return jina_scrape_text(url)
        except: logger.debug("Jina API 请求失败,回到旧方法")

    try:
        response = requests.get(url, headers=headers, proxies=proxies, timeout=8)
        if response.encoding == "ISO-8859-1": response.encoding = response.apparent_encoding
    except:
        return "无法连接到该网页"
    soup = BeautifulSoup(response.text, "html.parser")
    for script in soup(["script", "style"]):
        script.extract()
    text = soup.get_text()
    lines = (line.strip() for line in text.splitlines())
    chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
    text = "\n".join(chunk for chunk in chunks if chunk)
    return text


def jina_scrape_text(url) -> str:
    headers = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36',
        'Content-Type': 'text/plain',
        "X-Retain-Images": "none",
        "Authorization": f'Bearer {get_conf("JINA_API_KEY")}'
    }
    response = requests.get("https://r.jina.ai/" + url, headers=headers, proxies=None, timeout=8)
    if response.status_code != 200:
        raise ValueError("Jina API 请求失败,开始尝试旧方法!" + response.text)
    if response.encoding == "ISO-8859-1": response.encoding = response.apparent_encoding
    result = response.text
    result = result.replace("\\[", "[").replace("\\]", "]").replace("\\(", "(").replace("\\)", ")")
    return result


def internet_search_with_analysis_prompt(prompt, analysis_prompt, llm_kwargs, chatbot):
    from toolbox import get_conf
    proxies = get_conf('proxies')
    categories = 'general'
    searxng_url = None  # use the default searxng_url
    engines = None      # use the default search engines
    yield from update_ui_latest_msg(lastmsg=f"检索中: {prompt} ...", chatbot=chatbot, history=[], delay=1)
    urls = searxng_request(prompt, proxies, categories, searxng_url, engines=engines)
    yield from update_ui_latest_msg(lastmsg=f"依次访问搜索到的网站 ...", chatbot=chatbot, history=[], delay=1)
    if len(urls) == 0:
        return None
    max_search_result = 5  # maximum number of pages whose content is incorporated
    history = []
    for index, url in enumerate(urls[:max_search_result]):
        yield from update_ui_latest_msg(lastmsg=f"依次访问搜索到的网站: {url['link']} ...", chatbot=chatbot, history=[], delay=1)
        res = scrape_text(url['link'], proxies)
        prefix = f"第{index}份搜索结果 [源自{url['source'][0]}搜索] ({url['title'][:25]}):"
        history.extend([prefix, res])
    i_say = f"从以上搜索结果中抽取信息,然后回答问题:{prompt} {analysis_prompt}"
    i_say, history = input_clipping(  # clip the input, starting from the longest entries, to avoid exceeding the token limit
        inputs=i_say,
        history=history,
        max_token_limit=8192
    )
    gpt_say = predict_no_ui_long_connection(
        inputs=i_say,
        llm_kwargs=llm_kwargs,
        history=history,
        sys_prompt="请从搜索结果中抽取信息,对最相关的两个搜索结果进行总结,然后回答问题。",
        console_silence=False,
    )
    return gpt_say


@CatchException
def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    optimizer_history = history[:-8]
    history = []  # clear the history to avoid input overflow
    chatbot.append((f"请结合互联网信息回答以下问题:{txt}", "检索中..."))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI

    # ------------- < Step 1: query the search engine > -------------
    from toolbox import get_conf
    proxies = get_conf('proxies')
    categories = plugin_kwargs.get('categories', 'general')
    searxng_url = plugin_kwargs.get('searxng_url', None)
    engines = plugin_kwargs.get('engine', None)
    optimizer = plugin_kwargs.get('optimizer', "关闭")
    if optimizer == "关闭":
        urls = searxng_request(txt, proxies, categories, searxng_url, engines=engines)
    else:
        urls = search_optimizer(txt, proxies, optimizer_history, llm_kwargs, optimizer, categories, searxng_url, engines)
    history = []
    if len(urls) == 0:
        chatbot.append((f"结论:{txt}", "[Local Message] 受到限制,无法从searxng获取信息!请尝试更换搜索引擎。"))
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
        return

    # ------------- < Step 2: visit the pages in turn > -------------
    from concurrent.futures import ThreadPoolExecutor
    from textwrap import dedent
    max_search_result = 5  # maximum number of pages whose content is incorporated
    if optimizer == "开启(增强)":
        max_search_result = 8
    template = dedent("""
        <details>
        <summary>{TITLE}</summary>
        <div class="search_result">{URL}</div>
        <div class="search_result">{CONTENT}</div>
        </details>
        """)

    buffer = ""

    # create a thread pool
    with ThreadPoolExecutor(max_workers=5) as executor:
        # submit tasks to the pool
        futures = []
        for index, url in enumerate(urls[:max_search_result]):
            future = executor.submit(scrape_text, url['link'], proxies)
            futures.append((index, future, url))

        # handle the completed tasks
        for index, future, url in futures:
            # start
            prefix = f"正在加载 第{index+1}份搜索结果 [源自{url['source'][0]}搜索] ({url['title'][:25]}):"
            string_structure = template.format(TITLE=prefix, URL=url['link'], CONTENT="正在加载,请稍后 ......")
            yield from update_ui_latest_msg(lastmsg=(buffer + string_structure), chatbot=chatbot, history=history, delay=0.1)  # refresh the UI

            # fetch the result
            res = future.result()

            # display the result
            prefix = f"第{index+1}份搜索结果 [源自{url['source'][0]}搜索] ({url['title'][:25]}):"
            string_structure = template.format(TITLE=prefix, URL=url['link'], CONTENT=res[:1000] + "......")
            buffer += string_structure

            # update the history
            history.extend([prefix, res])
            yield from update_ui_latest_msg(lastmsg=buffer, chatbot=chatbot, history=history, delay=0.1)  # refresh the UI

    # ------------- < Step 3: let the LLM synthesize > -------------
    if optimizer != "开启(增强)":
        i_say = f"从以上搜索结果中抽取信息,然后回答问题:{txt}"
        i_say, history = input_clipping(  # clip the input, starting from the longest entries, to avoid exceeding the token limit
            inputs=i_say,
            history=history,
            max_token_limit=min(model_info[llm_kwargs['llm_model']]['max_token'] * 3 // 4, 8192)
        )
        gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
            inputs=i_say, inputs_show_user=i_say,
            llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
            sys_prompt="请从给定的若干条搜索结果中抽取信息,对最相关的两个搜索结果进行总结,然后回答问题。"
        )
        chatbot[-1] = (i_say, gpt_say)
        history.append(i_say); history.append(gpt_say)
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI

    # * Alternatively, use the search optimizer, which ensures follow-up Q&A can read a useful history
    else:
        i_say = f"从以上搜索结果中抽取与问题:{txt} 相关的信息:"
        i_say, history = input_clipping(  # clip the input, starting from the longest entries, to avoid exceeding the token limit
            inputs=i_say,
            history=history,
            max_token_limit=min(model_info[llm_kwargs['llm_model']]['max_token'] * 3 // 4, 8192)
        )
        gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
            inputs=i_say, inputs_show_user=i_say,
            llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
            sys_prompt="请从给定的若干条搜索结果中抽取信息,对最相关的三个搜索结果进行总结"
        )
        chatbot[-1] = (i_say, gpt_say)
        history = []
        history.append(i_say); history.append(gpt_say)
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI

        # ------------- < Step 4: answer the question from the synthesis > -------------
        i_say = f"请根据以上搜索结果回答问题:{txt}"
        gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
            inputs=i_say, inputs_show_user=i_say,
            llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
            sys_prompt="请根据给定的若干条搜索结果回答问题"
        )
        chatbot[-1] = (i_say, gpt_say)
        history.append(i_say); history.append(gpt_say)
        yield from update_ui(chatbot=chatbot, history=history)
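
At its core, `searxng_request` above is a single POST against a SearXNG instance asking for JSON output. A stripped-down sketch (the host below is a placeholder for one of the configured SEARXNG_URLS; the spoofed IP headers and proxy handling are omitted):

```python
# Sketch of the SearXNG query issued by searxng_request above.
# https://searxng.example.org is a placeholder, not a configured endpoint.
import requests

response = requests.post(
    "https://searxng.example.org/search",
    params={'q': "retrieval augmented generation",
            'format': 'json', 'language': 'zh'},
    headers={'User-Agent': 'Mozilla/5.0'},
    timeout=30,
)
for result in response.json()['results'][:3]:
    print(result['title'], '->', result['url'])
```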
49  crazy_functions/Internet_GPT_Wrap.py  Normal file
@@ -0,0 +1,49 @@
import random
from toolbox import get_conf
from crazy_functions.Internet_GPT import 连接网络回答问题
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty


class NetworkGPT_Wrap(GptAcademicPluginTemplate):
    def __init__(self):
        """
        Note that `execute` runs in a different thread, so be very careful when defining and using class variables!
        """
        pass

    def define_arg_selection_menu(self):
        """
        Define the plugin's secondary option menu.

        The first argument, `main_input`: its `type` declares a text box; `title` is shown above the box, `description` inside it, and `default_value` is the default;
        the second argument, `advanced_arg`, follows the same text-box convention;
        the third argument, `allow_cache`: its `type` declares a dropdown; `title` + `description` are shown above it, `options` are its choices, and `default_value` is the preselected one.
        """
        urls = get_conf("SEARXNG_URLS")
        url = random.choice(urls)

        gui_definition = {
            "main_input":
                ArgProperty(title="输入问题", description="待通过互联网检索的问题,会自动读取输入框内容", default_value="", type="string").model_dump_json(),  # main input, synced automatically from the input box
            "categories":
                ArgProperty(title="搜索分类", options=["网页", "学术论文"], default_value="网页", description="无", type="dropdown").model_dump_json(),
            "engine":
                ArgProperty(title="选择搜索引擎", options=["Mixed", "bing", "google", "duckduckgo"], default_value="google", description="无", type="dropdown").model_dump_json(),
            "optimizer":
                ArgProperty(title="搜索优化", options=["关闭", "开启", "开启(增强)"], default_value="关闭", description="是否使用搜索增强。注意这可能会消耗较多token", type="dropdown").model_dump_json(),
            "searxng_url":
                ArgProperty(title="Searxng服务地址", description="输入Searxng的地址", default_value=url, type="string").model_dump_json(),
        }
        return gui_definition

    def execute(txt, llm_kwargs, plugin_kwargs: dict, chatbot, history, system_prompt, user_request):
        """
        Run the plugin
        """
        if plugin_kwargs.get("categories", None) == "网页": plugin_kwargs["categories"] = "general"
        elif plugin_kwargs.get("categories", None) == "学术论文": plugin_kwargs["categories"] = "science"
        else: plugin_kwargs["categories"] = "general"
        yield from 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
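
Both wrapper classes in this comparison build their menus the same way: each entry is an `ArgProperty` serialized with `model_dump_json()`, either a text box or a dropdown. A sketch of the two shapes, using the same constructor arguments as the hunks above:

```python
# Sketch: the two ArgProperty shapes used by the wrapper menus above.
from crazy_functions.plugin_template.plugin_class_template import ArgProperty

textbox = ArgProperty(title="输入问题", description="待检索的问题",
                      default_value="", type="string").model_dump_json()
dropdown = ArgProperty(title="搜索分类", options=["网页", "学术论文"],
                       default_value="网页", description="无",
                       type="dropdown").model_dump_json()
print(textbox)
print(dropdown)
```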
595  crazy_functions/Latex_Function.py  Normal file
@@ -0,0 +1,595 @@
from toolbox import update_ui, trimmed_format_exc, get_conf, get_log_folder, promote_file_to_downloadzone, check_repeat_upload, map_file_to_sha256
|
||||
from toolbox import CatchException, report_exception, update_ui_latest_msg, zip_result, gen_time_str
|
||||
from functools import partial
|
||||
from loguru import logger
|
||||
|
||||
import glob, os, requests, time, json, tarfile, threading
|
||||
|
||||
pj = os.path.join
|
||||
ARXIV_CACHE_DIR = get_conf("ARXIV_CACHE_DIR")
|
||||
|
||||
|
||||
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 工具函数 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
|
||||
# 专业词汇声明 = 'If the term "agent" is used in this section, it should be translated to "智能体". '
|
||||
def switch_prompt(pfg, mode, more_requirement):
|
||||
"""
|
||||
Generate prompts and system prompts based on the mode for proofreading or translating.
|
||||
Args:
|
||||
- pfg: Proofreader or Translator instance.
|
||||
- mode: A string specifying the mode, either 'proofread' or 'translate_zh'.
|
||||
|
||||
Returns:
|
||||
- inputs_array: A list of strings containing prompts for users to respond to.
|
||||
- sys_prompt_array: A list of strings containing prompts for system prompts.
|
||||
"""
|
||||
n_split = len(pfg.sp_file_contents)
|
||||
if mode == 'proofread_en':
|
||||
inputs_array = [r"Below is a section from an academic paper, proofread this section." +
|
||||
r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " + more_requirement +
|
||||
r"Answer me only with the revised text:" +
|
||||
f"\n\n{frag}" for frag in pfg.sp_file_contents]
|
||||
sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)]
|
||||
elif mode == 'translate_zh':
|
||||
inputs_array = [
|
||||
r"Below is a section from an English academic paper, translate it into Chinese. " + more_requirement +
|
||||
r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " +
|
||||
r"Answer me only with the translated text:" +
|
||||
f"\n\n{frag}" for frag in pfg.sp_file_contents]
|
||||
sys_prompt_array = ["You are a professional translator." for _ in range(n_split)]
|
||||
else:
|
||||
assert False, "未知指令"
|
||||
return inputs_array, sys_prompt_array
|
||||
|
||||
|
||||
def descend_to_extracted_folder_if_exist(project_folder):
|
||||
"""
|
||||
Descend into the extracted folder if it exists, otherwise return the original folder.
|
||||
|
||||
Args:
|
||||
- project_folder: A string specifying the folder path.
|
||||
|
||||
Returns:
|
||||
- A string specifying the path to the extracted folder, or the original folder if there is no extracted folder.
|
||||
"""
|
||||
maybe_dir = [f for f in glob.glob(f'{project_folder}/*') if os.path.isdir(f)]
|
||||
if len(maybe_dir) == 0: return project_folder
|
||||
if maybe_dir[0].endswith('.extract'): return maybe_dir[0]
|
||||
return project_folder
|
||||
|
||||
|
||||
def move_project(project_folder, arxiv_id=None):
|
||||
"""
|
||||
Create a new work folder and copy the project folder to it.
|
||||
|
||||
Args:
|
||||
- project_folder: A string specifying the folder path of the project.
|
||||
|
||||
Returns:
|
||||
- A string specifying the path to the new work folder.
|
||||
"""
|
||||
import shutil, time
|
||||
time.sleep(2) # avoid time string conflict
|
||||
if arxiv_id is not None:
|
||||
new_workfolder = pj(ARXIV_CACHE_DIR, arxiv_id, 'workfolder')
|
||||
else:
|
||||
new_workfolder = f'{get_log_folder()}/{gen_time_str()}'
|
||||
try:
|
||||
shutil.rmtree(new_workfolder)
|
||||
except:
|
||||
pass
|
||||
|
||||
# align subfolder if there is a folder wrapper
|
||||
items = glob.glob(pj(project_folder, '*'))
|
||||
items = [item for item in items if os.path.basename(item) != '__MACOSX']
|
||||
if len(glob.glob(pj(project_folder, '*.tex'))) == 0 and len(items) == 1:
|
||||
if os.path.isdir(items[0]): project_folder = items[0]
|
||||
|
||||
shutil.copytree(src=project_folder, dst=new_workfolder)
|
||||
return new_workfolder
|
||||
|
||||
|
||||
def arxiv_download(chatbot, history, txt, allow_cache=True):
|
||||
def check_cached_translation_pdf(arxiv_id):
|
||||
translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'translation')
|
||||
if not os.path.exists(translation_dir):
|
||||
os.makedirs(translation_dir)
|
||||
target_file = pj(translation_dir, 'translate_zh.pdf')
|
||||
if os.path.exists(target_file):
|
||||
promote_file_to_downloadzone(target_file, rename_file=None, chatbot=chatbot)
|
||||
target_file_compare = pj(translation_dir, 'comparison.pdf')
|
||||
if os.path.exists(target_file_compare):
|
||||
promote_file_to_downloadzone(target_file_compare, rename_file=None, chatbot=chatbot)
|
||||
return target_file
|
||||
return False
|
||||
|
||||
def is_float(s):
|
||||
try:
|
||||
float(s)
|
||||
return True
|
||||
except ValueError:
|
||||
return False
|
||||
|
||||
if txt.startswith('https://arxiv.org/pdf/'):
|
||||
arxiv_id = txt.split('/')[-1] # 2402.14207v2.pdf
|
||||
txt = arxiv_id.split('v')[0] # 2402.14207
|
||||
|
||||
if ('.' in txt) and ('/' not in txt) and is_float(txt): # is arxiv ID
|
||||
txt = 'https://arxiv.org/abs/' + txt.strip()
|
||||
if ('.' in txt) and ('/' not in txt) and is_float(txt[:10]): # is arxiv ID
|
||||
txt = 'https://arxiv.org/abs/' + txt[:10]
|
||||
|
||||
if not txt.startswith('https://arxiv.org'):
|
||||
return txt, None # 是本地文件,跳过下载
|
||||
|
||||
# <-------------- inspect format ------------->
|
||||
chatbot.append([f"检测到arxiv文档连接", '尝试下载 ...'])
|
||||
yield from update_ui(chatbot=chatbot, history=history)
|
||||
time.sleep(1) # 刷新界面
|
||||
|
||||
url_ = txt # https://arxiv.org/abs/1707.06690
|
||||
|
||||
if not txt.startswith('https://arxiv.org/abs/'):
|
||||
msg = f"解析arxiv网址失败, 期望格式例如: https://arxiv.org/abs/1707.06690。实际得到格式: {url_}。"
|
||||
yield from update_ui_latest_msg(msg, chatbot=chatbot, history=history) # 刷新界面
|
||||
return msg, None
|
||||
# <-------------- set format ------------->
|
||||
arxiv_id = url_.split('/abs/')[-1]
|
||||
if 'v' in arxiv_id: arxiv_id = arxiv_id[:10]
|
||||
cached_translation_pdf = check_cached_translation_pdf(arxiv_id)
|
||||
if cached_translation_pdf and allow_cache: return cached_translation_pdf, arxiv_id
|
||||
|
||||
extract_dst = pj(ARXIV_CACHE_DIR, arxiv_id, 'extract')
|
||||
translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'e-print')
|
||||
dst = pj(translation_dir, arxiv_id + '.tar')
|
||||
os.makedirs(translation_dir, exist_ok=True)
|
||||
# <-------------- download arxiv source file ------------->
|
||||
|
||||
def fix_url_and_download():
|
||||
# for url_tar in [url_.replace('/abs/', '/e-print/'), url_.replace('/abs/', '/src/')]:
|
||||
for url_tar in [url_.replace('/abs/', '/src/'), url_.replace('/abs/', '/e-print/')]:
|
||||
proxies = get_conf('proxies')
|
||||
r = requests.get(url_tar, proxies=proxies)
|
||||
if r.status_code == 200:
|
||||
with open(dst, 'wb+') as f:
|
||||
f.write(r.content)
|
||||
return True
|
||||
return False
|
||||
|
||||
if os.path.exists(dst) and allow_cache:
|
||||
yield from update_ui_latest_msg(f"调用缓存 {arxiv_id}", chatbot=chatbot, history=history) # 刷新界面
|
||||
success = True
|
||||
else:
|
||||
yield from update_ui_latest_msg(f"开始下载 {arxiv_id}", chatbot=chatbot, history=history) # 刷新界面
|
||||
success = fix_url_and_download()
|
||||
yield from update_ui_latest_msg(f"下载完成 {arxiv_id}", chatbot=chatbot, history=history) # 刷新界面
|
||||
|
||||
|
||||
if not success:
|
||||
yield from update_ui_latest_msg(f"下载失败 {arxiv_id}", chatbot=chatbot, history=history)
|
||||
raise tarfile.ReadError(f"论文下载失败 {arxiv_id}")
|
||||
|
||||
# <-------------- extract file ------------->
|
||||
from toolbox import extract_archive
|
||||
try:
|
||||
extract_archive(file_path=dst, dest_dir=extract_dst)
|
||||
except tarfile.ReadError:
|
||||
os.remove(dst)
|
||||
raise tarfile.ReadError(f"论文下载失败")
|
||||
return extract_dst, arxiv_id
|
||||
|
||||
|
||||
def pdf2tex_project(pdf_file_path, plugin_kwargs):
|
||||
if plugin_kwargs["method"] == "MATHPIX":
|
||||
# Mathpix API credentials
|
||||
app_id, app_key = get_conf('MATHPIX_APPID', 'MATHPIX_APPKEY')
|
||||
headers = {"app_id": app_id, "app_key": app_key}
|
||||
|
||||
# Step 1: Send PDF file for processing
|
||||
options = {
|
||||
"conversion_formats": {"tex.zip": True},
|
||||
"math_inline_delimiters": ["$", "$"],
|
||||
"rm_spaces": True
|
||||
}
|
||||
|
||||
response = requests.post(url="https://api.mathpix.com/v3/pdf",
|
||||
headers=headers,
|
||||
data={"options_json": json.dumps(options)},
|
||||
files={"file": open(pdf_file_path, "rb")})
|
||||
|
||||
if response.ok:
|
||||
pdf_id = response.json()["pdf_id"]
|
||||
logger.info(f"PDF processing initiated. PDF ID: {pdf_id}")
|
||||
|
||||
# Step 2: Check processing status
|
||||
while True:
|
||||
conversion_response = requests.get(f"https://api.mathpix.com/v3/pdf/{pdf_id}", headers=headers)
|
||||
conversion_data = conversion_response.json()
|
||||
|
||||
if conversion_data["status"] == "completed":
|
||||
logger.info("PDF processing completed.")
|
||||
break
|
||||
elif conversion_data["status"] == "error":
|
||||
logger.info("Error occurred during processing.")
|
||||
else:
|
||||
logger.info(f"Processing status: {conversion_data['status']}")
|
||||
time.sleep(5) # wait for a few seconds before checking again

            # Step 3: Save results to local files
            output_dir = os.path.join(os.path.dirname(pdf_file_path), 'mathpix_output')
            if not os.path.exists(output_dir):
                os.makedirs(output_dir)

            url = f"https://api.mathpix.com/v3/pdf/{pdf_id}.tex"
            response = requests.get(url, headers=headers)
            file_name_wo_dot = '_'.join(os.path.basename(pdf_file_path).split('.')[:-1])
            output_name = f"{file_name_wo_dot}.tex.zip"
            output_path = os.path.join(output_dir, output_name)
            with open(output_path, "wb") as output_file:
                output_file.write(response.content)
            logger.info(f"tex.zip file saved at: {output_path}")

            import zipfile
            unzip_dir = os.path.join(output_dir, file_name_wo_dot)
            with zipfile.ZipFile(output_path, 'r') as zip_ref:
                zip_ref.extractall(unzip_dir)

            return unzip_dir

        else:
            logger.error(f"Error sending PDF for processing. Status code: {response.status_code}")
            return None
    else:
        from crazy_functions.pdf_fns.parse_pdf_via_doc2x import 解析PDF_DOC2X_转Latex
        unzip_dir = 解析PDF_DOC2X_转Latex(pdf_file_path)
        return unzip_dir
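
# A minimal usage sketch (the path is hypothetical; the real caller below builds
# plugin_kwargs from the plugin's dropdown menu):
#   unzip_dir = pdf2tex_project('/path/to/paper.pdf', {"method": "DOC2X"})
#   if unzip_dir is None: ...  # conversion failed, nothing to compile
#   tex_files = glob.glob(f'{unzip_dir}/**/*.tex', recursive=True)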


# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序1 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=


@CatchException
def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # <-------------- information about this plugin ------------->
    chatbot.append(["函数插件功能?",
                    "对整个Latex项目进行纠错, 用latex编译为PDF对修正处做高亮。函数插件贡献者: Binary-Husky。注意事项: 目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。仅在Windows系统进行了测试,其他操作系统表现未知。"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

    # <-------------- more requirements ------------->
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    more_req = plugin_kwargs.get("advanced_arg", "")
    _switch_prompt_ = partial(switch_prompt, more_requirement=more_req)
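    # partial() bakes the user's extra requirement into switch_prompt, so every text
    # fragment later sent to the LLM carries the same custom instruction.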

    # <-------------- check deps ------------->
    try:
        import glob, os, time, subprocess
        subprocess.Popen(['pdflatex', '-version'])
        from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
    except Exception as e:
        chatbot.append([f"解析项目: {txt}",
                        f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
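    # the Popen probe above only verifies pdflatex can be launched from PATH; its
    # output is discarded and the process is not awaited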

    # <-------------- clear history and read input ------------->
    history = []
    if os.path.exists(txt):
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
    if len(file_manifest) == 0:
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    # <-------------- if is a zip/tar file ------------->
    project_folder = descend_to_extracted_folder_if_exist(project_folder)

    # <-------------- move latex project away from temp folder ------------->
    from shared_utils.fastapi_server import validate_path_safety
    validate_path_safety(project_folder, chatbot.get_user())
    project_folder = move_project(project_folder, arxiv_id=None)

    # <-------------- if merge_proofread_en is already generated, skip gpt req ------------->
    if not os.path.exists(project_folder + '/merge_proofread_en.tex'):
        yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
                                      chatbot, history, system_prompt, mode='proofread_en',
                                      switch_prompt=_switch_prompt_)

    # <-------------- compile PDF ------------->
    success = yield from 编译Latex(chatbot, history, main_file_original='merge',
                                   main_file_modified='merge_proofread_en',
                                   work_folder_original=project_folder, work_folder_modified=project_folder,
                                   work_folder=project_folder)

    # <-------------- zip PDF ------------->
    zip_res = zip_result(project_folder)
    if success:
        chatbot.append(("成功啦", '请查收结果(压缩包)...'))
        yield from update_ui(chatbot=chatbot, history=history)
        time.sleep(1) # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
    else:
        chatbot.append(("失败了",
                        '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 也是可读的, 您可以到Github Issue区, 用该压缩包+Conversation_To_File进行反馈 ...'))
        yield from update_ui(chatbot=chatbot, history=history)
        time.sleep(1) # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)

    # <-------------- we are done ------------->
    return success


# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序2 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=


@CatchException
def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # <-------------- information about this plugin ------------->
    chatbot.append([
        "函数插件功能?",
        "对整个Latex项目进行翻译, 生成中文PDF。函数插件贡献者: Binary-Husky。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

    # <-------------- more requirements ------------->
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    more_req = plugin_kwargs.get("advanced_arg", "")

    no_cache = ("--no-cache" in more_req)
    if no_cache: more_req = more_req.replace("--no-cache", "").strip()

    allow_gptac_cloud_io = ("--allow-cloudio" in more_req) # 从云端下载翻译结果,以及上传翻译结果到云端
    if allow_gptac_cloud_io: more_req = more_req.replace("--allow-cloudio", "").strip()

    allow_cache = not no_cache
    _switch_prompt_ = partial(switch_prompt, more_requirement=more_req)
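    # the advanced_arg textbox doubles as a tiny CLI: --no-cache and --allow-cloudio are
    # consumed and stripped here, so only genuine translation hints remain in more_req
    # and reach the prompts via switch_prompt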

    # <-------------- check deps ------------->
    try:
        import glob, os, time, subprocess
        subprocess.Popen(['pdflatex', '-version'])
        from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
    except Exception as e:
        chatbot.append([f"解析项目: {txt}",
                        f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    # <-------------- clear history and read input ------------->
    history = []
    try:
        txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)
    except tarfile.ReadError as e:
        yield from update_ui_latest_msg(
            "无法自动下载该论文的Latex源码,请前往arxiv打开此论文下载页面,点other Formats,然后download source手动下载latex源码包。接下来调用本地Latex翻译插件即可。",
            chatbot=chatbot, history=history)
        return

    if txt.endswith('.pdf'):
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"发现已经存在翻译好的PDF文档")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    # #################################################################
    if allow_gptac_cloud_io and arxiv_id:
        # 访问 GPTAC学术云,查询云端是否存在该论文的翻译版本
        from crazy_functions.latex_fns.latex_actions import check_gptac_cloud
        success, downloaded = check_gptac_cloud(arxiv_id, chatbot)
        if success:
            chatbot.append([
                f"检测到GPTAC云端存在翻译版本, 如果不满意翻译结果, 请禁用云端分享, 然后重新执行。",
                None
            ])
            yield from update_ui(chatbot=chatbot, history=history)
            return
    # #################################################################

    if os.path.exists(txt):
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无法处理: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
    if len(file_manifest) == 0:
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    # <-------------- if is a zip/tar file ------------->
    project_folder = descend_to_extracted_folder_if_exist(project_folder)

    # <-------------- move latex project away from temp folder ------------->
    from shared_utils.fastapi_server import validate_path_safety
    validate_path_safety(project_folder, chatbot.get_user())
    project_folder = move_project(project_folder, arxiv_id)

    # <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
    if not os.path.exists(project_folder + '/merge_translate_zh.tex'):
        yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
                                      chatbot, history, system_prompt, mode='translate_zh',
                                      switch_prompt=_switch_prompt_)

    # <-------------- compile PDF ------------->
    success = yield from 编译Latex(chatbot, history, main_file_original='merge',
                                   main_file_modified='merge_translate_zh', mode='translate_zh',
                                   work_folder_original=project_folder, work_folder_modified=project_folder,
                                   work_folder=project_folder)

    # <-------------- zip PDF ------------->
    zip_res = zip_result(project_folder)
    if success:
        if allow_gptac_cloud_io and arxiv_id:
            # 如果用户允许,我们将翻译好的arxiv论文PDF上传到GPTAC学术云
            from crazy_functions.latex_fns.latex_actions import upload_to_gptac_cloud_if_user_allow
            threading.Thread(target=upload_to_gptac_cloud_if_user_allow,
                             args=(chatbot, arxiv_id), daemon=True).start()
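            # the upload runs in a daemon thread so a slow or failed cloud upload can
            # never block delivering the zipped result to the user below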

        chatbot.append(("成功啦", '请查收结果(压缩包)...'))
        yield from update_ui(chatbot=chatbot, history=history)
        time.sleep(1) # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)

    else:
        chatbot.append(("失败了",
                        '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体(见Github wiki) ...'))
        yield from update_ui(chatbot=chatbot, history=history)
        time.sleep(1) # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)

    # <-------------- we are done ------------->
    return success


# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 插件主程序3 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=


@CatchException
def PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # <-------------- information about this plugin ------------->
    chatbot.append([
        "函数插件功能?",
        "将PDF转换为Latex项目,翻译为中文后重新编译为PDF。函数插件贡献者: Marroh。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

    # <-------------- more requirements ------------->
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    more_req = plugin_kwargs.get("advanced_arg", "")
    no_cache = more_req.startswith("--no-cache")
    if no_cache: more_req = more_req[len("--no-cache"):].strip()
    allow_cache = not no_cache
    _switch_prompt_ = partial(switch_prompt, more_requirement=more_req)

    # <-------------- check deps ------------->
    try:
        import glob, os, time, subprocess
        subprocess.Popen(['pdflatex', '-version'])
        from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
    except Exception as e:
        chatbot.append([f"解析项目: {txt}",
                        f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    # <-------------- clear history and read input ------------->
    if os.path.exists(txt):
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无法处理: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)]
    if len(file_manifest) == 0:
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.pdf文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
    if len(file_manifest) != 1:
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"不支持同时处理多个pdf文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    if plugin_kwargs.get("method", "") == 'MATHPIX':
        app_id, app_key = get_conf('MATHPIX_APPID', 'MATHPIX_APPKEY')
        if len(app_id) == 0 or len(app_key) == 0:
            report_exception(chatbot, history, a="缺失 MATHPIX_APPID 和 MATHPIX_APPKEY。", b="请配置 MATHPIX_APPID 和 MATHPIX_APPKEY")
            yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
            return
    if plugin_kwargs.get("method", "") == 'DOC2X':
        app_id, app_key = "", ""
        DOC2X_API_KEY = get_conf('DOC2X_API_KEY')
        if len(DOC2X_API_KEY) == 0:
            report_exception(chatbot, history, a="缺失 DOC2X_API_KEY。", b="请配置 DOC2X_API_KEY")
            yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
            return
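    # both branches above only verify that the needed API keys are configured, failing
    # fast before the expensive PDF-to-LaTeX conversion; the keys themselves are read
    # again where they are actually used (e.g. inside pdf2tex_project)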

    hash_tag = map_file_to_sha256(file_manifest[0])
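    # the PDF's sha256 digest identifies this upload; it is persisted as a .tag file
    # below so a future repeat-upload check can map an identical PDF to this folder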

    # # <-------------- check repeated pdf ------------->
    # chatbot.append([f"检查PDF是否被重复上传", "正在检查..."])
    # yield from update_ui(chatbot=chatbot, history=history)
    # repeat, project_folder = check_repeat_upload(file_manifest[0], hash_tag)

    # if repeat:
    #     yield from update_ui_latest_msg(f"发现重复上传,请查收结果(压缩包)...", chatbot=chatbot, history=history)
    #     try:
    #         translate_pdf = [f for f in glob.glob(f'{project_folder}/**/merge_translate_zh.pdf', recursive=True)][0]
    #         promote_file_to_downloadzone(translate_pdf, rename_file=None, chatbot=chatbot)
    #         comparison_pdf = [f for f in glob.glob(f'{project_folder}/**/comparison.pdf', recursive=True)][0]
    #         promote_file_to_downloadzone(comparison_pdf, rename_file=None, chatbot=chatbot)
    #         zip_res = zip_result(project_folder)
    #         promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
    #         return
    #     except:
    #         report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"发现重复上传,但是无法找到相关文件")
    #         yield from update_ui(chatbot=chatbot, history=history)
    # else:
    #     yield from update_ui_latest_msg(f"未发现重复上传", chatbot=chatbot, history=history)

    # <-------------- convert pdf into tex ------------->
    chatbot.append([f"解析项目: {txt}", "正在将PDF转换为tex项目,请耐心等待..."])
    yield from update_ui(chatbot=chatbot, history=history)
    project_folder = pdf2tex_project(file_manifest[0], plugin_kwargs)
    if project_folder is None:
        report_exception(chatbot, history, a=f"解析项目: {txt}", b="PDF转换为tex项目失败")
        yield from update_ui(chatbot=chatbot, history=history)
        return False

    # <-------------- translate latex file into Chinese ------------->
    yield from update_ui_latest_msg("正在将tex项目翻译为中文...", chatbot=chatbot, history=history)
    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
    if len(file_manifest) == 0:
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    # <-------------- if is a zip/tar file ------------->
    project_folder = descend_to_extracted_folder_if_exist(project_folder)

    # <-------------- move latex project away from temp folder ------------->
    from shared_utils.fastapi_server import validate_path_safety
    validate_path_safety(project_folder, chatbot.get_user())
    project_folder = move_project(project_folder)

    # <-------------- set a hash tag for repeat-checking ------------->
    with open(pj(project_folder, hash_tag + '.tag'), 'w', encoding='utf8') as f:
        f.write(hash_tag)
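    # the <hash>.tag marker file is what the (currently commented-out) repeat-upload
    # check above would look for to recognize a previously translated PDF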

    # <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
    if not os.path.exists(project_folder + '/merge_translate_zh.tex'):
        yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
                                      chatbot, history, system_prompt, mode='translate_zh',
                                      switch_prompt=_switch_prompt_)

    # <-------------- compile PDF ------------->
    yield from update_ui_latest_msg("正在将翻译好的tex项目编译为PDF...", chatbot=chatbot, history=history)
    success = yield from 编译Latex(chatbot, history, main_file_original='merge',
                                   main_file_modified='merge_translate_zh', mode='translate_zh',
                                   work_folder_original=project_folder, work_folder_modified=project_folder,
                                   work_folder=project_folder)

    # <-------------- zip PDF ------------->
    zip_res = zip_result(project_folder)
    if success:
        chatbot.append(("成功啦", '请查收结果(压缩包)...'))
        yield from update_ui(chatbot=chatbot, history=history)
        time.sleep(1) # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
    else:
        chatbot.append(("失败了",
                        '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体(见Github wiki) ...'))
        yield from update_ui(chatbot=chatbot, history=history)
        time.sleep(1) # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)

    # <-------------- we are done ------------->
    return success


crazy_functions/Latex_Function_Wrap.py (new file)
@@ -0,0 +1,85 @@

from crazy_functions.Latex_Function import Latex翻译中文并重新编译PDF, PDF翻译中文并重新编译PDF
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty


class Arxiv_Localize(GptAcademicPluginTemplate):
    def __init__(self):
        """
        请注意`execute`会执行在不同的线程中,因此您在定义和使用类变量时,应当慎之又慎!
        """
        pass

    def define_arg_selection_menu(self):
        """
        定义插件的二级选项菜单

        第一个参数,名称`main_input`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description`,`default_value`为默认值;
        第二个参数,名称`advanced_arg`,参数`type`声明这是一个文本框,文本框上方显示`title`,文本框内部显示`description`,`default_value`为默认值;
        第三个参数,名称`allow_cache`,参数`type`声明这是一个下拉菜单,下拉菜单上方显示`title`+`description`,下拉菜单的选项为`options`,`default_value`为下拉菜单默认值;
        """
        gui_definition = {
            "main_input":
                ArgProperty(title="ArxivID", description="输入Arxiv的ID或者网址", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
            "advanced_arg":
                ArgProperty(title="额外的翻译提示词",
                            description=r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
                                        r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
                                        r'If the term "agent" is used in this section, it should be translated to "智能体". ',
                            default_value="", type="string").model_dump_json(), # 高级参数输入区,自动同步
            "allow_cache":
                ArgProperty(title="是否允许从缓存中调取结果", options=["允许缓存", "从头执行"], default_value="允许缓存", description="无", type="dropdown").model_dump_json(),
            "allow_cloudio":
                ArgProperty(title="是否允许从GPTAC学术云下载(或者上传)翻译结果(仅针对Arxiv论文)", options=["允许", "禁止"], default_value="禁止", description="共享文献,互助互利", type="dropdown").model_dump_json(),
        }
        return gui_definition
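
    # Sketch of how the menu above surfaces at run time (keys mirror gui_definition;
    # the values are whatever the user picked, e.g.):
    #   plugin_kwargs = {"main_input": "2301.12345", "advanced_arg": "",
    #                    "allow_cache": "允许缓存", "allow_cloudio": "禁止"}
    # execute() below then folds the two dropdowns into --no-cache / --allow-cloudio flags.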

    def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        """
        执行插件
        """
        allow_cache = plugin_kwargs["allow_cache"]
        allow_cloudio = plugin_kwargs["allow_cloudio"]
        advanced_arg = plugin_kwargs["advanced_arg"]

        if allow_cache == "从头执行": plugin_kwargs["advanced_arg"] = "--no-cache " + plugin_kwargs["advanced_arg"]

        # 从云端下载翻译结果,以及上传翻译结果到云端;人人为我,我为人人。
        if allow_cloudio == "允许": plugin_kwargs["advanced_arg"] = "--allow-cloudio " + plugin_kwargs["advanced_arg"]

        yield from Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
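
    # Design note: rather than widening the underlying plugin's signature, the dropdown
    # choices are serialized into the advanced_arg string as flags, which
    # Latex翻译中文并重新编译PDF already knows how to parse and strip back out.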


class PDF_Localize(GptAcademicPluginTemplate):
    def __init__(self):
        """
        请注意`execute`会执行在不同的线程中,因此您在定义和使用类变量时,应当慎之又慎!
        """
        pass

    def define_arg_selection_menu(self):
        """
        定义插件的二级选项菜单
        """
        gui_definition = {
            "main_input":
                ArgProperty(title="PDF文件路径", description="未指定路径,请上传文件后,再点击该插件", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
            "advanced_arg":
                ArgProperty(title="额外的翻译提示词",
                            description=r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
                                        r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
                                        r'If the term "agent" is used in this section, it should be translated to "智能体". ',
                            default_value="", type="string").model_dump_json(), # 高级参数输入区,自动同步
            "method":
                ArgProperty(title="采用哪种方法执行转换", options=["MATHPIX", "DOC2X"], default_value="DOC2X", description="无", type="dropdown").model_dump_json(),
        }
        return gui_definition

    def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        """
        执行插件
        """
        yield from PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
@@ -1,6 +1,6 @@
from toolbox import update_ui, trimmed_format_exc
from toolbox import CatchException, report_execption, write_results_to_file, zip_folder

from toolbox import update_ui, trimmed_format_exc, promote_file_to_downloadzone, get_log_folder
from toolbox import CatchException, report_exception, write_history_to_file, zip_folder
from loguru import logger

class PaperFileGroup():
    def __init__(self):
@@ -11,7 +11,7 @@ class PaperFileGroup():
        self.sp_file_tag = []

        # count_token
        from request_llm.bridge_all import model_info
        from request_llms.bridge_all import model_info
        enc = model_info["gpt-3.5-turbo"]['tokenizer']
        def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
        self.get_token_num = get_token_num
@@ -26,14 +26,14 @@ class PaperFileGroup():
                self.sp_file_index.append(index)
                self.sp_file_tag.append(self.file_paths[index])
            else:
                from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
                segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
                from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
                segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
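                # the new helper appears to handle token counting internally, which is
                # why the get_token_num callback is no longer passed at this call site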
                for j, segment in enumerate(segments):
                    self.sp_file_contents.append(segment)
                    self.sp_file_index.append(index)
                    self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex")

        print('Segmentation: done')
        logger.info('Segmentation: done')
    def merge_result(self):
        self.file_result = ["" for _ in range(len(self.file_paths))]
        for r, k in zip(self.sp_file_result, self.sp_file_index):
@@ -46,20 +46,20 @@ class PaperFileGroup():
            manifest.append(path + '.polish.tex')
            f.write(res)
        return manifest

    def zip_result(self):
        import os, time
        folder = os.path.dirname(self.file_paths[0])
        t = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())
        zip_folder(folder, './gpt_log/', f'{t}-polished.zip')
        zip_folder(folder, get_log_folder(), f'{t}-polished.zip')


def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en', mode='polish'):
    import time, os, re
    from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
    from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency

    # <-------- 读取Latex文件,删除其中的所有注释 ---------->
    # <-------- 读取Latex文件,删除其中的所有注释 ---------->
    pfg = PaperFileGroup()

    for index, fp in enumerate(file_manifest):
@@ -73,31 +73,31 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
        pfg.file_paths.append(fp)
        pfg.file_contents.append(clean_tex_content)

    # <-------- 拆分过长的latex文件 ---------->
    # <-------- 拆分过长的latex文件 ---------->
    pfg.run_file_split(max_token_limit=1024)
    n_split = len(pfg.sp_file_contents)


    # <-------- 多线程润色开始 ---------->
    # <-------- 多线程润色开始 ---------->
    if language == 'en':
        if mode == 'polish':
            inputs_array = ["Below is a section from an academic paper, polish this section to meet the academic standard, " +
                            "improve the grammar, clarity and overall readability, do not modify any latex command such as \section, \cite and equations:" +
            inputs_array = [r"Below is a section from an academic paper, polish this section to meet the academic standard, " +
                            r"improve the grammar, clarity and overall readability, do not modify any latex command such as \section, \cite and equations:" +
                            f"\n\n{frag}" for frag in pfg.sp_file_contents]
        else:
            inputs_array = [r"Below is a section from an academic paper, proofread this section." +
                            r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " +
                            r"Answer me only with the revised text:" +
            inputs_array = [r"Below is a section from an academic paper, proofread this section." +
                            r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " +
                            r"Answer me only with the revised text:" +
                            f"\n\n{frag}" for frag in pfg.sp_file_contents]
        inputs_show_user_array = [f"Polish {f}" for f in pfg.sp_file_tag]
        sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)]
    elif language == 'zh':
        if mode == 'polish':
            inputs_array = [f"以下是一篇学术论文中的一段内容,请将此部分润色以满足学术标准,提高语法、清晰度和整体可读性,不要修改任何LaTeX命令,例如\section,\cite和方程式:" +
            inputs_array = [r"以下是一篇学术论文中的一段内容,请将此部分润色以满足学术标准,提高语法、清晰度和整体可读性,不要修改任何LaTeX命令,例如\section,\cite和方程式:" +
                            f"\n\n{frag}" for frag in pfg.sp_file_contents]
        else:
            inputs_array = [f"以下是一篇学术论文中的一段内容,请对这部分内容进行语法矫正。不要修改任何LaTeX命令,例如\section,\cite和方程式:" +
                            f"\n\n{frag}" for frag in pfg.sp_file_contents]
            inputs_array = [r"以下是一篇学术论文中的一段内容,请对这部分内容进行语法矫正。不要修改任何LaTeX命令,例如\section,\cite和方程式:" +
                            f"\n\n{frag}" for frag in pfg.sp_file_contents]
        inputs_show_user_array = [f"润色 {f}" for f in pfg.sp_file_tag]
        sys_prompt_array = ["你是一位专业的中文学术论文作家。" for _ in range(n_split)]

@@ -113,7 +113,7 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
        scroller_max_len = 80
    )

    # <-------- 文本碎片重组为完整的tex文件,整理结果为压缩包 ---------->
    # <-------- 文本碎片重组为完整的tex文件,整理结果为压缩包 ---------->
    try:
        pfg.sp_file_result = []
        for i_say, gpt_say in zip(gpt_response_collection[0::2], gpt_response_collection[1::2]):
@@ -122,29 +122,31 @@ def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
        pfg.write_result()
        pfg.zip_result()
    except:
        print(trimmed_format_exc())
        logger.error(trimmed_format_exc())

    # <-------- 整理结果,退出 ---------->
    # <-------- 整理结果,退出 ---------->
    create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md"
    res = write_results_to_file(gpt_response_collection, file_name=create_report_file_name)
    res = write_history_to_file(gpt_response_collection, file_basename=create_report_file_name)
    promote_file_to_downloadzone(res, chatbot=chatbot)

    history = gpt_response_collection
    chatbot.append((f"{fp}完成了吗?", res))
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面


@CatchException
def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
        "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky"])
        "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky。(注意,此插件不调用Latex,如果有Latex环境,请使用「Latex英文纠错+高亮修正位置(需Latex)插件」"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

    # 尝试导入依赖,如果缺少依赖,则给出安装建议
    try:
        import tiktoken
    except:
        report_execption(chatbot, history,
        report_exception(chatbot, history,
                         a=f"解析项目: {txt}",
                         b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@@ -155,12 +157,12 @@ def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
    if len(file_manifest) == 0:
        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
    yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en')
@@ -171,7 +173,7 @@ def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p


@CatchException
def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
@@ -182,7 +184,7 @@ def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
    try:
        import tiktoken
    except:
        report_execption(chatbot, history,
        report_exception(chatbot, history,
                         a=f"解析项目: {txt}",
                         b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@@ -193,12 +195,12 @@ def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
    if len(file_manifest) == 0:
        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
    yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='zh')
@@ -207,7 +209,7 @@ def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p


@CatchException
def Latex英文纠错(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
def Latex英文纠错(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
@@ -218,7 +220,7 @@ def Latex英文纠错(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
    try:
        import tiktoken
    except:
        report_execption(chatbot, history,
        report_exception(chatbot, history,
                         a=f"解析项目: {txt}",
                         b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@@ -229,12 +231,12 @@ def Latex英文纠错(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
    if len(file_manifest) == 0:
        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
    yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en', mode='proofread')
@@ -1,6 +1,6 @@
from toolbox import update_ui
from toolbox import CatchException, report_execption, write_results_to_file
fast_debug = False
from toolbox import update_ui, promote_file_to_downloadzone
from toolbox import CatchException, report_exception, write_history_to_file
from loguru import logger

class PaperFileGroup():
    def __init__(self):
@@ -11,7 +11,7 @@ class PaperFileGroup():
        self.sp_file_tag = []

        # count_token
        from request_llm.bridge_all import model_info
        from request_llms.bridge_all import model_info
        enc = model_info["gpt-3.5-turbo"]['tokenizer']
        def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
        self.get_token_num = get_token_num
@@ -26,20 +26,20 @@ class PaperFileGroup():
                self.sp_file_index.append(index)
                self.sp_file_tag.append(self.file_paths[index])
            else:
                from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
                segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
                from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
                segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
                for j, segment in enumerate(segments):
                    self.sp_file_contents.append(segment)
                    self.sp_file_index.append(index)
                    self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex")

        print('Segmentation: done')
        logger.info('Segmentation: done')

def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'):
    import time, os, re
    from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
    from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency

    # <-------- 读取Latex文件,删除其中的所有注释 ---------->
    # <-------- 读取Latex文件,删除其中的所有注释 ---------->
    pfg = PaperFileGroup()

    for index, fp in enumerate(file_manifest):
@@ -53,11 +53,11 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
        pfg.file_paths.append(fp)
        pfg.file_contents.append(clean_tex_content)

    # <-------- 拆分过长的latex文件 ---------->
    # <-------- 拆分过长的latex文件 ---------->
    pfg.run_file_split(max_token_limit=1024)
    n_split = len(pfg.sp_file_contents)

    # <-------- 抽取摘要 ---------->
    # <-------- 抽取摘要 ---------->
    # if language == 'en':
    #     abs_extract_inputs = f"Please write an abstract for this paper"

@@ -70,14 +70,14 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
    #                          sys_prompt="Your job is to collect information from materials。",
    #                          )

    # <-------- 多线程润色开始 ---------->
    # <-------- 多线程润色开始 ---------->
    if language == 'en->zh':
        inputs_array = ["Below is a section from an English academic paper, translate it into Chinese, do not modify any latex command such as \section, \cite and equations:" +
        inputs_array = ["Below is a section from an English academic paper, translate it into Chinese, do not modify any latex command such as \section, \cite and equations:" +
                        f"\n\n{frag}" for frag in pfg.sp_file_contents]
        inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
        sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)]
    elif language == 'zh->en':
        inputs_array = [f"Below is a section from a Chinese academic paper, translate it into English, do not modify any latex command such as \section, \cite and equations:" +
        inputs_array = [f"Below is a section from a Chinese academic paper, translate it into English, do not modify any latex command such as \section, \cite and equations:" +
                        f"\n\n{frag}" for frag in pfg.sp_file_contents]
        inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
        sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)]
@@ -93,9 +93,10 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
        scroller_max_len = 80
    )

    # <-------- 整理结果,退出 ---------->
    # <-------- 整理结果,退出 ---------->
    create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md"
    res = write_results_to_file(gpt_response_collection, file_name=create_report_file_name)
    res = write_history_to_file(gpt_response_collection, create_report_file_name)
    promote_file_to_downloadzone(res, chatbot=chatbot)
    history = gpt_response_collection
    chatbot.append((f"{fp}完成了吗?", res))
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@@ -105,7 +106,7 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch


@CatchException
def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
@@ -116,7 +117,7 @@ def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom
    try:
        import tiktoken
    except:
        report_execption(chatbot, history,
        report_exception(chatbot, history,
                         a=f"解析项目: {txt}",
                         b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@@ -127,12 +128,12 @@ def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
    if len(file_manifest) == 0:
        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
    yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en->zh')
@@ -142,7 +143,7 @@ def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom


@CatchException
def Latex中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
def Latex中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
@@ -153,7 +154,7 @@ def Latex中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom
    try:
        import tiktoken
    except:
        report_execption(chatbot, history,
        report_exception(chatbot, history,
                         a=f"解析项目: {txt}",
                         b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@@ -164,12 +165,12 @@ def Latex中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
    if len(file_manifest) == 0:
        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
    yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='zh->en')
@@ -1,300 +0,0 @@
from toolbox import update_ui, trimmed_format_exc, get_conf, objdump, objload, promote_file_to_downloadzone
from toolbox import CatchException, report_execption, update_ui_lastest_msg, zip_result, gen_time_str
from functools import partial
import glob, os, requests, time
pj = os.path.join
ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/")

# =================================== 工具函数 ===============================================
专业词汇声明 = 'If the term "agent" is used in this section, it should be translated to "智能体". '
def switch_prompt(pfg, mode, more_requirement):
    """
    Generate prompts and system prompts based on the mode for proofreading or translating.
    Args:
    - pfg: Proofreader or Translator instance.
    - mode: A string specifying the mode, either 'proofread' or 'translate_zh'.

    Returns:
    - inputs_array: A list of strings containing prompts for users to respond to.
    - sys_prompt_array: A list of strings containing prompts for system prompts.
    """
    n_split = len(pfg.sp_file_contents)
    if mode == 'proofread_en':
        inputs_array = [r"Below is a section from an academic paper, proofread this section." +
                        r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " + more_requirement +
                        r"Answer me only with the revised text:" +
                        f"\n\n{frag}" for frag in pfg.sp_file_contents]
        sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)]
    elif mode == 'translate_zh':
        inputs_array = [r"Below is a section from an English academic paper, translate it into Chinese. " + more_requirement +
                        r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " +
                        r"Answer me only with the translated text:" +
                        f"\n\n{frag}" for frag in pfg.sp_file_contents]
        sys_prompt_array = ["You are a professional translator." for _ in range(n_split)]
    else:
        assert False, "未知指令"
    return inputs_array, sys_prompt_array

def desend_to_extracted_folder_if_exist(project_folder):
    """
    Descend into the extracted folder if it exists, otherwise return the original folder.

    Args:
    - project_folder: A string specifying the folder path.

    Returns:
    - A string specifying the path to the extracted folder, or the original folder if there is no extracted folder.
    """
    maybe_dir = [f for f in glob.glob(f'{project_folder}/*') if os.path.isdir(f)]
    if len(maybe_dir) == 0: return project_folder
    if maybe_dir[0].endswith('.extract'): return maybe_dir[0]
    return project_folder

def move_project(project_folder, arxiv_id=None):
    """
    Create a new work folder and copy the project folder to it.

    Args:
    - project_folder: A string specifying the folder path of the project.

    Returns:
    - A string specifying the path to the new work folder.
    """
    import shutil, time
    time.sleep(2)   # avoid time string conflict
    if arxiv_id is not None:
        new_workfolder = pj(ARXIV_CACHE_DIR, arxiv_id, 'workfolder')
    else:
        new_workfolder = f'gpt_log/{gen_time_str()}'
    try:
        shutil.rmtree(new_workfolder)
    except:
        pass

    # align subfolder if there is a folder wrapper
    items = glob.glob(pj(project_folder,'*'))
    if len(glob.glob(pj(project_folder,'*.tex'))) == 0 and len(items) == 1:
        if os.path.isdir(items[0]): project_folder = items[0]

    shutil.copytree(src=project_folder, dst=new_workfolder)
    return new_workfolder

def arxiv_download(chatbot, history, txt):
    def check_cached_translation_pdf(arxiv_id):
        translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'translation')
        if not os.path.exists(translation_dir):
            os.makedirs(translation_dir)
        target_file = pj(translation_dir, 'translate_zh.pdf')
        if os.path.exists(target_file):
            promote_file_to_downloadzone(target_file, rename_file=None, chatbot=chatbot)
            return target_file
        return False
    def is_float(s):
        try:
            float(s)
            return True
        except ValueError:
            return False
    if ('.' in txt) and ('/' not in txt) and is_float(txt): # is arxiv ID
        txt = 'https://arxiv.org/abs/' + txt.strip()
    if ('.' in txt) and ('/' not in txt) and is_float(txt[:10]): # is arxiv ID
        txt = 'https://arxiv.org/abs/' + txt[:10]
    if not txt.startswith('https://arxiv.org'):
        return txt, None

    # <-------------- inspect format ------------->
    chatbot.append([f"检测到arxiv文档连接", '尝试下载 ...'])
    yield from update_ui(chatbot=chatbot, history=history)
    time.sleep(1) # 刷新界面

    url_ = txt   # https://arxiv.org/abs/1707.06690
    if not txt.startswith('https://arxiv.org/abs/'):
        msg = f"解析arxiv网址失败, 期望格式例如: https://arxiv.org/abs/1707.06690。实际得到格式: {url_}"
        yield from update_ui_lastest_msg(msg, chatbot=chatbot, history=history) # 刷新界面
        return msg, None
    # <-------------- set format ------------->
    arxiv_id = url_.split('/abs/')[-1]
    if 'v' in arxiv_id: arxiv_id = arxiv_id[:10]
    cached_translation_pdf = check_cached_translation_pdf(arxiv_id)
    if cached_translation_pdf: return cached_translation_pdf, arxiv_id

    url_tar = url_.replace('/abs/', '/e-print/')
    translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'e-print')
    extract_dst = pj(ARXIV_CACHE_DIR, arxiv_id, 'extract')
    os.makedirs(translation_dir, exist_ok=True)

    # <-------------- download arxiv source file ------------->
    dst = pj(translation_dir, arxiv_id+'.tar')
    if os.path.exists(dst):
        yield from update_ui_lastest_msg("调用缓存", chatbot=chatbot, history=history) # 刷新界面
    else:
        yield from update_ui_lastest_msg("开始下载", chatbot=chatbot, history=history) # 刷新界面
        proxies, = get_conf('proxies')
        r = requests.get(url_tar, proxies=proxies)
        with open(dst, 'wb+') as f:
            f.write(r.content)
    # <-------------- extract file ------------->
    yield from update_ui_lastest_msg("下载完成", chatbot=chatbot, history=history) # 刷新界面
    from toolbox import extract_archive
    extract_archive(file_path=dst, dest_dir=extract_dst)
    return extract_dst, arxiv_id
# ========================================= 插件主程序1 =====================================================


@CatchException
def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # <-------------- information about this plugin ------------->
    chatbot.append([ "函数插件功能?",
        "对整个Latex项目进行纠错, 用latex编译为PDF对修正处做高亮。函数插件贡献者: Binary-Husky。注意事项: 目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。仅在Windows系统进行了测试,其他操作系统表现未知。"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

    # <-------------- more requirements ------------->
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    more_req = plugin_kwargs.get("advanced_arg", "")
    _switch_prompt_ = partial(switch_prompt, more_requirement=more_req)

    # <-------------- check deps ------------->
    try:
        import glob, os, time, subprocess
        subprocess.Popen(['pdflatex', '-version'])
        from .latex_utils import Latex精细分解与转化, 编译Latex
    except Exception as e:
        chatbot.append([ f"解析项目: {txt}",
            f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return


    # <-------------- clear history and read input ------------->
    history = []
    if os.path.exists(txt):
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
    if len(file_manifest) == 0:
        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return


    # <-------------- if is a zip/tar file ------------->
    project_folder = desend_to_extracted_folder_if_exist(project_folder)


    # <-------------- move latex project away from temp folder ------------->
    project_folder = move_project(project_folder, arxiv_id=None)


    # <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
    if not os.path.exists(project_folder + '/merge_proofread_en.tex'):
        yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
                                      chatbot, history, system_prompt, mode='proofread_en', switch_prompt=_switch_prompt_)


    # <-------------- compile PDF ------------->
    success = yield from 编译Latex(chatbot, history, main_file_original='merge', main_file_modified='merge_proofread_en',
                                   work_folder_original=project_folder, work_folder_modified=project_folder, work_folder=project_folder)


    # <-------------- zip PDF ------------->
    zip_res = zip_result(project_folder)
    if success:
        chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
    else:
        chatbot.append((f"失败了", '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 也是可读的, 您可以到Github Issue区, 用该压缩包+对话历史存档进行反馈 ...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)

    # <-------------- we are done ------------->
    return success


# ========================================= 插件主程序2 =====================================================

@CatchException
def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # <-------------- information about this plugin ------------->
    chatbot.append([
        "函数插件功能?",
        "对整个Latex项目进行翻译, 生成中文PDF。函数插件贡献者: Binary-Husky。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

    # <-------------- more requirements ------------->
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    more_req = plugin_kwargs.get("advanced_arg", "")
    _switch_prompt_ = partial(switch_prompt, more_requirement=more_req)

    # <-------------- check deps ------------->
    try:
        import glob, os, time, subprocess
        subprocess.Popen(['pdflatex', '-version'])
        from .latex_utils import Latex精细分解与转化, 编译Latex
    except Exception as e:
        chatbot.append([ f"解析项目: {txt}",
            f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return


    # <-------------- clear history and read input ------------->
    history = []
    txt, arxiv_id = yield from arxiv_download(chatbot, history, txt)
    if txt.endswith('.pdf'):
        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"发现已经存在翻译好的PDF文档")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return


    if os.path.exists(txt):
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
    if len(file_manifest) == 0:
        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return


    # <-------------- if is a zip/tar file ------------->
    project_folder = desend_to_extracted_folder_if_exist(project_folder)


    # <-------------- move latex project away from temp folder ------------->
    project_folder = move_project(project_folder, arxiv_id)


    # <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
    if not os.path.exists(project_folder + '/merge_translate_zh.tex'):
        yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
                                      chatbot, history, system_prompt, mode='translate_zh', switch_prompt=_switch_prompt_)


    # <-------------- compile PDF ------------->
    success = yield from 编译Latex(chatbot, history, main_file_original='merge', main_file_modified='merge_translate_zh', mode='translate_zh',
                                   work_folder_original=project_folder, work_folder_modified=project_folder, work_folder=project_folder)

    # <-------------- zip PDF ------------->
    zip_res = zip_result(project_folder)
    if success:
        chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
    else:
        chatbot.append((f"失败了", '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 也是可读的, 您可以到Github Issue区, 用该压缩包+对话历史存档进行反馈 ...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)


    # <-------------- we are done ------------->
    return success
@@ -1,5 +1,8 @@
 import glob, shutil, os, re
+from loguru import logger
 from toolbox import update_ui, trimmed_format_exc, gen_time_str
-from toolbox import CatchException, report_execption, write_results_to_file
+from toolbox import CatchException, report_exception, get_log_folder
+from toolbox import write_history_to_file, promote_file_to_downloadzone
 fast_debug = False

 class PaperFileGroup():
@@ -11,12 +14,12 @@ class PaperFileGroup():
         self.sp_file_tag = []

         # count_token
-        from request_llm.bridge_all import model_info
+        from request_llms.bridge_all import model_info
         enc = model_info["gpt-3.5-turbo"]['tokenizer']
         def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
         self.get_token_num = get_token_num

-    def run_file_split(self, max_token_limit=1900):
+    def run_file_split(self, max_token_limit=2048):
         """
         将长文本分离开来
         """
@@ -26,13 +29,13 @@ class PaperFileGroup():
                 self.sp_file_index.append(index)
                 self.sp_file_tag.append(self.file_paths[index])
             else:
-                from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
-                segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
+                from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
+                segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
                 for j, segment in enumerate(segments):
                     self.sp_file_contents.append(segment)
                     self.sp_file_index.append(index)
                     self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.md")
-        print('Segmentation: done')
+        logger.info('Segmentation: done')

     def merge_result(self):
         self.file_result = ["" for _ in range(len(self.file_paths))]
@@ -42,16 +45,16 @@ class PaperFileGroup():
     def write_result(self, language):
         manifest = []
         for path, res in zip(self.file_paths, self.file_result):
-            with open(path + f'.{gen_time_str()}.{language}.md', 'w', encoding='utf8') as f:
-                manifest.append(path + f'.{gen_time_str()}.{language}.md')
+            dst_file = os.path.join(get_log_folder(), f'{gen_time_str()}.md')
+            with open(dst_file, 'w', encoding='utf8') as f:
+                manifest.append(dst_file)
                 f.write(res)
         return manifest

 def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'):
     import time, os, re
-    from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
+    from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency

     # <-------- 读取Markdown文件,删除其中的所有注释 ---------->
     pfg = PaperFileGroup()

     for index, fp in enumerate(file_manifest):
@@ -61,26 +64,26 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
         pfg.file_paths.append(fp)
         pfg.file_contents.append(file_content)

     # <-------- 拆分过长的Markdown文件 ---------->
-    pfg.run_file_split(max_token_limit=1500)
+    pfg.run_file_split(max_token_limit=1024)
     n_split = len(pfg.sp_file_contents)

     # <-------- 多线程翻译开始 ---------->
     if language == 'en->zh':
-        inputs_array = ["This is a Markdown file, translate it into Chinese, do not modify any existing Markdown commands:" +
+        inputs_array = ["This is a Markdown file, translate it into Chinese, do NOT modify any existing Markdown commands, do NOT use code wrapper (```), ONLY answer me with translated results:" +
                         f"\n\n{frag}" for frag in pfg.sp_file_contents]
         inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
-        sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)]
+        sys_prompt_array = ["You are a professional academic paper translator." + plugin_kwargs.get("additional_prompt", "") for _ in range(n_split)]
     elif language == 'zh->en':
-        inputs_array = [f"This is a Markdown file, translate it into English, do not modify any existing Markdown commands:" +
+        inputs_array = [f"This is a Markdown file, translate it into English, do NOT modify any existing Markdown commands, do NOT use code wrapper (```), ONLY answer me with translated results:" +
                         f"\n\n{frag}" for frag in pfg.sp_file_contents]
         inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
-        sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)]
+        sys_prompt_array = ["You are a professional academic paper translator." + plugin_kwargs.get("additional_prompt", "") for _ in range(n_split)]
     else:
-        inputs_array = [f"This is a Markdown file, translate it into {language}, do not modify any existing Markdown commands, only answer me with translated results:" +
+        inputs_array = [f"This is a Markdown file, translate it into {language}, do NOT modify any existing Markdown commands, do NOT use code wrapper (```), ONLY answer me with translated results:" +
                         f"\n\n{frag}" for frag in pfg.sp_file_contents]
         inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag]
-        sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)]
+        sys_prompt_array = ["You are a professional academic paper translator." + plugin_kwargs.get("additional_prompt", "") for _ in range(n_split)]

     gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
         inputs_array=inputs_array,
@@ -97,33 +100,48 @@ def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, ch
         for i_say, gpt_say in zip(gpt_response_collection[0::2], gpt_response_collection[1::2]):
             pfg.sp_file_result.append(gpt_say)
         pfg.merge_result()
-        pfg.write_result(language)
+        output_file_arr = pfg.write_result(language)
+        for output_file in output_file_arr:
+            promote_file_to_downloadzone(output_file, chatbot=chatbot)
+            if 'markdown_expected_output_path' in plugin_kwargs:
+                expected_f_name = plugin_kwargs['markdown_expected_output_path']
+                shutil.copyfile(output_file, expected_f_name)
     except:
-        print(trimmed_format_exc())
+        logger.error(trimmed_format_exc())

     # <-------- 整理结果,退出 ---------->
-    create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md"
-    res = write_results_to_file(gpt_response_collection, file_name=create_report_file_name)
+    create_report_file_name = gen_time_str() + f"-chatgpt.md"
+    res = write_history_to_file(gpt_response_collection, file_basename=create_report_file_name)
+    promote_file_to_downloadzone(res, chatbot=chatbot)
     history = gpt_response_collection
     chatbot.append((f"{fp}完成了吗?", res))
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面


-def get_files_from_everything(txt):
-    import glob, os
+def get_files_from_everything(txt, preference=''):
     if txt == "": return False, None, None
     success = True
     if txt.startswith('http'):
-        # 网络的远程文件
-        txt = txt.replace("https://github.com/", "https://raw.githubusercontent.com/")
-        txt = txt.replace("/blob/", "/")
         import requests
         from toolbox import get_conf
-        proxies, = get_conf('proxies')
+        proxies = get_conf('proxies')
+        # 网络的远程文件
+        if preference == 'Github':
+            logger.info('正在从github下载资源 ...')
+            if not txt.endswith('.md'):
+                # Make a request to the GitHub API to retrieve the repository information
+                url = txt.replace("https://github.com/", "https://api.github.com/repos/") + '/readme'
+                response = requests.get(url, proxies=proxies)
+                txt = response.json()['download_url']
+        else:
+            txt = txt.replace("https://github.com/", "https://raw.githubusercontent.com/")
+            txt = txt.replace("/blob/", "/")

         r = requests.get(txt, proxies=proxies)
-        with open('./gpt_log/temp.md', 'wb+') as f: f.write(r.content)
-        project_folder = './gpt_log/'
-        file_manifest = ['./gpt_log/temp.md']
+        download_local = f'{get_log_folder(plugin_name="批量Markdown翻译")}/raw-readme-{gen_time_str()}.md'
+        project_folder = f'{get_log_folder(plugin_name="批量Markdown翻译")}'
+        with open(download_local, 'wb+') as f: f.write(r.content)
+        file_manifest = [download_local]
     elif txt.endswith('.md'):
         # 直接给定文件
         file_manifest = [txt]
@@ -133,13 +151,15 @@ def get_files_from_everything(txt):
         project_folder = txt
         file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.md', recursive=True)]
     else:
         project_folder = None
         file_manifest = []
         success = False

     return success, file_manifest, project_folder


 @CatchException
-def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     # 基本信息:功能、贡献者
     chatbot.append([
         "函数插件功能?",
@@ -149,26 +169,25 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
     # 尝试导入依赖,如果缺少依赖,则给出安装建议
     try:
         import tiktoken
-        import glob, os
     except:
-        report_execption(chatbot, history,
+        report_exception(chatbot, history,
                          a=f"解析项目: {txt}",
                          b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     history = [] # 清空历史,以免输入溢出

-    success, file_manifest, project_folder = get_files_from_everything(txt)
+    success, file_manifest, project_folder = get_files_from_everything(txt, preference="Github")

     if not success:
         # 什么都没有
         if txt == "": txt = '空空如也的输入栏'
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
+        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return

     if len(file_manifest) == 0:
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}")
+        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return

@@ -179,7 +198,7 @@ def Markdown英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p


 @CatchException
-def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     # 基本信息:功能、贡献者
     chatbot.append([
         "函数插件功能?",
@@ -189,9 +208,8 @@ def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
     # 尝试导入依赖,如果缺少依赖,则给出安装建议
     try:
         import tiktoken
-        import glob, os
     except:
-        report_execption(chatbot, history,
+        report_exception(chatbot, history,
                          a=f"解析项目: {txt}",
                          b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@@ -201,18 +219,18 @@ def Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_p
     if not success:
         # 什么都没有
         if txt == "": txt = '空空如也的输入栏'
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
+        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     if len(file_manifest) == 0:
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}")
+        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='zh->en')


 @CatchException
-def Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     # 基本信息:功能、贡献者
     chatbot.append([
         "函数插件功能?",
@@ -222,9 +240,8 @@ def Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history,
     # 尝试导入依赖,如果缺少依赖,则给出安装建议
     try:
         import tiktoken
-        import glob, os
     except:
-        report_execption(chatbot, history,
+        report_exception(chatbot, history,
                          a=f"解析项目: {txt}",
                          b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
@@ -234,14 +251,14 @@ def Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history,
     if not success:
         # 什么都没有
         if txt == "": txt = '空空如也的输入栏'
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
+        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     if len(file_manifest) == 0:
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}")
+        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.md文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return

     if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
     language = plugin_kwargs.get("advanced_arg", 'Chinese')
     yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language=language)
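Aside (not part of the diff): a minimal, self-contained sketch of the `advanced_arg` handling above, assuming `plugin_kwargs` is a plain dict. Popping the empty string first is what lets `.get()` fall back to the default language.

def pick_language(plugin_kwargs: dict) -> str:
    # An empty advanced_arg is removed so that .get() returns the default below.
    if plugin_kwargs.get("advanced_arg") == "":
        plugin_kwargs.pop("advanced_arg")
    return plugin_kwargs.get("advanced_arg", "Chinese")

assert pick_language({"advanced_arg": ""}) == "Chinese"
assert pick_language({"advanced_arg": "Japanese"}) == "Japanese"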
crazy_functions/PDF_Translate.py (new file, 83 lines)
@@ -0,0 +1,83 @@
from toolbox import CatchException, check_packages, get_conf
from toolbox import update_ui, update_ui_latest_msg, disable_auto_promotion
from toolbox import trimmed_format_exc_markdown
from crazy_functions.crazy_utils import get_files_from_everything
from crazy_functions.pdf_fns.parse_pdf import get_avail_grobid_url
from crazy_functions.pdf_fns.parse_pdf_via_doc2x import 解析PDF_基于DOC2X
from crazy_functions.pdf_fns.parse_pdf_legacy import 解析PDF_简单拆解
from crazy_functions.pdf_fns.parse_pdf_grobid import 解析PDF_基于GROBID
from shared_utils.colorful import *

@CatchException
def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):

    disable_auto_promotion(chatbot)
    # 基本信息:功能、贡献者
    chatbot.append([None, "插件功能:批量翻译PDF文档。函数插件贡献者: Binary-Husky"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

    # 尝试导入依赖,如果缺少依赖,则给出安装建议
    try:
        check_packages(["fitz", "tiktoken", "scipdf"])
    except:
        chatbot.append([None, f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf tiktoken scipdf_parser```。"])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    # 清空历史,以免输入溢出
    history = []
    success, file_manifest, project_folder = get_files_from_everything(txt, type='.pdf')

    # 检测输入参数,如没有给定输入参数,直接退出
    if (not success) and txt == "": txt = '空空如也的输入栏。提示:请先上传文件(把PDF文件拖入对话)。'

    # 如果没找到任何文件
    if len(file_manifest) == 0:
        chatbot.append([None, f"找不到任何.pdf拓展名的文件: {txt}"])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    # 开始正式执行任务
    method = plugin_kwargs.get("pdf_parse_method", None)
    if method == "DOC2X":
        # ------- 第一种方法,效果最好,但是需要DOC2X服务 -------
        DOC2X_API_KEY = get_conf("DOC2X_API_KEY")
        if len(DOC2X_API_KEY) != 0:
            try:
                yield from 解析PDF_基于DOC2X(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, DOC2X_API_KEY, user_request)
                return
            except:
                chatbot.append([None, f"DOC2X服务不可用,请检查报错详细。{trimmed_format_exc_markdown()}"])
                yield from update_ui(chatbot=chatbot, history=history)

    if method == "GROBID":
        # ------- 第二种方法,效果次优 -------
        grobid_url = get_avail_grobid_url()
        if grobid_url is not None:
            yield from 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url)
            return

    if method == "Classic":
        # ------- 第三种方法,早期代码,效果不理想 -------
        yield from update_ui_latest_msg("GROBID服务不可用,请检查config中的GROBID_URL。作为替代,现在将执行效果稍差的旧版代码。", chatbot, history, delay=3)
        yield from 解析PDF_简单拆解(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
        return

    if method is None:
        # ------- 以上三种方法都试一遍 -------
        DOC2X_API_KEY = get_conf("DOC2X_API_KEY")
        if len(DOC2X_API_KEY) != 0:
            try:
                yield from 解析PDF_基于DOC2X(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, DOC2X_API_KEY, user_request)
                return
            except:
                chatbot.append([None, f"DOC2X服务不可用,正在尝试GROBID。{trimmed_format_exc_markdown()}"])
                yield from update_ui(chatbot=chatbot, history=history)
        grobid_url = get_avail_grobid_url()
        if grobid_url is not None:
            yield from 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url)
            return
        yield from update_ui_latest_msg("GROBID服务不可用,请检查config中的GROBID_URL。作为替代,现在将执行效果稍差的旧版代码。", chatbot, history, delay=3)
        yield from 解析PDF_简单拆解(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
        return
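Aside (not part of the diff): the method dispatch in 批量翻译PDF文档 above is a best-first fallback chain (DOC2X, then GROBID, then the legacy parser). The same pattern in isolation, with hypothetical parser callables standing in for the real ones:

def translate_with_fallback(parsers, pdf_path):
    # Try parsers best-first; the first success wins, failures fall through.
    last_error = None
    for parser in parsers:
        try:
            return parser(pdf_path)
        except Exception as e:
            last_error = e  # remember why this parser failed, then try the next one
    raise RuntimeError(f"all parsers failed: {last_error}")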
crazy_functions/PDF_Translate_Wrap.py (new file, 33 lines)
@@ -0,0 +1,33 @@
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty
from .PDF_Translate import 批量翻译PDF文档


class PDF_Tran(GptAcademicPluginTemplate):
    def __init__(self):
        """
        请注意`execute`会执行在不同的线程中,因此您在定义和使用类变量时,应当慎之又慎!
        """
        pass

    def define_arg_selection_menu(self):
        """
        定义插件的二级选项菜单
        """
        gui_definition = {
            "main_input":
                ArgProperty(title="PDF文件路径", description="未指定路径,请上传文件后,再点击该插件", default_value="", type="string").model_dump_json(), # 主输入,自动从输入框同步
            "additional_prompt":
                ArgProperty(title="额外提示词", description="例如:对专有名词、翻译语气等方面的要求", default_value="", type="string").model_dump_json(), # 高级参数输入区,自动同步
            "pdf_parse_method":
                ArgProperty(title="PDF解析方法", options=["DOC2X", "GROBID", "Classic"], description="无", default_value="GROBID", type="dropdown").model_dump_json(),
        }
        return gui_definition

    def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        """
        执行插件
        """
        main_input = plugin_kwargs["main_input"]
        additional_prompt = plugin_kwargs["additional_prompt"]
        pdf_parse_method = plugin_kwargs["pdf_parse_method"]
        yield from 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
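Aside (not part of the diff): `define_arg_selection_menu` returns a dict mapping argument names to JSON-serialized `ArgProperty` objects. Roughly how a front-end might consume it; this driver code is hypothetical, not from the repository:

import json

menu = PDF_Tran().define_arg_selection_menu()
for arg_name, prop_json in menu.items():
    prop = json.loads(prop_json)  # each value is an ArgProperty dumped to JSON
    print(arg_name, prop["title"], prop.get("default_value"))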
crazy_functions/Rag_Interface.py (new file, 153 lines)
@@ -0,0 +1,153 @@
import os, glob
from typing import List

from shared_utils.fastapi_server import validate_path_safety

from toolbox import report_exception
from toolbox import CatchException, update_ui, get_conf, get_log_folder, update_ui_latest_msg
from shared_utils.fastapi_server import validate_path_safety
from crazy_functions.crazy_utils import input_clipping
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive

RAG_WORKER_REGISTER = {}
MAX_HISTORY_ROUND = 5
MAX_CONTEXT_TOKEN_LIMIT = 4096
REMEMBER_PREVIEW = 1000

@CatchException
def handle_document_upload(files: List[str], llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request, rag_worker):
    """
    Handles document uploads by extracting text and adding it to the vector store.
    """
    from llama_index.core import Document
    from crazy_functions.rag_fns.rag_file_support import extract_text, supports_format
    user_name = chatbot.get_user()
    checkpoint_dir = get_log_folder(user_name, plugin_name='experimental_rag')

    for file_path in files:
        try:
            validate_path_safety(file_path, user_name)
            text = extract_text(file_path)
            if text is None:
                chatbot.append(
                    [f"上传文件: {os.path.basename(file_path)}", f"文件解析失败,无法提取文本内容,请更换文件。失败原因可能为:1.文档格式过于复杂;2. 不支持的文件格式,支持的文件格式后缀有:" + ", ".join(supports_format)])
            else:
                chatbot.append(
                    [f"上传文件: {os.path.basename(file_path)}", f"上传文件前50个字符为:{text[:50]}。"])
                document = Document(text=text, metadata={"source": file_path})
                rag_worker.add_documents_to_vector_store([document])
                chatbot.append([f"上传文件: {os.path.basename(file_path)}", "文件已成功添加到知识库。"])
        except Exception as e:
            report_exception(chatbot, history, a=f"处理文件: {file_path}", b=str(e))

    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面


# Main Q&A function with document upload support
@CatchException
def Rag问答(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):

    # import vector store lib
    VECTOR_STORE_TYPE = "Milvus"
    if VECTOR_STORE_TYPE == "Milvus":
        try:
            from crazy_functions.rag_fns.milvus_worker import MilvusRagWorker as LlamaIndexRagWorker
        except:
            VECTOR_STORE_TYPE = "Simple"
    if VECTOR_STORE_TYPE == "Simple":
        from crazy_functions.rag_fns.llama_index_worker import LlamaIndexRagWorker

    # 1. we retrieve rag worker from global context
    user_name = chatbot.get_user()
    checkpoint_dir = get_log_folder(user_name, plugin_name='experimental_rag')

    if user_name in RAG_WORKER_REGISTER:
        rag_worker = RAG_WORKER_REGISTER[user_name]
    else:
        rag_worker = RAG_WORKER_REGISTER[user_name] = LlamaIndexRagWorker(
            user_name,
            llm_kwargs,
            checkpoint_dir=checkpoint_dir,
            auto_load_checkpoint=True
        )

    current_context = f"{VECTOR_STORE_TYPE} @ {checkpoint_dir}"
    tip = "提示:输入“清空向量数据库”可以清空RAG向量数据库"

    # 2. Handle special commands
    if os.path.exists(txt) and os.path.isdir(txt):
        project_folder = txt
        validate_path_safety(project_folder, chatbot.get_user())
        # Extract file paths from the user input
        # Assuming the user inputs file paths separated by commas after the command
        file_paths = [f for f in glob.glob(f'{project_folder}/**/*', recursive=True)]
        chatbot.append([txt, f'正在处理上传的文档 ({current_context}) ...'])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

        yield from handle_document_upload(file_paths, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request, rag_worker)
        return

    elif txt == "清空向量数据库":
        chatbot.append([txt, f'正在清空 ({current_context}) ...'])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        rag_worker.purge_vector_store()
        yield from update_ui_latest_msg('已清空', chatbot, history, delay=0) # 刷新界面
        return

    # 3. Normal Q&A processing
    chatbot.append([txt, f'正在召回知识 ({current_context}) ...'])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

    # 4. Clip history to reduce token consumption
    txt_origin = txt

    if len(history) > MAX_HISTORY_ROUND * 2:
        history = history[-(MAX_HISTORY_ROUND * 2):]
    txt_clip, history, flags = input_clipping(txt, history, max_token_limit=MAX_CONTEXT_TOKEN_LIMIT, return_clip_flags=True)
    input_is_clipped_flag = (flags["original_input_len"] != flags["clipped_input_len"])

    # 5. If input is clipped, add input to vector store before retrieve
    if input_is_clipped_flag:
        yield from update_ui_latest_msg('检测到长输入, 正在向量化 ...', chatbot, history, delay=0) # 刷新界面
        # Save input to vector store
        rag_worker.add_text_to_vector_store(txt_origin)
        yield from update_ui_latest_msg('向量化完成 ...', chatbot, history, delay=0) # 刷新界面

        if len(txt_origin) > REMEMBER_PREVIEW:
            HALF = REMEMBER_PREVIEW // 2
            i_say_to_remember = txt[:HALF] + f" ...\n...(省略{len(txt_origin)-REMEMBER_PREVIEW}字)...\n... " + txt[-HALF:]
            if (flags["original_input_len"] - flags["clipped_input_len"]) > HALF:
                txt_clip = txt_clip + f" ...\n...(省略{len(txt_origin)-len(txt_clip)-HALF}字)...\n... " + txt[-HALF:]
            else:
                i_say_to_remember = i_say = txt_clip
        else:
            i_say_to_remember = i_say = txt_clip
    else:
        i_say_to_remember = i_say = txt_clip

    # 6. Search vector store and build prompts
    nodes = rag_worker.retrieve_from_store_with_query(i_say)
    prompt = rag_worker.build_prompt(query=i_say, nodes=nodes)

    # 7. Query language model
    if len(chatbot) != 0:
        chatbot.pop(-1) # Pop temp chat, because we are going to add them again inside `request_gpt_model_in_new_thread_with_ui_alive`

    model_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=prompt,
        inputs_show_user=i_say,
        llm_kwargs=llm_kwargs,
        chatbot=chatbot,
        history=history,
        sys_prompt=system_prompt,
        retry_times_at_unknown_error=0
    )

    # 8. Remember Q&A
    yield from update_ui_latest_msg(
        model_say + '</br></br>' + f'对话记忆中, 请稍等 ({current_context}) ...',
        chatbot, history, delay=0.5
    )
    rag_worker.remember_qa(i_say_to_remember, model_say)
    history.extend([i_say, model_say])

    # 9. Final UI Update
    yield from update_ui_latest_msg(model_say, chatbot, history, delay=0, msg=tip)
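Aside (not part of the diff): `RAG_WORKER_REGISTER` above acts as a per-user lazy singleton cache, so each user's vector-store checkpoint is loaded once and then reused across requests. The bare pattern, as a sketch:

_registry = {}

def get_worker(user_name, factory):
    # Construct the worker on first use, then reuse the same instance for this user.
    if user_name not in _registry:
        _registry[user_name] = factory(user_name)
    return _registry[user_name]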
crazy_functions/Social_Helper.py (new file, 167 lines)
@@ -0,0 +1,167 @@
import pickle, os, random
from toolbox import CatchException, update_ui, get_conf, get_log_folder, update_ui_latest_msg
from crazy_functions.crazy_utils import input_clipping
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.json_fns.select_tool import structure_output, select_tool
from pydantic import BaseModel, Field
from loguru import logger
from typing import List


SOCIAL_NETWORK_WORKER_REGISTER = {}

class SocialNetwork():
    def __init__(self):
        self.people = []

class SaveAndLoad():
    def __init__(self, user_name, llm_kwargs, auto_load_checkpoint=True, checkpoint_dir=None) -> None:
        self.user_name = user_name
        self.checkpoint_dir = checkpoint_dir
        if auto_load_checkpoint:
            self.social_network = self.load_from_checkpoint(checkpoint_dir)
        else:
            self.social_network = SocialNetwork()

    def does_checkpoint_exist(self, checkpoint_dir=None):
        import os, glob
        if checkpoint_dir is None: checkpoint_dir = self.checkpoint_dir
        if not os.path.exists(checkpoint_dir): return False
        if len(glob.glob(os.path.join(checkpoint_dir, "social_network.pkl"))) == 0: return False
        return True

    def save_to_checkpoint(self, checkpoint_dir=None):
        if checkpoint_dir is None: checkpoint_dir = self.checkpoint_dir
        with open(os.path.join(checkpoint_dir, 'social_network.pkl'), "wb+") as f:
            pickle.dump(self.social_network, f)
        return

    def load_from_checkpoint(self, checkpoint_dir=None):
        if checkpoint_dir is None: checkpoint_dir = self.checkpoint_dir
        if self.does_checkpoint_exist(checkpoint_dir=checkpoint_dir):
            with open(os.path.join(checkpoint_dir, 'social_network.pkl'), "rb") as f:
                social_network = pickle.load(f)
                return social_network
        else:
            return SocialNetwork()


class Friend(BaseModel):
    friend_name: str = Field(description="name of a friend")
    friend_description: str = Field(description="description of a friend (everything about this friend)")
    friend_relationship: str = Field(description="The relationship with a friend (e.g. friend, family, colleague)")

class FriendList(BaseModel):
    friends_list: List[Friend] = Field(description="The list of friends")


class SocialNetworkWorker(SaveAndLoad):
    def ai_socail_advice(self, prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, run_gpt_fn, intention_type):
        pass

    def ai_remove_friend(self, prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, run_gpt_fn, intention_type):
        pass

    def ai_list_friends(self, prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, run_gpt_fn, intention_type):
        pass

    def ai_add_multi_friends(self, prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, run_gpt_fn, intention_type):
        friend, err_msg = structure_output(
            txt=prompt,
            prompt="根据提示, 解析多个联系人的身份信息\n\n",
            err_msg=f"不能理解该联系人",
            run_gpt_fn=run_gpt_fn,
            pydantic_cls=FriendList
        )
        if friend.friends_list:
            for f in friend.friends_list:
                self.add_friend(f)
            msg = f"成功添加{len(friend.friends_list)}个联系人: {str(friend.friends_list)}"
            yield from update_ui_latest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=0)


    def run(self, txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        prompt = txt
        run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
        self.tools_to_select = {
            "SocialAdvice":{
                "explain_to_llm": "如果用户希望获取社交指导,调用SocialAdvice生成一些社交建议",
                "callback": self.ai_socail_advice,
            },
            "AddFriends":{
                "explain_to_llm": "如果用户给出了联系人,调用AddMultiFriends把联系人添加到数据库",
                "callback": self.ai_add_multi_friends,
            },
            "RemoveFriend":{
                "explain_to_llm": "如果用户希望移除某个联系人,调用RemoveFriend",
                "callback": self.ai_remove_friend,
            },
            "ListFriends":{
                "explain_to_llm": "如果用户列举联系人,调用ListFriends",
                "callback": self.ai_list_friends,
            }
        }

        try:
            Explanation = '\n'.join([f'{k}: {v["explain_to_llm"]}' for k, v in self.tools_to_select.items()])
            class UserSociaIntention(BaseModel):
                intention_type: str = Field(
                    description=
                        f"The type of user intention. You must choose from {self.tools_to_select.keys()}.\n\n"
                        f"Explanation:\n{Explanation}",
                    default="SocialAdvice"
                )
            pydantic_cls_instance, err_msg = select_tool(
                prompt=txt,
                run_gpt_fn=run_gpt_fn,
                pydantic_cls=UserSociaIntention
            )
        except Exception as e:
            yield from update_ui_latest_msg(
                lastmsg=f"无法理解用户意图 {err_msg}",
                chatbot=chatbot,
                history=history,
                delay=0
            )
            return

        intention_type = pydantic_cls_instance.intention_type
        intention_callback = self.tools_to_select[pydantic_cls_instance.intention_type]['callback']
        yield from intention_callback(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, run_gpt_fn, intention_type)


    def add_friend(self, friend):
        # check whether the friend is already in the social network
        for f in self.social_network.people:
            if f.friend_name == friend.friend_name:
                f.friend_description = friend.friend_description
                f.friend_relationship = friend.friend_relationship
                logger.info(f"Repeated friend, update info: {friend}")
                return
        logger.info(f"Add a new friend: {friend}")
        self.social_network.people.append(friend)
        return


@CatchException
def I人助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):

    # 1. we retrieve worker from global context
    user_name = chatbot.get_user()
    checkpoint_dir = get_log_folder(user_name, plugin_name='experimental_rag')
    if user_name in SOCIAL_NETWORK_WORKER_REGISTER:
        social_network_worker = SOCIAL_NETWORK_WORKER_REGISTER[user_name]
    else:
        social_network_worker = SOCIAL_NETWORK_WORKER_REGISTER[user_name] = SocialNetworkWorker(
            user_name,
            llm_kwargs,
            checkpoint_dir=checkpoint_dir,
            auto_load_checkpoint=True
        )

    # 2. save
    yield from social_network_worker.run(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
    social_network_worker.save_to_checkpoint(checkpoint_dir)
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
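Aside (not part of the diff): the intent routing in `SocialNetworkWorker.run` above asks the LLM to fill a pydantic model whose field description enumerates the legal tool names, then dispatches on the parsed value. An offline sketch of the same idea, with no LLM call and hypothetical tool names:

from pydantic import BaseModel, Field

class UserIntention(BaseModel):
    intention_type: str = Field(default="SocialAdvice",
        description="one of: SocialAdvice, AddFriends, RemoveFriend, ListFriends")

def route(raw_json: str, callbacks: dict):
    intention = UserIntention.model_validate_json(raw_json)
    # Unknown intents fall back to the default declared on the model.
    callback = callbacks.get(intention.intention_type, callbacks["SocialAdvice"])
    return callback()

print(route('{"intention_type": "ListFriends"}', {
    "SocialAdvice": lambda: "advice",
    "ListFriends": lambda: "friends",
}))  # -> friends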
@@ -1,12 +1,13 @@
-from toolbox import update_ui
-from toolbox import CatchException, report_execption, write_results_to_file
-from .crazy_utils import input_clipping
+from toolbox import update_ui, promote_file_to_downloadzone
+from toolbox import CatchException, report_exception, write_history_to_file
+from shared_utils.fastapi_server import validate_path_safety
+from crazy_functions.crazy_utils import input_clipping

 def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
     import os, copy
-    from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
-    from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-    msg = '正常'
+    from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
+    from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive

     summary_batch_isolation = True
     inputs_array = []
     inputs_show_user_array = []
@@ -22,7 +23,7 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
             file_content = f.read()
         prefix = "接下来请你逐文件分析下面的工程" if index==0 else ""
         i_say = prefix + f'请对下面的程序文件做一个概述文件名是{os.path.relpath(fp, project_folder)},文件代码是 ```{file_content}```'
-        i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的程序文件做一个概述: {os.path.abspath(fp)}'
+        i_say_show_user = prefix + f'[{index+1}/{len(file_manifest)}] 请对下面的程序文件做一个概述: {fp}'
         # 装载请求内容
         inputs_array.append(i_say)
         inputs_show_user_array.append(i_say_show_user)
@@ -43,7 +44,8 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
     # 全部文件解析完成,结果写入文件,准备对工程源代码进行汇总分析
     report_part_1 = copy.deepcopy(gpt_response_collection)
     history_to_return = report_part_1
-    res = write_results_to_file(report_part_1)
+    res = write_history_to_file(report_part_1)
+    promote_file_to_downloadzone(res, chatbot=chatbot)
     chatbot.append(("完成?", "逐个文件分析已完成。" + res + "\n\n正在开始汇总。"))
     yield from update_ui(chatbot=chatbot, history=history_to_return) # 刷新界面

@@ -80,12 +82,13 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
             inputs=inputs, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
             history=this_iteration_history_feed, # 迭代之前的分析
             sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。" + sys_prompt_additional)

-        summary = "请用一句话概括这些文件的整体功能"
+        diagram_code = make_diagram(this_iteration_files, result, this_iteration_history_feed)
+        summary = "请用一句话概括这些文件的整体功能。\n\n" + diagram_code
         summary_result = yield from request_gpt_model_in_new_thread_with_ui_alive(
             inputs=summary,
             inputs_show_user=summary,
             llm_kwargs=llm_kwargs,
             chatbot=chatbot,
             history=[i_say, result], # 迭代之前的分析
             sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。" + sys_prompt_additional)
@@ -97,73 +100,97 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,

     ############################## <END> ##################################
     history_to_return.extend(report_part_2)
-    res = write_results_to_file(history_to_return)
+    res = write_history_to_file(history_to_return)
+    promote_file_to_downloadzone(res, chatbot=chatbot)
     chatbot.append(("完成了吗?", res))
     yield from update_ui(chatbot=chatbot, history=history_to_return) # 刷新界面

+def make_diagram(this_iteration_files, result, this_iteration_history_feed):
+    from crazy_functions.diagram_fns.file_tree import build_file_tree_mermaid_diagram
+    return build_file_tree_mermaid_diagram(this_iteration_history_feed[0::2], this_iteration_history_feed[1::2], "项目示意图")

 @CatchException
-def 解析项目本身(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析项目本身(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = [] # 清空历史,以免输入溢出
     import glob
-    file_manifest = [f for f in glob.glob('./*.py') if ('test_project' not in f) and ('gpt_log' not in f)] + \
-                    [f for f in glob.glob('./crazy_functions/*.py') if ('test_project' not in f) and ('gpt_log' not in f)]+ \
-                    [f for f in glob.glob('./request_llm/*.py') if ('test_project' not in f) and ('gpt_log' not in f)]
+    file_manifest = [f for f in glob.glob('./*.py')] + \
+                    [f for f in glob.glob('./*/*.py')]
     project_folder = './'
     if len(file_manifest) == 0:
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何python文件: {txt}")
+        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何python文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

 @CatchException
-def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = [] # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
+        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.py', recursive=True)]
     if len(file_manifest) == 0:
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何python文件: {txt}")
+        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何python文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)


 @CatchException
-def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Matlab项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = [] # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
+        report_exception(chatbot, history, a = f"解析Matlab项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
+    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.m', recursive=True)]
+    if len(file_manifest) == 0:
+        report_exception(chatbot, history, a = f"解析Matlab项目: {txt}", b = f"找不到任何`.m`源文件: {txt}")
+        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+        return
+    yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

+@CatchException
+def 解析一个C项目的头文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
+    history = [] # 清空历史,以免输入溢出
+    import glob, os
+    if os.path.exists(txt):
+        project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
+    else:
+        if txt == "": txt = '空空如也的输入栏'
+        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
+        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+        return
     file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.h', recursive=True)]  + \
                     [f for f in glob.glob(f'{project_folder}/**/*.hpp', recursive=True)] #+ \
                     # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
     if len(file_manifest) == 0:
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.h头文件: {txt}")
+        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.h头文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

 @CatchException
-def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = [] # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
+        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.h', recursive=True)]  + \
@@ -171,21 +198,22 @@ def 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system
                     [f for f in glob.glob(f'{project_folder}/**/*.hpp', recursive=True)] + \
                     [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
     if len(file_manifest) == 0:
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.h头文件: {txt}")
+        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.h头文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)


 @CatchException
-def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = [] # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
-        report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
+        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.java', recursive=True)] + \
@@ -193,21 +221,22 @@ def 解析一个Java项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys
                     [f for f in glob.glob(f'{project_folder}/**/*.xml', recursive=True)] + \
                     [f for f in glob.glob(f'{project_folder}/**/*.sh', recursive=True)]
     if len(file_manifest) == 0:
-        report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何java文件: {txt}")
+        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何java文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)


 @CatchException
-def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = [] # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
-        report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
+        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.ts', recursive=True)] + \
@@ -222,21 +251,22 @@ def 解析一个前端项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
                     [f for f in glob.glob(f'{project_folder}/**/*.css', recursive=True)] + \
                     [f for f in glob.glob(f'{project_folder}/**/*.jsx', recursive=True)]
     if len(file_manifest) == 0:
-        report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何前端相关文件: {txt}")
+        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何前端相关文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)


 @CatchException
-def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = [] # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
-        report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
+        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.go', recursive=True)] + \
@@ -244,40 +274,42 @@ def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
                     [f for f in glob.glob(f'{project_folder}/**/go.sum', recursive=True)] + \
                     [f for f in glob.glob(f'{project_folder}/**/go.work', recursive=True)]
     if len(file_manifest) == 0:
-        report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何golang文件: {txt}")
+        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何golang文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

 @CatchException
-def 解析一个Rust项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Rust项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = [] # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
-        report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
+        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.rs', recursive=True)] + \
                     [f for f in glob.glob(f'{project_folder}/**/*.toml', recursive=True)] + \
                     [f for f in glob.glob(f'{project_folder}/**/*.lock', recursive=True)]
     if len(file_manifest) == 0:
-        report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何golang文件: {txt}")
+        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何golang文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

 @CatchException
-def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = [] # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
+        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.lua', recursive=True)] + \
@@ -285,34 +317,35 @@ def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
                     [f for f in glob.glob(f'{project_folder}/**/*.json', recursive=True)] + \
                     [f for f in glob.glob(f'{project_folder}/**/*.toml', recursive=True)]
     if len(file_manifest) == 0:
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何lua文件: {txt}")
+        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何lua文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)


 @CatchException
-def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     history = [] # 清空历史,以免输入溢出
     import glob, os
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
+        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.cs', recursive=True)] + \
                     [f for f in glob.glob(f'{project_folder}/**/*.csproj', recursive=True)]
     if len(file_manifest) == 0:
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何CSharp文件: {txt}")
+        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何CSharp文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)


 @CatchException
-def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
     txt_pattern = plugin_kwargs.get("advanced_arg")
     txt_pattern = txt_pattern.replace(",", ",")
     # 将要匹配的模式(例如: *.c, *.cpp, *.py, config.toml)
@@ -322,18 +355,22 @@ def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys
     pattern_except_suffix = [_.lstrip(" ^*.,").rstrip(" ,") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^*.")]
     pattern_except_suffix += ['zip', 'rar', '7z', 'tar', 'gz'] # 避免解析压缩文件
     # 将要忽略匹配的文件名(例如: ^README.md)
-    pattern_except_name = [_.lstrip(" ^*,").rstrip(" ,").replace(".", "\.") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^") and not _.strip().startswith("^*.")]
+    pattern_except_name = [_.lstrip(" ^*,").rstrip(" ,").replace(".", r"\.") # 移除左边通配符,移除右侧逗号,转义点号
+                           for _ in txt_pattern.split(" ") # 以空格分割
+                           if (_ != "" and _.strip().startswith("^") and not _.strip().startswith("^*.")) # ^开始,但不是^*.开始
+                           ]
     # 生成正则表达式
-    pattern_except = '/[^/]+\.(' + "|".join(pattern_except_suffix) + ')$'
+    pattern_except = r'/[^/]+\.(' + "|".join(pattern_except_suffix) + ')$'
     pattern_except += '|/(' + "|".join(pattern_except_name) + ')$' if pattern_except_name != [] else ''

     history.clear()
     import glob, os, re
     if os.path.exists(txt):
         project_folder = txt
+        validate_path_safety(project_folder, chatbot.get_user())
     else:
         if txt == "": txt = '空空如也的输入栏'
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
+        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     # 若上传压缩文件, 先寻找到解压的文件夹路径, 从而避免解析压缩文件
@@ -346,7 +383,7 @@ def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys
     file_manifest = [f for pattern in pattern_include for f in glob.glob(f'{extract_folder_path}/**/{pattern}', recursive=True) if "" != extract_folder_path and \
                      os.path.isfile(f) and (not re.search(pattern_except, f) or pattern.endswith('.' + re.search(pattern_except, f).group().split('.')[-1]))]
     if len(file_manifest) == 0:
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何文件: {txt}")
+        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
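Aside (not part of the diff): what the exclusion regex built in 解析任意code项目 above matches, for an input pattern like `*.py ^*.zip ^README.md`:

import re

pattern_except = r'/[^/]+\.(zip|rar|7z|tar|gz)$' + r'|/(README\.md)$'
assert re.search(pattern_except, '/proj/a.zip')        # excluded by suffix
assert re.search(pattern_except, '/proj/README.md')    # excluded by name
assert not re.search(pattern_except, '/proj/main.py')  # kept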
162
crazy_functions/SourceCode_Comment.py
Normal file
162
crazy_functions/SourceCode_Comment.py
Normal file
@@ -0,0 +1,162 @@
import os, copy, time
from toolbox import CatchException, report_exception, update_ui, zip_result, promote_file_to_downloadzone, update_ui_latest_msg, get_conf, generate_file_link
from shared_utils.fastapi_server import validate_path_safety
from crazy_functions.crazy_utils import input_clipping
from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from crazy_functions.agent_fns.python_comment_agent import PythonCodeComment
from crazy_functions.diagram_fns.file_tree import FileNode
from crazy_functions.agent_fns.watchdog import WatchDog
from shared_utils.advanced_markdown_format import markdown_convertion_for_file
from loguru import logger


def 注释源代码(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):

    summary_batch_isolation = True
    inputs_array = []
    inputs_show_user_array = []
    history_array = []
    sys_prompt_array = []

    assert len(file_manifest) <= 512, "源文件太多(超过512个), 请缩减输入文件的数量。或者,您也可以选择删除此行警告,并修改代码拆分file_manifest列表,从而实现分批次处理。"

    # build the file tree
    file_tree_struct = FileNode("root", build_manifest=True)
    for file_path in file_manifest:
        file_tree_struct.add_file(file_path, file_path)

    # <Step 1: analyze files one by one, multi-threaded>
    lang = "" if not plugin_kwargs["use_chinese"] else " (you must use Chinese)"
    for index, fp in enumerate(file_manifest):
        # read the file
        with open(fp, 'r', encoding='utf-8', errors='replace') as f:
            file_content = f.read()
        prefix = ""
        i_say = prefix + f'Please conclude the following source code at {os.path.relpath(fp, project_folder)} with only one sentence{lang}, the code is:\n```{file_content}```'
        i_say_show_user = prefix + f'[{index+1}/{len(file_manifest)}] 请用一句话对下面的程序文件做一个整体概述: {fp}'
        # assemble the request payload
        MAX_TOKEN_SINGLE_FILE = 2560
        i_say, _ = input_clipping(inputs=i_say, history=[], max_token_limit=MAX_TOKEN_SINGLE_FILE)
        inputs_array.append(i_say)
        inputs_show_user_array.append(i_say_show_user)
        history_array.append([])
        sys_prompt_array.append(f"You are a software architecture analyst analyzing a source code project. Do not dig into details, tell me what the code is doing in general. Your answer must be short, simple and clear{lang}.")
    # all files loaded; spawn one request thread per source file and send them to the LLM for analysis
    gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
        inputs_array = inputs_array,
        inputs_show_user_array = inputs_show_user_array,
        history_array = history_array,
        sys_prompt_array = sys_prompt_array,
        llm_kwargs = llm_kwargs,
        chatbot = chatbot,
        show_user_at_complete = True
    )

    # <Step 2: process files one by one, generating commented versions>
    tasks = ["" for _ in range(len(file_manifest))]
    def bark_fn(tasks):
        for i in range(len(tasks)): tasks[i] = "watchdog is dead"
    wd = WatchDog(timeout=10, bark_fn=lambda: bark_fn(tasks), interval=3, msg="ThreadWatcher timeout")
    wd.begin_watch()
    from concurrent.futures import ThreadPoolExecutor
    executor = ThreadPoolExecutor(max_workers=get_conf('DEFAULT_WORKER_NUM'))
    def _task_multi_threading(i_say, gpt_say, fp, file_tree_struct, index):
        language = 'Chinese' if plugin_kwargs["use_chinese"] else 'English'
        def observe_window_update(x):
            if tasks[index] == "watchdog is dead":
                raise TimeoutError("ThreadWatcher: watchdog is dead")
            tasks[index] = x
        pcc = PythonCodeComment(llm_kwargs, plugin_kwargs, language=language, observe_window_update=observe_window_update)
        pcc.read_file(path=fp, brief=gpt_say)
        revised_path, revised_content = pcc.begin_comment_source_code(None, None)
        file_tree_struct.manifest[fp].revised_path = revised_path
        file_tree_struct.manifest[fp].revised_content = revised_content
        # <write the result back to the source file>
        with open(fp, 'w', encoding='utf-8') as f:
            f.write(file_tree_struct.manifest[fp].revised_content)
        # <generate the side-by-side comparison HTML>
        with open("crazy_functions/agent_fns/python_comment_compare.html", 'r', encoding='utf-8') as f:
            html_template = f.read()
        warp = lambda x: "```python\n\n" + x + "\n\n```"
        from themes.theme import load_dynamic_theme
        _, advanced_css, _, _ = load_dynamic_theme("Default")
        html_template = html_template.replace("ADVANCED_CSS", advanced_css)
        html_template = html_template.replace("REPLACE_CODE_FILE_LEFT", pcc.get_markdown_block_in_html(markdown_convertion_for_file(warp(pcc.original_content))))
        html_template = html_template.replace("REPLACE_CODE_FILE_RIGHT", pcc.get_markdown_block_in_html(markdown_convertion_for_file(warp(revised_content))))
        compare_html_path = fp + '.compare.html'
        file_tree_struct.manifest[fp].compare_html = compare_html_path
        with open(compare_html_path, 'w', encoding='utf-8') as f:
            f.write(html_template)
        tasks[index] = ""

    chatbot.append([None, f"正在处理:"])
    futures = []
    index = 0
    for i_say, gpt_say, fp in zip(gpt_response_collection[0::2], gpt_response_collection[1::2], file_manifest):
        future = executor.submit(_task_multi_threading, i_say, gpt_say, fp, file_tree_struct, index)
        index += 1
        futures.append(future)

    # <Step 3: wait for the tasks to complete>
    cnt = 0
    while True:
        cnt += 1
        wd.feed()
        time.sleep(3)
        worker_done = [h.done() for h in futures]
        remain = len(worker_done) - sum(worker_done)

        # <show the parts that are already finished>
        preview_html_list = []
        for done, fp in zip(worker_done, file_manifest):
            if not done: continue
            if hasattr(file_tree_struct.manifest[fp], 'compare_html'):
                preview_html_list.append(file_tree_struct.manifest[fp].compare_html)
            else:
                logger.error(f"文件: {fp} 的注释结果未能成功")
        file_links = generate_file_link(preview_html_list)

        yield from update_ui_latest_msg(
            f"当前任务: <br/>{'<br/>'.join(tasks)}.<br/>" +
            f"剩余源文件数量: {remain}.<br/>" +
            f"已完成的文件: {sum(worker_done)}.<br/>" +
            file_links +
            "<br/>" +
            ''.join(['.'] * (cnt % 10 + 1)),
            chatbot=chatbot, history=history, delay=0)
        yield from update_ui(chatbot=chatbot, history=[]) # refresh the UI
        if all(worker_done):
            executor.shutdown()
            break

    # <Step 4: zip the results>
    zip_res = zip_result(project_folder)
    promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)

    # <END>
    chatbot.append((None, "所有源文件均已处理完毕。"))
    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI



@CatchException
def 注释Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    history = []    # clear the history to avoid input overflow
    plugin_kwargs["use_chinese"] = plugin_kwargs.get("use_chinese", False)
    import glob, os
    if os.path.exists(txt):
        project_folder = txt
        validate_path_safety(project_folder, chatbot.get_user())
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
        return
    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.py', recursive=True)]
    if len(file_manifest) == 0:
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何python文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
        return

    yield from 注释源代码(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
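A note on the collection consumed in step 2 above: the multi-threaded request helper returns inputs and replies interleaved, which is why the code zips `[0::2]` against `[1::2]`. A minimal illustration with hypothetical data:

```python
# Interleaved layout: [input_0, reply_0, input_1, reply_1, ...]
gpt_response_collection = ["say A", "reply A", "say B", "reply B"]
file_manifest = ["a.py", "b.py"]
for i_say, gpt_say, fp in zip(gpt_response_collection[0::2],
                              gpt_response_collection[1::2],
                              file_manifest):
    print(fp, "->", gpt_say)   # a.py -> reply A, b.py -> reply B
```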
36  crazy_functions/SourceCode_Comment_Wrap.py  Normal file
@@ -0,0 +1,36 @@

from toolbox import get_conf, update_ui
from crazy_functions.plugin_template.plugin_class_template import GptAcademicPluginTemplate, ArgProperty
from crazy_functions.SourceCode_Comment import 注释Python项目


class SourceCodeComment_Wrap(GptAcademicPluginTemplate):
    def __init__(self):
        """
        Note that `execute` runs in a different thread, so be very careful
        when defining and using class variables!
        """
        pass

    def define_arg_selection_menu(self):
        """
        Define the plugin's secondary option menu.
        """
        gui_definition = {
            "main_input":
                ArgProperty(title="路径", description="程序路径(上传文件后自动填写)", default_value="", type="string").model_dump_json(), # primary input, synced automatically from the input box
            "use_chinese":
                ArgProperty(title="注释语言", options=["英文", "中文"], default_value="英文", description="无", type="dropdown").model_dump_json(),
            # "use_emoji":
            #     ArgProperty(title="在注释中使用emoji", options=["禁止", "允许"], default_value="禁止", description="无", type="dropdown").model_dump_json(),
        }
        return gui_definition

    def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        """
        Execute the plugin.
        """
        if plugin_kwargs["use_chinese"] == "中文":
            plugin_kwargs["use_chinese"] = True
        else:
            plugin_kwargs["use_chinese"] = False

        yield from 注释Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
204  crazy_functions/VideoResource_GPT.py  Normal file
@@ -0,0 +1,204 @@
import requests
import random
import time
import re
import json
from bs4 import BeautifulSoup
from functools import lru_cache
from itertools import zip_longest
from check_proxy import check_proxy
from toolbox import CatchException, update_ui, get_conf, promote_file_to_downloadzone, update_ui_latest_msg, generate_file_link
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, input_clipping
from request_llms.bridge_all import model_info
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.prompts.internet import SearchOptimizerPrompt, SearchAcademicOptimizerPrompt
from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
from textwrap import dedent
from loguru import logger
from pydantic import BaseModel, Field

class Query(BaseModel):
    search_keyword: str = Field(description="search query for video resource")


class VideoResource(BaseModel):
    thought: str = Field(description="analysis of the search results based on the user's query")
    title: str = Field(description="title of the video")
    author: str = Field(description="author/uploader of the video")
    bvid: str = Field(description="unique ID of the video")
    another_failsafe_bvid: str = Field(description="provide another bvid, in case the other one is not working")


def get_video_resource(search_keyword):
    from crazy_functions.media_fns.get_media import search_videos

    # search for videos matching the keyword
    videos = search_videos(
        search_keyword
    )

    # return the search results (may be empty)
    return videos

def download_video(bvid, user_name, chatbot, history):
    # from experimental_mods.get_bilibili_resource import download_bilibili
    from crazy_functions.media_fns.get_media import download_video
    # pause a while
    tic_time = 8
    for i in range(tic_time):
        yield from update_ui_latest_msg(
            lastmsg=f"即将下载音频。等待{tic_time-i}秒后自动继续, 点击“停止”键取消此操作。",
            chatbot=chatbot, history=[], delay=1)

    # download audio
    chatbot.append((None, "下载音频, 请稍等...")); yield from update_ui(chatbot=chatbot, history=history)
    downloaded_files = yield from download_video(bvid, only_audio=True, user_name=user_name, chatbot=chatbot, history=history)

    if len(downloaded_files) == 0:
        # failed to download audio
        return []

    # preview
    preview_list = [promote_file_to_downloadzone(fp) for fp in downloaded_files]
    file_links = generate_file_link(preview_list)
    yield from update_ui_latest_msg(f"已完成的文件: <br/>" + file_links, chatbot=chatbot, history=history, delay=0)
    chatbot.append((None, f"即将下载视频。"))

    # pause a while
    tic_time = 16
    for i in range(tic_time):
        yield from update_ui_latest_msg(
            lastmsg=f"即将下载视频。等待{tic_time-i}秒后自动继续, 点击“停止”键取消此操作。",
            chatbot=chatbot, history=[], delay=1)

    # download video
    chatbot.append((None, "下载视频, 请稍等...")); yield from update_ui(chatbot=chatbot, history=history)
    downloaded_files_part2 = yield from download_video(bvid, only_audio=False, user_name=user_name, chatbot=chatbot, history=history)

    # preview
    preview_list = [promote_file_to_downloadzone(fp) for fp in downloaded_files_part2]
    file_links = generate_file_link(preview_list)
    yield from update_ui_latest_msg(f"已完成的文件: <br/>" + file_links, chatbot=chatbot, history=history, delay=0)

    # return
    return downloaded_files + downloaded_files_part2


class Strategy(BaseModel):
    thought: str = Field(description="analysis of the user's wish, for example, can you recall the name of the resource?")
    which_methods: str = Field(description="Which method to use to find the necessary information? choose from 'method_1' and 'method_2'.")
    method_1_search_keywords: str = Field(description="Generate keywords to search the internet if you choose method 1, otherwise empty.")
    method_2_generate_keywords: str = Field(description="Generate keywords for the video download engine if you choose method 2, otherwise empty.")


@CatchException
def 多媒体任务(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    user_wish: str = txt
    # query demos:
    # - "我想找一首歌,里面有句歌词是“turn your face towards the sun”"
    # - "一首歌,第一句是红豆生南国"
    # - "一首音乐,中国航天任务专用的那首"
    # - "戴森球计划在熔岩星球的音乐"
    # - "hanser的百变什么精"
    # - "打大圣残躯时的bgm"
    # - "渊下宫战斗音乐"

    # search
    chatbot.append((txt, "检索中, 请稍等..."))
    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
    if "跳过联网搜索" not in user_wish:
        # structured generation
        internet_search_keyword = user_wish

        yield from update_ui_latest_msg(lastmsg=f"发起互联网检索: {internet_search_keyword} ...", chatbot=chatbot, history=[], delay=1)
        from crazy_functions.Internet_GPT import internet_search_with_analysis_prompt
        result = yield from internet_search_with_analysis_prompt(
            prompt=internet_search_keyword,
            analysis_prompt="请根据搜索结果分析,获取用户需要找的资源的名称、作者、出处等信息。",
            llm_kwargs=llm_kwargs,
            chatbot=chatbot
        )

        yield from update_ui_latest_msg(lastmsg=f"互联网检索结论: {result} \n\n 正在生成进一步检索方案 ...", chatbot=chatbot, history=[], delay=1)
        rf_req = dedent(f"""
            The user wishes to get the following resource:
            {user_wish}
            Meanwhile, you can access another expert's opinion on the user's wish:
            {result}
            Generate search keywords (less than 5 keywords) for the video download engine accordingly.
        """)
    else:
        user_wish = user_wish.replace("跳过联网搜索", "").strip()
        rf_req = dedent(f"""
            The user wishes to get the following resource:
            {user_wish}
            Generate search keywords (less than 5 keywords) accordingly.
        """)
    gpt_json_io = GptJsonIO(Query)
    inputs = rf_req + gpt_json_io.format_instructions
    run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
    analyze_res = run_gpt_fn(inputs, "")
    logger.info(analyze_res)
    query: Query = gpt_json_io.generate_output_auto_repair(analyze_res, run_gpt_fn)
    video_engine_keywords = query.search_keyword
    # show the confirmed keywords
    chatbot.append((None, f"检索关键词已确认: {video_engine_keywords}。筛选中, 请稍等..."))
    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI

    # fetch candidate resources
    candidate_dictionary: dict = get_video_resource(video_engine_keywords)
    candidate_dictionary_as_str = json.dumps(candidate_dictionary, ensure_ascii=False, indent=4)

    # show the candidate resources
    candidate_display = "\n".join([f"{i+1}. {it['title']}" for i, it in enumerate(candidate_dictionary)])
    chatbot.append((None, f"候选:\n\n{candidate_display}"))
    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI

    # structured generation
    rf_req_2 = dedent(f"""
        The user wishes to get the following resource:
        {user_wish}

        Select the most relevant and suitable video resource from the following search results:
        {candidate_dictionary_as_str}

        Note:
        1. The first several video search results are more likely to satisfy the user's wish.
        2. The duration of the video should be less than 10 minutes.
        3. You should analyze the search results first, before giving your answer.
        4. Use Chinese if possible.
        5. Besides the primary video selection, give a backup video resource `bvid`.
    """)
    gpt_json_io = GptJsonIO(VideoResource)
    inputs = rf_req_2 + gpt_json_io.format_instructions
    run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
    analyze_res = run_gpt_fn(inputs, "")
    logger.info(analyze_res)
    video_resource: VideoResource = gpt_json_io.generate_output_auto_repair(analyze_res, run_gpt_fn)

    # display
    chatbot.append(
        (None,
         f"分析:{video_resource.thought}" "<br/>"
         f"选择: `{video_resource.title}`。" "<br/>"
         f"作者:{video_resource.author}"
        )
    )
    chatbot.append((None, f"下载中, 请稍等..."))
    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI

    if video_resource and video_resource.bvid:
        logger.info(video_resource)
        downloaded = yield from download_video(video_resource.bvid, chatbot.get_user(), chatbot, history)
        if not downloaded:
            chatbot.append((None, f"下载失败, 尝试备选 ..."))
            yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
            downloaded = yield from download_video(video_resource.another_failsafe_bvid, chatbot.get_user(), chatbot, history)



@CatchException
def debug(bvid, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    yield from download_video(bvid, chatbot.get_user(), chatbot, history)
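The structured-output round trip above (GptJsonIO) follows a common pattern: serialize the pydantic schema into format instructions, then parse the LLM reply back into the model. A minimal happy-path sketch using pydantic directly, without GptJsonIO's auto-repair retry; the `llm_reply` value is hypothetical:

```python
from pydantic import BaseModel, Field

class Query(BaseModel):
    search_keyword: str = Field(description="search query for video resource")

# schema -> instructions appended to the prompt
format_instructions = f"Reply with a JSON object matching this schema: {Query.model_json_schema()}"

llm_reply = '{"search_keyword": "lava planet ost"}'  # hypothetical LLM output
query = Query.model_validate_json(llm_reply)         # GptJsonIO would retry/repair on failure
print(query.search_keyword)
```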
23  crazy_functions/agent_fns/auto_agent.py  Normal file
@@ -0,0 +1,23 @@
from toolbox import CatchException, update_ui, gen_time_str, trimmed_format_exc, ProxyNetworkActivate
from toolbox import report_exception, get_log_folder, update_ui_latest_msg, Singleton
from crazy_functions.agent_fns.pipe import PluginMultiprocessManager, PipeCom
from crazy_functions.agent_fns.general import AutoGenGeneral



class AutoGenMath(AutoGenGeneral):

    def define_agents(self):
        from autogen import AssistantAgent, UserProxyAgent
        return [
            {
                "name": "assistant",            # name of the agent
                "cls": AssistantAgent,          # class of the agent
            },
            {
                "name": "user_proxy",           # name of the agent
                "cls": UserProxyAgent,          # class of the agent
                "human_input_mode": "ALWAYS",   # always ask for human input
                "llm_config": False,            # disables llm-based auto reply
            },
        ]
20  crazy_functions/agent_fns/echo_agent.py  Normal file
@@ -0,0 +1,20 @@
from crazy_functions.agent_fns.pipe import PluginMultiprocessManager, PipeCom
from loguru import logger

class EchoDemo(PluginMultiprocessManager):
    def subprocess_worker(self, child_conn):
        # ⭐⭐ runs in the subprocess
        self.child_conn = child_conn
        while True:
            msg = self.child_conn.recv() # PipeCom
            if msg.cmd == "user_input":
                # echo the input, then wait for the parent's next user input
                self.child_conn.send(PipeCom("show", msg.content))
                wait_success = self.subprocess_worker_wait_user_feedback(wait_msg="我准备好处理下一个问题了.")
                if not wait_success:
                    # wait timeout, terminate this subprocess_worker
                    break
            elif msg.cmd == "terminate":
                self.child_conn.send(PipeCom("done", ""))
                break
        logger.info('[debug] subprocess_worker terminated')
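The command protocol EchoDemo exercises can be reproduced without the plugin stack; a self-contained sketch of the same parent/child round trip over a multiprocessing Pipe:

```python
import multiprocessing as mp

class PipeCom:
    def __init__(self, cmd, content):
        self.cmd = cmd
        self.content = content

def child(conn):
    msg = conn.recv()                        # receive the "user_input" command
    conn.send(PipeCom("show", msg.content))  # echo it back for display
    conn.send(PipeCom("done", ""))           # then signal completion

if __name__ == "__main__":
    parent_conn, child_conn = mp.Pipe()
    p = mp.Process(target=child, args=(child_conn,), daemon=True)
    p.start()
    parent_conn.send(PipeCom("user_input", "hello"))
    while True:
        msg = parent_conn.recv()
        print(msg.cmd, msg.content)          # show hello / done
        if msg.cmd == "done":
            break
    p.join()
```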
138  crazy_functions/agent_fns/general.py  Normal file
@@ -0,0 +1,138 @@
from toolbox import trimmed_format_exc, get_conf, ProxyNetworkActivate
from crazy_functions.agent_fns.pipe import PluginMultiprocessManager, PipeCom
from request_llms.bridge_all import predict_no_ui_long_connection
import time

def gpt_academic_generate_oai_reply(
    self,
    messages,
    sender,
    config,
):
    llm_config = self.llm_config if config is None else config
    if llm_config is False:
        return False, None
    if messages is None:
        messages = self._oai_messages[sender]

    inputs = messages[-1]['content']
    history = []
    for message in messages[:-1]:
        history.append(message['content'])
    context = messages[-1].pop("context", None)
    assert context is None, "预留参数 context 未实现"

    reply = predict_no_ui_long_connection(
        inputs=inputs,
        llm_kwargs=llm_config,
        history=history,
        sys_prompt=self._oai_system_message[0]['content'],
        console_silence=True
    )
    assumed_done = reply.endswith('\nTERMINATE')
    return True, reply

class AutoGenGeneral(PluginMultiprocessManager):
    def gpt_academic_print_override(self, user_proxy, message, sender):
        # ⭐⭐ run in subprocess
        try:
            print_msg = sender.name + "\n\n---\n\n" + message["content"]
        except:
            print_msg = sender.name + "\n\n---\n\n" + message
        self.child_conn.send(PipeCom("show", print_msg))

    def gpt_academic_get_human_input(self, user_proxy, message):
        # ⭐⭐ run in subprocess
        patience = 300
        begin_waiting_time = time.time()
        self.child_conn.send(PipeCom("interact", message))
        while True:
            time.sleep(0.5)
            if self.child_conn.poll():
                wait_success = True
                break
            if time.time() - begin_waiting_time > patience:
                self.child_conn.send(PipeCom("done", ""))
                wait_success = False
                break
        if wait_success:
            return self.child_conn.recv().content
        else:
            raise TimeoutError("等待用户输入超时")

    def define_agents(self):
        raise NotImplementedError

    def exe_autogen(self, input):
        # ⭐⭐ run in subprocess
        input = input.content
        code_execution_config = {"work_dir": self.autogen_work_dir, "use_docker": self.use_docker}
        agents = self.define_agents()
        user_proxy = None
        assistant = None
        for agent_kwargs in agents:
            agent_cls = agent_kwargs.pop('cls')
            kwargs = {
                'llm_config': self.llm_kwargs,
                'code_execution_config': code_execution_config
            }
            kwargs.update(agent_kwargs)
            agent_handle = agent_cls(**kwargs)
            agent_handle._print_received_message = lambda a, b: self.gpt_academic_print_override(agent_kwargs, a, b)
            for d in agent_handle._reply_func_list:
                if hasattr(d['reply_func'], '__name__') and d['reply_func'].__name__ == 'generate_oai_reply':
                    d['reply_func'] = gpt_academic_generate_oai_reply
            if agent_kwargs['name'] == 'user_proxy':
                agent_handle.get_human_input = lambda a: self.gpt_academic_get_human_input(user_proxy, a)
                user_proxy = agent_handle
            if agent_kwargs['name'] == 'assistant': assistant = agent_handle
        try:
            if user_proxy is None or assistant is None: raise Exception("用户代理或助理代理未定义")
            with ProxyNetworkActivate("AutoGen"):
                user_proxy.initiate_chat(assistant, message=input)
        except Exception as e:
            tb_str = '```\n' + trimmed_format_exc() + '```'
            self.child_conn.send(PipeCom("done", "AutoGen 执行失败: \n\n" + tb_str))

    def subprocess_worker(self, child_conn):
        # ⭐⭐ run in subprocess
        self.child_conn = child_conn
        while True:
            msg = self.child_conn.recv() # PipeCom
            self.exe_autogen(msg)


class AutoGenGroupChat(AutoGenGeneral):
    def exe_autogen(self, input):
        # ⭐⭐ run in subprocess
        import autogen

        input = input.content
        user_proxy = None
        with ProxyNetworkActivate("AutoGen"):
            code_execution_config = {"work_dir": self.autogen_work_dir, "use_docker": self.use_docker}
            agents = self.define_agents()
            agents_instances = []
            for agent_kwargs in agents:
                agent_cls = agent_kwargs.pop("cls")
                kwargs = {"code_execution_config": code_execution_config}
                kwargs.update(agent_kwargs)
                agent_handle = agent_cls(**kwargs)
                agent_handle._print_received_message = lambda a, b: self.gpt_academic_print_override(agent_kwargs, a, b)
                agents_instances.append(agent_handle)
                if agent_kwargs["name"] == "user_proxy":
                    user_proxy = agent_handle
                    user_proxy.get_human_input = lambda a: self.gpt_academic_get_human_input(user_proxy, a)
            try:
                groupchat = autogen.GroupChat(agents=agents_instances, messages=[], max_round=50)
                manager = autogen.GroupChatManager(groupchat=groupchat, **self.define_group_chat_manager_config())
                manager._print_received_message = lambda a, b: self.gpt_academic_print_override(agent_kwargs, a, b)
                manager.get_human_input = lambda a: self.gpt_academic_get_human_input(manager, a)
                if user_proxy is None:
                    raise Exception("user_proxy is not defined")
                user_proxy.initiate_chat(manager, message=input)
            except Exception:
                tb_str = "```\n" + trimmed_format_exc() + "```"
                self.child_conn.send(PipeCom("done", "AutoGen exe failed: \n\n" + tb_str))

    def define_group_chat_manager_config(self):
        raise NotImplementedError
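The `_reply_func_list` swap above is what routes autogen's replies through the project's own LLM bridge. A self-contained sketch of the patching pattern, with `DummyAgent` standing in for an autogen agent class:

```python
def custom_reply(self, messages, sender, config):
    # stand-in for gpt_academic_generate_oai_reply
    return True, "patched reply"

class DummyAgent:
    def __init__(self):
        def generate_oai_reply(self, messages, sender, config):
            return True, "original reply"
        # autogen keeps registered reply functions in a list of dicts
        self._reply_func_list = [{"reply_func": generate_oai_reply}]

agent = DummyAgent()
for d in agent._reply_func_list:
    if getattr(d["reply_func"], "__name__", "") == "generate_oai_reply":
        d["reply_func"] = custom_reply  # swap in our own reply function
print(agent._reply_func_list[0]["reply_func"].__name__)  # -> custom_reply
```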
16  crazy_functions/agent_fns/persistent.py  Normal file
@@ -0,0 +1,16 @@
from toolbox import Singleton

@Singleton
class GradioMultiuserManagerForPersistentClasses():
    def __init__(self):
        self.mapping = {}

    def already_alive(self, key):
        return (key in self.mapping) and (self.mapping[key].is_alive())

    def set(self, key, x):
        self.mapping[key] = x
        return self.mapping[key]

    def get(self, key):
        return self.mapping[key]

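A hypothetical usage sketch of the singleton manager above, keyed per user and plugin; `DummySession` mimics the `is_alive` contract of PluginMultiprocessManager:

```python
class DummySession:
    def is_alive(self):                     # mimics PluginMultiprocessManager.is_alive
        return True

manager = GradioMultiuserManagerForPersistentClasses()
key = "user123->autogen"                    # hypothetical key scheme
if manager.already_alive(key):
    session = manager.get(key)              # resume the existing session
else:
    session = manager.set(key, DummySession())  # register a new one
assert manager.already_alive(key)
```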
195  crazy_functions/agent_fns/pipe.py  Normal file
@@ -0,0 +1,195 @@
from toolbox import get_log_folder, update_ui, gen_time_str, get_conf, promote_file_to_downloadzone
from crazy_functions.agent_fns.watchdog import WatchDog
from loguru import logger
import time, os

class PipeCom:
    def __init__(self, cmd, content) -> None:
        self.cmd = cmd
        self.content = content


class PluginMultiprocessManager:
    def __init__(self, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        # ⭐ run in main process
        self.autogen_work_dir = os.path.join(get_log_folder("autogen"), gen_time_str())
        self.previous_work_dir_files = {}
        self.llm_kwargs = llm_kwargs
        self.plugin_kwargs = plugin_kwargs
        self.chatbot = chatbot
        self.history = history
        self.system_prompt = system_prompt
        # self.user_request = user_request
        self.alive = True
        self.use_docker = get_conf("AUTOGEN_USE_DOCKER")
        self.last_user_input = ""
        # create a thread to monitor self.heartbeat, terminate the instance if no heartbeat for a long time
        timeout_seconds = 5 * 60
        self.heartbeat_watchdog = WatchDog(timeout=timeout_seconds, bark_fn=self.terminate, interval=5)
        self.heartbeat_watchdog.begin_watch()

    def feed_heartbeat_watchdog(self):
        # feed this `dog`, so the dog will not `bark` (bark_fn will terminate the instance)
        self.heartbeat_watchdog.feed()

    def is_alive(self):
        return self.alive

    def launch_subprocess_with_pipe(self):
        # ⭐ run in main process
        from multiprocessing import Process, Pipe

        parent_conn, child_conn = Pipe()
        self.p = Process(target=self.subprocess_worker, args=(child_conn,))
        self.p.daemon = True
        self.p.start()
        return parent_conn

    def terminate(self):
        self.p.terminate()
        self.alive = False
        logger.info("[debug] instance terminated")

    def subprocess_worker(self, child_conn):
        # ⭐⭐ run in subprocess
        raise NotImplementedError

    def send_command(self, cmd):
        # ⭐ run in main process
        repeated = False
        if cmd == self.last_user_input:
            repeated = True
            cmd = ""
        else:
            self.last_user_input = cmd
        self.parent_conn.send(PipeCom("user_input", cmd))
        return repeated, cmd

    def immediate_showoff_when_possible(self, fp):
        # ⭐ main process
        # get fp's file extension
        file_type = fp.split('.')[-1]
        # if it is an image file, render a preview immediately
        if file_type.lower() in ['png', 'jpg']:
            image_path = os.path.abspath(fp)
            self.chatbot.append([
                '检测到新生图像:',
                f'本地文件预览: <br/><div align="center"><img src="file={image_path}"></div>'
            ])
            yield from update_ui(chatbot=self.chatbot, history=self.history)

    def overwatch_workdir_file_change(self):
        # ⭐ main process: monitor the folder mounted into Docker
        path_to_overwatch = self.autogen_work_dir
        change_list = []
        # scan every file under the path and compare against the records in
        # self.previous_work_dir_files; when a new file appears, or a file's
        # modification time changes, update self.previous_work_dir_files and
        # record the new/changed file paths in change_list
        for root, dirs, files in os.walk(path_to_overwatch):
            for file in files:
                file_path = os.path.join(root, file)
                if file_path not in self.previous_work_dir_files.keys():
                    last_modified_time = os.stat(file_path).st_mtime
                    self.previous_work_dir_files.update({file_path: last_modified_time})
                    change_list.append(file_path)
                else:
                    last_modified_time = os.stat(file_path).st_mtime
                    if last_modified_time != self.previous_work_dir_files[file_path]:
                        self.previous_work_dir_files[file_path] = last_modified_time
                        change_list.append(file_path)
        if len(change_list) > 0:
            file_links = ""
            for f in change_list:
                res = promote_file_to_downloadzone(f)
                file_links += f'<br/><a href="file={res}" target="_blank">{res}</a>'
                yield from self.immediate_showoff_when_possible(f)

            self.chatbot.append(['检测到新生文档.', f'文档清单如下: {file_links}'])
            yield from update_ui(chatbot=self.chatbot, history=self.history)
        return change_list


    def main_process_ui_control(self, txt, create_or_resume) -> str:
        # ⭐ main process
        if create_or_resume == 'create':
            self.cnt = 1
            self.parent_conn = self.launch_subprocess_with_pipe() # ⭐⭐⭐
        repeated, cmd_to_autogen = self.send_command(txt)
        if txt == 'exit':
            self.chatbot.append([f"结束", "结束信号已明确,终止AutoGen程序。"])
            yield from update_ui(chatbot=self.chatbot, history=self.history)
            self.terminate()
            return "terminate"

        # patience = 10

        while True:
            time.sleep(0.5)
            if not self.alive:
                # the heartbeat watchdog might have had it killed
                self.terminate()
                return "terminate"
            if self.parent_conn.poll():
                self.feed_heartbeat_watchdog()
                if "[GPT-Academic] 等待中" in self.chatbot[-1][-1]:
                    self.chatbot.pop(-1) # remove the last line
                if "等待您的进一步指令" in self.chatbot[-1][-1]:
                    self.chatbot.pop(-1) # remove the last line
                if '[GPT-Academic] 等待中' in self.chatbot[-1][-1]:
                    self.chatbot.pop(-1) # remove the last line
                msg = self.parent_conn.recv() # PipeCom
                if msg.cmd == "done":
                    self.chatbot.append([f"结束", msg.content])
                    self.cnt += 1
                    yield from update_ui(chatbot=self.chatbot, history=self.history)
                    self.terminate()
                    break
                if msg.cmd == "show":
                    yield from self.overwatch_workdir_file_change()
                    notice = ""
                    if repeated: notice = "(自动忽略重复的输入)"
                    self.chatbot.append([f"运行阶段-{self.cnt}(上次用户反馈输入为: 「{cmd_to_autogen}」{notice}", msg.content])
                    self.cnt += 1
                    yield from update_ui(chatbot=self.chatbot, history=self.history)
                if msg.cmd == "interact":
                    yield from self.overwatch_workdir_file_change()
                    self.chatbot.append([f"程序抵达用户反馈节点.", msg.content +
                                         "\n\n等待您的进一步指令." +
                                         "\n\n(1) 一般情况下您不需要说什么, 清空输入区, 然后直接点击“提交”以继续. " +
                                         "\n\n(2) 如果您需要补充些什么, 输入要反馈的内容, 直接点击“提交”以继续. " +
                                         "\n\n(3) 如果您想终止程序, 输入exit, 直接点击“提交”以终止AutoGen并解锁. "
                                         ])
                    yield from update_ui(chatbot=self.chatbot, history=self.history)
                    # do not terminate here, leave the subprocess_worker instance alive
                    return "wait_feedback"
            else:
                self.feed_heartbeat_watchdog()
                if '[GPT-Academic] 等待中' not in self.chatbot[-1][-1]:
                    # begin_waiting_time = time.time()
                    self.chatbot.append(["[GPT-Academic] 等待AutoGen执行结果 ...", "[GPT-Academic] 等待中"])
                self.chatbot[-1] = [self.chatbot[-1][0], self.chatbot[-1][1].replace("[GPT-Academic] 等待中", "[GPT-Academic] 等待中.")]
                yield from update_ui(chatbot=self.chatbot, history=self.history)
                # if time.time() - begin_waiting_time > patience:
                #     self.chatbot.append([f"结束", "等待超时, 终止AutoGen程序。"])
                #     yield from update_ui(chatbot=self.chatbot, history=self.history)
                #     self.terminate()
                #     return "terminate"

        self.terminate()
        return "terminate"

    def subprocess_worker_wait_user_feedback(self, wait_msg="wait user feedback"):
        # ⭐⭐ run in subprocess
        patience = 5 * 60
        begin_waiting_time = time.time()
        self.child_conn.send(PipeCom("interact", wait_msg))
        while True:
            time.sleep(0.5)
            if self.child_conn.poll():
                wait_success = True
                break
            if time.time() - begin_waiting_time > patience:
                self.child_conn.send(PipeCom("done", ""))
                wait_success = False
                break
        return wait_success
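The workdir monitoring in `overwatch_workdir_file_change` reduces to an mtime dictionary diff; a self-contained sketch of the same idea:

```python
import os, tempfile

previous = {}  # path -> last seen modification time

def detect_changes(path):
    changed = []
    for root, dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            mtime = os.stat(fp).st_mtime
            if previous.get(fp) != mtime:   # new file, or modified since last scan
                previous[fp] = mtime
                changed.append(fp)
    return changed

workdir = tempfile.mkdtemp()
open(os.path.join(workdir, "a.txt"), "w").close()
print(detect_changes(workdir))  # ['.../a.txt']
print(detect_changes(workdir))  # [] (nothing changed since the last scan)
```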
457  crazy_functions/agent_fns/python_comment_agent.py  Normal file
@@ -0,0 +1,457 @@
import datetime
import re
import os
from loguru import logger
from textwrap import dedent
from toolbox import CatchException, update_ui
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive

# TODO: solve the indentation problem

find_function_end_prompt = '''
Below is a page of code that you need to read. This page may not be complete yet; your job is to split this page into separate functions, class functions etc.
- Provide the line number where the first visible function ends.
- Provide the line number where the next visible function begins.
- If there are no other functions in this page, you should simply return the line number of the last line.
- Only focus on functions declared by the `def` keyword. Ignore inline functions. Ignore function calls.

------------------ Example ------------------
INPUT:

```
L0000 |import sys
L0001 |import re
L0002 |
L0003 |def trimmed_format_exc():
L0004 |    import os
L0005 |    import traceback
L0006 |    str = traceback.format_exc()
L0007 |    current_path = os.getcwd()
L0008 |    replace_path = "."
L0009 |    return str.replace(current_path, replace_path)
L0010 |
L0011 |
L0012 |def trimmed_format_exc_markdown():
L0013 |    ...
L0014 |    ...
```

OUTPUT:

```
<first_function_end_at>L0009</first_function_end_at>
<next_function_begin_from>L0012</next_function_begin_from>
```

------------------ End of Example ------------------


------------------ the real INPUT you need to process NOW ------------------
```
{THE_TAGGED_CODE}
```
'''


revise_function_prompt = '''
You need to read the following code, and revise the source code ({FILE_BASENAME}) according to the following instructions:
1. You should analyze the purpose of the functions (if there are any).
2. You need to add a docstring for each provided function (if there are any).

Be aware:
1. You must NOT modify the indent of code.
2. You are NOT authorized to change or translate non-comment code, and you are NOT authorized to add empty lines or toggle quotation marks either.
3. Use {LANG} to add comments and docstrings. Do NOT translate Chinese that is already in the code.
4. Besides adding a docstring, use the ⭐ symbol to annotate the most core and important line of code within the function, explaining its role.

------------------ Example ------------------
INPUT:
```
L0000 |
L0001 |def zip_result(folder):
L0002 |    t = gen_time_str()
L0003 |    zip_folder(folder, get_log_folder(), f"result.zip")
L0004 |    return os.path.join(get_log_folder(), f"result.zip")
L0005 |
L0006 |
```

OUTPUT:

<instruction_1_purpose>
This function compresses a given folder, and returns the path of the resulting `zip` file.
</instruction_1_purpose>
<instruction_2_revised_code>
```
def zip_result(folder):
    """
    Compresses the specified folder into a zip file and stores it in the log folder.

    Args:
        folder (str): The path to the folder that needs to be compressed.

    Returns:
        str: The path to the created zip file in the log folder.
    """
    t = gen_time_str()
    zip_folder(folder, get_log_folder(), f"result.zip") # ⭐ Execute the zipping of folder
    return os.path.join(get_log_folder(), f"result.zip")
```
</instruction_2_revised_code>
------------------ End of Example ------------------


------------------ the real INPUT you need to process NOW ({FILE_BASENAME}) ------------------
```
{THE_CODE}
```
{INDENT_REMINDER}
{BRIEF_REMINDER}
{HINT_REMINDER}
'''


revise_function_prompt_chinese = '''
您需要阅读以下代码,并根据以下说明修订源代码({FILE_BASENAME}):
1. 如果源代码中包含函数的话, 你应该分析给定函数实现了什么功能
2. 如果源代码中包含函数的话, 你需要为函数添加docstring, docstring必须使用中文

请注意:
1. 你不得修改代码的缩进
2. 你无权更改或翻译代码中的非注释部分,也不允许添加空行
3. 使用 {LANG} 添加注释和文档字符串。不要翻译代码中已有的中文
4. 除了添加docstring之外, 使用⭐符号给该函数中最核心、最重要的一行代码添加注释,并说明其作用

------------------ 示例 ------------------
INPUT:
```
L0000 |
L0001 |def zip_result(folder):
L0002 |    t = gen_time_str()
L0003 |    zip_folder(folder, get_log_folder(), f"result.zip")
L0004 |    return os.path.join(get_log_folder(), f"result.zip")
L0005 |
L0006 |
```

OUTPUT:

<instruction_1_purpose>
该函数用于压缩指定文件夹,并返回生成的`zip`文件的路径。
</instruction_1_purpose>
<instruction_2_revised_code>
```
def zip_result(folder):
    """
    该函数将指定的文件夹压缩成ZIP文件, 并将其存储在日志文件夹中。

    输入参数:
        folder (str): 需要压缩的文件夹的路径。
    返回值:
        str: 日志文件夹中创建的ZIP文件的路径。
    """
    t = gen_time_str()
    zip_folder(folder, get_log_folder(), f"result.zip") # ⭐ 执行文件夹的压缩
    return os.path.join(get_log_folder(), f"result.zip")
```
</instruction_2_revised_code>
------------------ End of Example ------------------


------------------ the real INPUT you need to process NOW ({FILE_BASENAME}) ------------------
```
{THE_CODE}
```
{INDENT_REMINDER}
{BRIEF_REMINDER}
{HINT_REMINDER}
'''


class PythonCodeComment():

    def __init__(self, llm_kwargs, plugin_kwargs, language, observe_window_update) -> None:
        self.original_content = ""
        self.full_context = []
        self.full_context_with_line_no = []
        self.current_page_start = 0
        self.page_limit = 100   # 100 lines of code per page
        self.ignore_limit = 20
        self.llm_kwargs = llm_kwargs
        self.plugin_kwargs = plugin_kwargs
        self.language = language
        self.observe_window_update = observe_window_update
        if self.language == "chinese":
            self.core_prompt = revise_function_prompt_chinese
        else:
            self.core_prompt = revise_function_prompt
        self.path = None
        self.file_basename = None
        self.file_brief = ""

    def generate_tagged_code_from_full_context(self):
        for i, code in enumerate(self.full_context):
            number = i
            padded_number = f"{number:04}"
            result = f"L{padded_number}"
            self.full_context_with_line_no.append(f"{result} | {code}")
        return self.full_context_with_line_no

    def read_file(self, path, brief):
        with open(path, 'r', encoding='utf8') as f:
            self.full_context = f.readlines()
            self.original_content = ''.join(self.full_context)
        self.file_basename = os.path.basename(path)
        self.file_brief = brief
        self.full_context_with_line_no = self.generate_tagged_code_from_full_context()
        self.path = path

    def find_next_function_begin(self, tagged_code: list, begin_and_end):
        begin, end = begin_and_end
        THE_TAGGED_CODE = ''.join(tagged_code)
        self.llm_kwargs['temperature'] = 0
        result = predict_no_ui_long_connection(
            inputs=find_function_end_prompt.format(THE_TAGGED_CODE=THE_TAGGED_CODE),
            llm_kwargs=self.llm_kwargs,
            history=[],
            sys_prompt="",
            observe_window=[],
            console_silence=True
        )

        def extract_number(text):
            # match the expected tag with a regex
            match = re.search(r'<next_function_begin_from>L(\d+)</next_function_begin_from>', text)
            if match:
                # extract the matched digits and convert them to an integer
                return int(match.group(1))
            return None

        line_no = extract_number(result)
        if line_no is not None:
            return line_no
        else:
            return end

    def _get_next_window(self):
        current_page_start = self.current_page_start

        if self.current_page_start == len(self.full_context) + 1:
            raise StopIteration

        # if only a few lines remain, process them all in one go
        if len(self.full_context) - self.current_page_start < self.ignore_limit:
            future_page_start = len(self.full_context) + 1
            self.current_page_start = future_page_start
            return current_page_start, future_page_start

        tagged_code = self.full_context_with_line_no[self.current_page_start: self.current_page_start + self.page_limit]
        line_no = self.find_next_function_begin(tagged_code, [self.current_page_start, self.current_page_start + self.page_limit])

        if line_no > len(self.full_context) - 5:
            line_no = len(self.full_context) + 1

        future_page_start = line_no
        self.current_page_start = future_page_start

        # ! consider eof
        return current_page_start, future_page_start

    def dedent(self, text):
        """Remove any common leading whitespace from every line in `text`.
        """
        # Look for the longest leading string of spaces and tabs common to all lines.
        margin = None
        _whitespace_only_re = re.compile('^[ \t]+$', re.MULTILINE)
        _leading_whitespace_re = re.compile('(^[ \t]*)(?:[^ \t\n])', re.MULTILINE)
        text = _whitespace_only_re.sub('', text)
        indents = _leading_whitespace_re.findall(text)
        for indent in indents:
            if margin is None:
                margin = indent

            # Current line more deeply indented than previous winner:
            # no change (previous winner is still on top).
            elif indent.startswith(margin):
                pass

            # Current line consistent with and no deeper than previous winner:
            # it's the new winner.
            elif margin.startswith(indent):
                margin = indent

            # Find the largest common whitespace between current line and previous winner.
            else:
                for i, (x, y) in enumerate(zip(margin, indent)):
                    if x != y:
                        margin = margin[:i]
                        break

        # sanity check (testing/debugging only)
        if 0 and margin:
            for line in text.split("\n"):
                assert not line or line.startswith(margin), \
                    "line = %r, margin = %r" % (line, margin)

        if margin:
            text = re.sub(r'(?m)^' + margin, '', text)
            return text, len(margin)
        else:
            return text, 0

    def get_next_batch(self):
        current_page_start, future_page_start = self._get_next_window()
        return ''.join(self.full_context[current_page_start: future_page_start]), current_page_start, future_page_start

    def tag_code(self, fn, hint):
        code = fn
        _, n_indent = self.dedent(code)
        indent_reminder = "" if n_indent == 0 else f"(Reminder: as you can see, this piece of code has an indent made up of {n_indent} whitespace characters, please preserve it in the OUTPUT.)"
        brief_reminder = "" if self.file_brief == "" else f"({self.file_basename} abstract: {self.file_brief})"
        hint_reminder = "" if hint is None else f"(Reminder: do not ignore or modify code such as `{hint}`, provide complete code in the OUTPUT.)"
        self.llm_kwargs['temperature'] = 0
        result = predict_no_ui_long_connection(
            inputs=self.core_prompt.format(
                LANG=self.language,
                FILE_BASENAME=self.file_basename,
                THE_CODE=code,
                INDENT_REMINDER=indent_reminder,
                BRIEF_REMINDER=brief_reminder,
                HINT_REMINDER=hint_reminder
            ),
            llm_kwargs=self.llm_kwargs,
            history=[],
            sys_prompt="",
            observe_window=[],
            console_silence=True
        )

        def get_code_block(reply):
            import re
            pattern = r"```([\s\S]*?)```" # regex pattern to match code blocks
            matches = re.findall(pattern, reply) # find all code blocks in text
            if len(matches) == 1:
                return matches[0].strip('python') # code block
            return None

        code_block = get_code_block(result)
        if code_block is not None:
            code_block = self.sync_and_patch(original=code, revised=code_block)
            return code_block
        else:
            return code

    def get_markdown_block_in_html(self, html):
        from bs4 import BeautifulSoup
        soup = BeautifulSoup(html, 'lxml')
        found_list = soup.find_all("div", class_="markdown-body")
        if found_list:
            res = found_list[0]
            return res.prettify()
        else:
            return None


    def sync_and_patch(self, original, revised):
        """Ensure the number of leading and trailing empty lines in `revised` matches those in `original`."""

        def count_leading_empty_lines(s, reverse=False):
            """Count the number of leading empty lines in a string."""
            lines = s.split('\n')
            if reverse: lines = list(reversed(lines))
            count = 0
            for line in lines:
                if line.strip() == '':
                    count += 1
                else:
                    break
            return count

        original_empty_lines = count_leading_empty_lines(original)
        revised_empty_lines = count_leading_empty_lines(revised)

        if original_empty_lines > revised_empty_lines:
            additional_lines = '\n' * (original_empty_lines - revised_empty_lines)
            revised = additional_lines + revised
        elif original_empty_lines < revised_empty_lines:
            lines = revised.split('\n')
            revised = '\n'.join(lines[revised_empty_lines - original_empty_lines:])

        original_empty_lines = count_leading_empty_lines(original, reverse=True)
        revised_empty_lines = count_leading_empty_lines(revised, reverse=True)

        if original_empty_lines > revised_empty_lines:
            additional_lines = '\n' * (original_empty_lines - revised_empty_lines)
            revised = revised + additional_lines
        elif original_empty_lines < revised_empty_lines:
            lines = revised.split('\n')
            revised = '\n'.join(lines[:-(revised_empty_lines - original_empty_lines)])

        return revised

    def begin_comment_source_code(self, chatbot=None, history=None):
        # from toolbox import update_ui_latest_msg
        assert self.path is not None
        assert '.py' in self.path # must be python source code
        # write_target = self.path + '.revised.py'

        write_content = ""
        # with open(self.path + '.revised.py', 'w+', encoding='utf8') as f:
        while True:
            try:
                # yield from update_ui_latest_msg(f"({self.file_basename}) 正在读取下一段代码片段:\n", chatbot=chatbot, history=history, delay=0)
                next_batch, line_no_start, line_no_end = self.get_next_batch()
                self.observe_window_update(f"正在处理{self.file_basename} - {line_no_start}/{len(self.full_context)}\n")
                # yield from update_ui_latest_msg(f"({self.file_basename}) 处理代码片段:\n\n{next_batch}", chatbot=chatbot, history=history, delay=0)

                hint = None
                MAX_ATTEMPT = 2
                for attempt in range(MAX_ATTEMPT):
                    result = self.tag_code(next_batch, hint)
                    try:
                        successful, hint = self.verify_successful(next_batch, result)
                    except Exception as e:
                        logger.error('ignored exception:\n' + str(e))
                        break
                    if successful:
                        break
                    if attempt == MAX_ATTEMPT - 1:
                        # cannot deal with this, give up
                        result = next_batch
                        break

                # f.write(result)
                write_content += result
            except StopIteration:
                next_batch, line_no_start, line_no_end = [], -1, -1
                return None, write_content

    def verify_successful(self, original, revised):
        """Determine whether the revised code still contains every line that already exists.
        """
        from crazy_functions.ast_fns.comment_remove import remove_python_comments
        original = remove_python_comments(original)
        original_lines = original.split('\n')
        revised_lines = revised.split('\n')

        for l in original_lines:
            l = l.strip()
            if '\'' in l or '\"' in l: continue # ast sometimes toggles " to '
            found = False
            for lt in revised_lines:
                if l in lt:
                    found = True
                    break
            if not found:
                return False, l
        return True, None
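The `L0000 |`-style tagging that `generate_tagged_code_from_full_context` applies, and that both prompts above rely on, is easy to preview in isolation:

```python
# Tag each source line with a zero-padded line marker, as the agent does
full_context = ["import sys\n", "def f():\n", "    return 1\n"]
tagged = [f"L{i:04} | {line}" for i, line in enumerate(full_context)]
print(''.join(tagged))
# L0000 | import sys
# L0001 | def f():
# L0002 |     return 1
```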
45  crazy_functions/agent_fns/python_comment_compare.html  Normal file
@@ -0,0 +1,45 @@
<!DOCTYPE html>
<html lang="zh-CN">
<head>
    <style>ADVANCED_CSS</style>
    <meta charset="UTF-8">
    <title>源文件对比</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            display: flex;
            justify-content: center;
            align-items: center;
            height: 100vh;
            margin: 0;
        }
        .container {
            display: flex;
            width: 95%;
            height: -webkit-fill-available;
        }
        .code-container {
            flex: 1;
            margin: 0px;
            padding: 0px;
            border: 1px solid #ccc;
            background-color: #f9f9f9;
            overflow: auto;
        }
        pre {
            white-space: pre-wrap;
            word-wrap: break-word;
        }
    </style>
</head>
<body>
    <div class="container">
        <div class="code-container">
            REPLACE_CODE_FILE_LEFT
        </div>
        <div class="code-container">
            REPLACE_CODE_FILE_RIGHT
        </div>
    </div>
</body>
</html>
29  crazy_functions/agent_fns/watchdog.py  Normal file
@@ -0,0 +1,29 @@
import threading, time
from loguru import logger

class WatchDog():
    def __init__(self, timeout, bark_fn, interval=3, msg="") -> None:
        self.last_feed = None
        self.timeout = timeout
        self.bark_fn = bark_fn
        self.interval = interval
        self.msg = msg
        self.kill_dog = False

    def watch(self):
        while True:
            if self.kill_dog: break
            if time.time() - self.last_feed > self.timeout:
                if len(self.msg) > 0: logger.info(self.msg)
                self.bark_fn()
                break
            time.sleep(self.interval)

    def begin_watch(self):
        self.last_feed = time.time()
        th = threading.Thread(target=self.watch)
        th.daemon = True
        th.start()

    def feed(self):
        self.last_feed = time.time()
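A minimal usage sketch of the WatchDog above: `bark_fn` fires once in the daemon thread when `feed()` stops arriving within `timeout` seconds:

```python
import time

dog = WatchDog(timeout=2, bark_fn=lambda: print("watchdog barked"), interval=1)
dog.begin_watch()
for _ in range(3):
    time.sleep(1)
    dog.feed()        # keep the dog quiet while work makes progress
time.sleep(4)         # stop feeding -> bark_fn fires after the timeout elapses
```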
54  crazy_functions/ast_fns/comment_remove.py  Normal file
@@ -0,0 +1,54 @@
import token
import tokenize
import copy
import io


def remove_python_comments(input_source: str) -> str:
    source_flag = copy.copy(input_source)
    source = io.StringIO(input_source)
    ls = input_source.split('\n')
    prev_toktype = token.INDENT
    readline = source.readline

    def get_char_index(lineno, col):
        # find the index of the char in the source code
        if lineno == 1:
            return len('\n'.join(ls[:(lineno-1)])) + col
        else:
            return len('\n'.join(ls[:(lineno-1)])) + col + 1

    def replace_char_between(start_lineno, start_col, end_lineno, end_col, source, replace_char, ls):
        # replace chars between (start_lineno, start_col) and (end_lineno, end_col) with replace_char, but keep '\n' and ' '
        b = get_char_index(start_lineno, start_col)
        e = get_char_index(end_lineno, end_col)
        for i in range(b, e):
            if source[i] == '\n':
                source = source[:i] + '\n' + source[i+1:]
            elif source[i] == ' ':
                source = source[:i] + ' ' + source[i+1:]
            else:
                source = source[:i] + replace_char + source[i+1:]
        return source

    tokgen = tokenize.generate_tokens(readline)
    for toktype, ttext, (slineno, scol), (elineno, ecol), ltext in tokgen:
        if toktype == token.STRING and (prev_toktype == token.INDENT):
            source_flag = replace_char_between(slineno, scol, elineno, ecol, source_flag, ' ', ls)
        elif toktype == token.STRING and (prev_toktype == token.NEWLINE):
            source_flag = replace_char_between(slineno, scol, elineno, ecol, source_flag, ' ', ls)
        elif toktype == tokenize.COMMENT:
            source_flag = replace_char_between(slineno, scol, elineno, ecol, source_flag, ' ', ls)
        prev_toktype = toktype
    return source_flag


# example usage
if __name__ == "__main__":
    with open("source.py", "r", encoding="utf-8") as f:
        source_code = f.read()

    cleaned_code = remove_python_comments(source_code)

    with open("cleaned_source.py", "w", encoding="utf-8") as f:
        f.write(cleaned_code)
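Besides the file-based example above, the function can be exercised on an inline string; comments and docstrings are blanked while the layout of the remaining code is preserved:

```python
src = 'def f():\n    """doc"""\n    return 1  # the answer\n'
print(remove_python_comments(src))
# -> the docstring and the trailing comment are replaced by spaces,
#    so line/column positions of the remaining code are unchanged
```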
@@ -1,231 +0,0 @@
"""
What is this?
    This file contains the unit tests for the function plugins.
    Run with: python crazy_functions/crazy_functions_test.py
"""

# ==============================================================================================================================

def validate_path():
    import os, sys
    dir_name = os.path.dirname(__file__)
    root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..')
    os.chdir(root_dir_assume)
    sys.path.append(root_dir_assume)
validate_path()  # validate path so you can run from base directory

# ==============================================================================================================================

from colorful import *
from toolbox import get_conf, ChatBotWithCookies
import contextlib
import os
import sys
from functools import wraps
proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \
    get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY')

llm_kwargs = {
    'api_key': API_KEY,
    'llm_model': LLM_MODEL,
    'top_p': 1.0,
    'max_length': None,
    'temperature': 1.0,
}
plugin_kwargs = { }
chatbot = ChatBotWithCookies(llm_kwargs)
history = []
system_prompt = "Serve me as a writing and programming assistant."
web_port = 1024

# ==============================================================================================================================

def silence_stdout(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        _original_stdout = sys.stdout
        sys.stdout = open(os.devnull, 'w')
        for q in func(*args, **kwargs):
            sys.stdout = _original_stdout
            yield q
            sys.stdout = open(os.devnull, 'w')
        sys.stdout.close()
        sys.stdout = _original_stdout
    return wrapper


class CLI_Printer():
    def __init__(self) -> None:
        self.pre_buf = ""

    def print(self, buf):
        bufp = ""
        for index, chat in enumerate(buf):
            a, b = chat
            bufp += sprint亮靛('[Me]:' + a) + '\n'
            bufp += '[GPT]:' + b
            if index < len(buf)-1:
                bufp += '\n'

        if self.pre_buf != "" and bufp.startswith(self.pre_buf):
            print(bufp[len(self.pre_buf):], end='')
        else:
            print('\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n'+bufp, end='')
        self.pre_buf = bufp
        return

cli_printer = CLI_Printer()
# ==============================================================================================================================
def test_解析一个Python项目():
    from crazy_functions.解析项目源代码 import 解析一个Python项目
    txt = "crazy_functions/test_project/python/dqn"
    for cookies, cb, hist, msg in 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)

def test_解析一个Cpp项目():
    from crazy_functions.解析项目源代码 import 解析一个C项目
    txt = "crazy_functions/test_project/cpp/cppipc"
    for cookies, cb, hist, msg in 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)

def test_Latex英文润色():
    from crazy_functions.Latex全文润色 import Latex英文润色
    txt = "crazy_functions/test_project/latex/attention"
    for cookies, cb, hist, msg in Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)

def test_Markdown中译英():
    from crazy_functions.批量Markdown翻译 import Markdown中译英
    txt = "README.md"
    for cookies, cb, hist, msg in Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)

def test_批量翻译PDF文档():
    from crazy_functions.批量翻译PDF文档_多线程 import 批量翻译PDF文档
    txt = "crazy_functions/test_project/pdf_and_word"
    for cookies, cb, hist, msg in 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)

def test_谷歌检索小助手():
    from crazy_functions.谷歌检索小助手 import 谷歌检索小助手
    txt = "https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=auto+reinforcement+learning&btnG="
    for cookies, cb, hist, msg in 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)

def test_总结word文档():
    from crazy_functions.总结word文档 import 总结word文档
    txt = "crazy_functions/test_project/pdf_and_word"
    for cookies, cb, hist, msg in 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)

def test_下载arxiv论文并翻译摘要():
    from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要
    txt = "1812.10695"
    for cookies, cb, hist, msg in 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)

def test_联网回答问题():
    from crazy_functions.联网的ChatGPT import 连接网络回答问题
    # txt = "谁是应急食品?"
    # >> '根据以上搜索结果可以得知,应急食品是“原神”游戏中的角色派蒙的外号。'
    # txt = "道路千万条,安全第一条。后面两句是?"
    # >> '行车不规范,亲人两行泪。'
    # txt = "You should have gone for the head. What does that mean?"
    # >> The phrase "You should have gone for the head" is a quote from the Marvel movies, Avengers: Infinity War and Avengers: Endgame. It was spoken by the character Thanos in Infinity War and by Thor in Endgame.
    txt = "AutoGPT是什么?"
    for cookies, cb, hist, msg in 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print("当前问答:", cb[-1][-1].replace("\n", " "))
    for i, it in enumerate(cb): print亮蓝(it[0]); print亮黄(it[1])

def test_解析ipynb文件():
    from crazy_functions.解析JupyterNotebook import 解析ipynb文件
    txt = "crazy_functions/test_samples"
    for cookies, cb, hist, msg in 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)


def test_数学动画生成manim():
    from crazy_functions.数学动画生成manim import 动画生成
    txt = "A ball split into 2, and then split into 4, and finally split into 8."
    for cookies, cb, hist, msg in 动画生成(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        print(cb)


def test_Markdown多语言():
    from crazy_functions.批量Markdown翻译 import Markdown翻译指定语言
    txt = "README.md"
    history = []
    for lang in ["English", "French", "Japanese", "Korean", "Russian", "Italian", "German", "Portuguese", "Arabic"]:
        plugin_kwargs = {"advanced_arg": lang}
        for cookies, cb, hist, msg in Markdown翻译指定语言(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
            print(cb)

def test_Langchain知识库():
    from crazy_functions.Langchain知识库 import 知识库问答
    txt = "./"
    chatbot = ChatBotWithCookies(llm_kwargs)
    for cookies, cb, hist, msg in silence_stdout(知识库问答)(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        cli_printer.print(cb)  # print(cb)

    chatbot = ChatBotWithCookies(cookies)
    from crazy_functions.Langchain知识库 import 读取知识库作答
    txt = "What is the installation method?"
    for cookies, cb, hist, msg in silence_stdout(读取知识库作答)(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        cli_printer.print(cb)  # print(cb)

def test_Langchain知识库读取():
    from crazy_functions.Langchain知识库 import 读取知识库作答
    txt = "远程云服务器部署?"
    for cookies, cb, hist, msg in silence_stdout(读取知识库作答)(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        cli_printer.print(cb)  # print(cb)

def test_Latex():
    from crazy_functions.Latex输出PDF结果 import Latex英文纠错加PDF对比, Latex翻译中文并重新编译PDF

    # txt = r"https://arxiv.org/abs/1706.03762"
    # txt = r"https://arxiv.org/abs/1902.03185"
    # txt = r"https://arxiv.org/abs/2305.18290"
    # txt = r"https://arxiv.org/abs/2305.17608"
    # txt = r"https://arxiv.org/abs/2211.16068"  # ACE
    # txt = r"C:\Users\x\arxiv_cache\2211.16068\workfolder"  # ACE
    # txt = r"https://arxiv.org/abs/2002.09253"
    # txt = r"https://arxiv.org/abs/2306.07831"
    # txt = r"https://arxiv.org/abs/2212.10156"
    # txt = r"https://arxiv.org/abs/2211.11559"
    # txt = r"https://arxiv.org/abs/2303.08774"
    txt = r"https://arxiv.org/abs/2303.12712"
    # txt = r"C:\Users\fuqingxu\arxiv_cache\2303.12712\workfolder"

    for cookies, cb, hist, msg in (Latex翻译中文并重新编译PDF)(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
        cli_printer.print(cb)  # print(cb)

    # txt = "2302.02948.tar"
    # print(txt)
    # main_tex, work_folder = Latex预处理(txt)
    # print('main tex:', main_tex)
    # res = 编译Latex(main_tex, work_folder)
    # # for cookies, cb, hist, msg in silence_stdout(编译Latex)(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    #     cli_printer.print(cb)  # print(cb)


# test_解析一个Python项目()
# test_Latex英文润色()
# test_Markdown中译英()
# test_批量翻译PDF文档()
# test_谷歌检索小助手()
# test_总结word文档()
# test_下载arxiv论文并翻译摘要()
# test_解析一个Cpp项目()
# test_联网回答问题()
# test_解析ipynb文件()
# test_数学动画生成manim()
# test_Langchain知识库()
# test_Langchain知识库读取()
if __name__ == "__main__":
    test_Latex()
    input("程序完成,回车退出。")
    print("退出。")
@@ -1,25 +1,41 @@
-from toolbox import update_ui, get_conf, trimmed_format_exc
+import os
+import threading
+from loguru import logger
+from shared_utils.char_visual_effect import scrolling_visual_effect
+from toolbox import update_ui, get_conf, trimmed_format_exc, get_max_token, Singleton

-def input_clipping(inputs, history, max_token_limit):
+def input_clipping(inputs, history, max_token_limit, return_clip_flags=False):
     """
     When the input text plus the history exceeds the maximum limit, drop part of the text.
     Input:
     - inputs: the current request
     - history: the conversation history
     - max_token_limit: the maximum token budget
     Output:
     - inputs: the current request (clipped)
     - history: the conversation history (clipped)
     """
     import numpy as np
-    from request_llm.bridge_all import model_info
+    from request_llms.bridge_all import model_info
     enc = model_info["gpt-3.5-turbo"]['tokenizer']
     def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))

     mode = 'input-and-history'
     # when the input takes up less than half of the budget, clip only the history
     input_token_num = get_token_num(inputs)
+    original_input_len = len(inputs)
     if input_token_num < max_token_limit//2:
         mode = 'only-history'
         max_token_limit = max_token_limit - input_token_num

     everything = [inputs] if mode == 'input-and-history' else ['']
     everything.extend(history)
-    n_token = get_token_num('\n'.join(everything))
+    full_token_num = n_token = get_token_num('\n'.join(everything))
     everything_token = [get_token_num(e) for e in everything]
+    everything_token_num = sum(everything_token)
     delta = max(everything_token) // 16  # truncation granularity

     while n_token > max_token_limit:
         where = np.argmax(everything_token)
         encoded = enc.encode(everything[where], disallowed_special=())
@@ -30,15 +46,29 @@ def input_clipping(inputs, history, max_token_limit):

     if mode == 'input-and-history':
         inputs = everything[0]
+        full_token_num = everything_token_num
     else:
-        pass
+        full_token_num = everything_token_num + input_token_num

     history = everything[1:]
-    return inputs, history
+
+    flags = {
+        "mode": mode,
+        "original_input_token_num": input_token_num,
+        "original_full_token_num": full_token_num,
+        "original_input_len": original_input_len,
+        "clipped_input_len": len(inputs),
+    }
+
+    if not return_clip_flags:
+        return inputs, history
+    else:
+        return inputs, history, flags

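A minimal usage sketch of the updated `input_clipping` (the texts and budget below are hypothetical; this assumes the module is importable as `crazy_functions.crazy_utils` and that `request_llms.bridge_all.model_info` can supply the gpt-3.5-turbo tokenizer):

```python
from crazy_functions.crazy_utils import input_clipping

inputs = "Summarize the following paper section: ..."
history = ["previous question", "a very long previous answer " * 500]  # will be clipped

# legacy call: returns only the clipped texts
inputs, history = input_clipping(inputs, history, max_token_limit=4096)

# new call: additionally returns diagnostics about what was clipped
inputs, history, flags = input_clipping(inputs, history, max_token_limit=4096,
                                        return_clip_flags=True)
print(flags["mode"], flags["original_full_token_num"], flags["clipped_input_len"])
```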
 def request_gpt_model_in_new_thread_with_ui_alive(
         inputs, inputs_show_user, llm_kwargs,
         chatbot, history, sys_prompt, refresh_interval=0.2,
         handle_token_exceed=True,
         retry_times_at_unknown_error=2,
         ):
     """
@@ -61,18 +91,21 @@ def request_gpt_model_in_new_thread_with_ui_alive(
     """
     import time
     from concurrent.futures import ThreadPoolExecutor
-    from request_llm.bridge_all import predict_no_ui_long_connection
+    from request_llms.bridge_all import predict_no_ui_long_connection
     # user feedback
     chatbot.append([inputs_show_user, ""])
     yield from update_ui(chatbot=chatbot, history=[])  # refresh the UI
     executor = ThreadPoolExecutor(max_workers=16)
     mutable = ["", time.time(), ""]
+    # watchdog patience
+    watch_dog_patience = 5
     # the request task
     def _req_gpt(inputs, history, sys_prompt):
         retry_op = retry_times_at_unknown_error
         exceeded_cnt = 0
         while True:
             # watchdog error
-            if len(mutable) >= 2 and (time.time()-mutable[1]) > 5:
+            if len(mutable) >= 2 and (time.time()-mutable[1]) > watch_dog_patience:
                 raise RuntimeError("检测到程序终止。")
             try:
                 # Case 1: completed successfully
@@ -87,7 +120,7 @@ def request_gpt_model_in_new_thread_with_ui_alive(
                 # chosen handling: compute a ratio and keep as much text as possible
                 from toolbox import get_reduce_token_percent
                 p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error))
-                MAX_TOKEN = 4096
+                MAX_TOKEN = get_max_token(llm_kwargs)
                 EXCEED_ALLO = 512 + 512 * exceeded_cnt
                 inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO)
                 mutable[0] += f'[Local Message] 警告,文本过长将进行截断,Token溢出数:{n_exceed}。\n\n'
@@ -100,7 +133,7 @@ def request_gpt_model_in_new_thread_with_ui_alive(
             except:
                 # Case 3: other errors, retry a few times
                 tb_str = '```\n' + trimmed_format_exc() + '```'
-                print(tb_str)
+                logger.error(tb_str)
                 mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
                 if retry_op > 0:
                     retry_op -= 1
@@ -130,11 +163,31 @@ def request_gpt_model_in_new_thread_with_ui_alive(
     yield from update_ui(chatbot=chatbot, history=[])  # on final success, remove the error message
     return final_result

+def can_multi_process(llm) -> bool:
+    from request_llms.bridge_all import model_info
+
+    def default_condition(llm) -> bool:
+        # legacy condition
+        if llm.startswith('gpt-'): return True
+        if llm.startswith('chatgpt-'): return True
+        if llm.startswith('api2d-'): return True
+        if llm.startswith('azure-'): return True
+        if llm.startswith('spark'): return True
+        if llm.startswith('zhipuai') or llm.startswith('glm-'): return True
+        return False
+
+    if llm in model_info:
+        if 'can_multi_thread' in model_info[llm]:
+            return model_info[llm]['can_multi_thread']
+        else:
+            return default_condition(llm)
+    else:
+        return default_condition(llm)

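A quick illustration of how `can_multi_process` falls back (the model names below are hypothetical examples):

```python
from crazy_functions.crazy_utils import can_multi_process

# a model registered in model_info uses its 'can_multi_thread' flag if present,
# otherwise the prefix-based legacy condition (True here via the 'gpt-' prefix)
print(can_multi_process('gpt-3.5-turbo'))

# an unregistered model name always falls back to the prefix rule
print(can_multi_process('my-local-llm'))    # False, no matching prefix
print(can_multi_process('azure-gpt4-dev'))  # True via the 'azure-' prefix
```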
 def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
         inputs_array, inputs_show_user_array, llm_kwargs,
         chatbot, history_array, sys_prompt_array,
-        refresh_interval=0.2, max_workers=-1, scroller_max_len=30,
+        refresh_interval=0.2, max_workers=-1, scroller_max_len=75,
         handle_token_exceed=True, show_user_at_complete=False,
         retry_times_at_unknown_error=2,
         ):
@@ -167,17 +220,17 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
     """
     import time, random
     from concurrent.futures import ThreadPoolExecutor
-    from request_llm.bridge_all import predict_no_ui_long_connection
+    from request_llms.bridge_all import predict_no_ui_long_connection
     assert len(inputs_array) == len(history_array)
     assert len(inputs_array) == len(sys_prompt_array)
     if max_workers == -1:  # read from the config file
-        try: max_workers, = get_conf('DEFAULT_WORKER_NUM')
+        try: max_workers = get_conf('DEFAULT_WORKER_NUM')
         except: max_workers = 8
     if max_workers <= 0: max_workers = 3
     # disable multi-threading for chatglm-like models, it may cause severe lag
-    if not (llm_kwargs['llm_model'].startswith('gpt-') or llm_kwargs['llm_model'].startswith('api2d-')):
+    if not can_multi_process(llm_kwargs['llm_model']):
         max_workers = 1

     executor = ThreadPoolExecutor(max_workers=max_workers)
     n_frag = len(inputs_array)
     # user feedback
@@ -186,33 +239,35 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
     # cross-thread state
     mutable = [["", time.time(), "等待中"] for _ in range(n_frag)]

+    # watchdog patience
+    watch_dog_patience = 5

     # the worker-thread task
     def _req_gpt(index, inputs, history, sys_prompt):
         gpt_say = ""
         retry_op = retry_times_at_unknown_error
         exceeded_cnt = 0
         mutable[index][2] = "执行中"
+        detect_timeout = lambda: len(mutable[index]) >= 2 and (time.time()-mutable[index][1]) > watch_dog_patience
         while True:
             # watchdog error
-            if len(mutable[index]) >= 2 and (time.time()-mutable[index][1]) > 5:
-                raise RuntimeError("检测到程序终止。")
+            if detect_timeout(): raise RuntimeError("检测到程序终止。")
             try:
                 # Case 1: completed successfully
                 # time.sleep(10); raise RuntimeError("测试")
                 gpt_say = predict_no_ui_long_connection(
-                    inputs=inputs, llm_kwargs=llm_kwargs, history=history,
-                    sys_prompt=sys_prompt, observe_window=mutable[index], console_slience=True
+                    inputs=inputs, llm_kwargs=llm_kwargs, history=history,
+                    sys_prompt=sys_prompt, observe_window=mutable[index], console_silence=True
                 )
                 mutable[index][2] = "已成功"
                 return gpt_say
             except ConnectionAbortedError as token_exceeded_error:
                 # Case 2: token overflow
                 if handle_token_exceed:
                     exceeded_cnt += 1
                     # chosen handling: compute a ratio and keep as much text as possible
                     from toolbox import get_reduce_token_percent
                     p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error))
-                    MAX_TOKEN = 4096
+                    MAX_TOKEN = get_max_token(llm_kwargs)
                     EXCEED_ALLO = 512 + 512 * exceeded_cnt
                     inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO)
                     gpt_say += f'[Local Message] 警告,文本过长将进行截断,Token溢出数:{n_exceed}。\n\n'
@@ -227,11 +282,12 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
                     return gpt_say  # give up
             except:
                 # Case 3: other errors
+                if detect_timeout(): raise RuntimeError("检测到程序终止。")
                 tb_str = '```\n' + trimmed_format_exc() + '```'
-                print(tb_str)
+                logger.error(tb_str)
                 gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
                 if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0]
                 if retry_op > 0:
                     retry_op -= 1
                     wait = random.randint(5, 20)
                     if ("Rate limit reached" in tb_str) or ("Too Many Requests" in tb_str):
@@ -243,6 +299,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
                     for i in range(wait):
                         mutable[index][2] = f"{fail_info}等待重试 {wait-i}"; time.sleep(1)
                     # start retrying
+                    if detect_timeout(): raise RuntimeError("检测到程序终止。")
                     mutable[index][2] = f"重试中 {retry_times_at_unknown_error-retry_op}/{retry_times_at_unknown_error}"
                     continue  # go back and retry
                 else:
@@ -255,6 +312,8 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
     futures = [executor.submit(_req_gpt, index, inputs, history, sys_prompt) for index, inputs, history, sys_prompt in zip(
         range(len(inputs_array)), inputs_array, history_array, sys_prompt_array)]
     cnt = 0
+
     while True:
         # yield once to refresh the frontend
         time.sleep(refresh_interval)
@@ -267,13 +326,11 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
             mutable[thread_index][1] = time.time()
         # print something fun on the frontend
         for thread_index, _ in enumerate(worker_done):
-            print_something_really_funny = "[ ...`"+mutable[thread_index][0][-scroller_max_len:].\
-                replace('\n', '').replace('```', '...').replace(
-                    ' ', '.').replace('<br/>', '.....').replace('$', '.')+"`... ]"
+            print_something_really_funny = f"[ ...`{scrolling_visual_effect(mutable[thread_index][0], scroller_max_len)}`... ]"
             observe_win.append(print_something_really_funny)
         # print something fun on the frontend
         stat_str = ''.join([f'`{mutable[thread_index][2]}`: {obs}\n\n'
                             if not done else f'`{mutable[thread_index][2]}`\n\n'
                             for thread_index, done, obs in zip(range(len(worker_done)), worker_done, observe_win)])
         # print something fun on the frontend
         chatbot[-1] = [chatbot[-1][0], f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt % 10+1))]
@@ -287,106 +344,17 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
     for inputs_show_user, f in zip(inputs_show_user_array, futures):
         gpt_res = f.result()
         gpt_response_collection.extend([inputs_show_user, gpt_res])

     # whether to show the results in the UI at completion
     if show_user_at_complete:
         for inputs_show_user, f in zip(inputs_show_user_array, futures):
             gpt_res = f.result()
             chatbot.append([inputs_show_user, gpt_res])
             yield from update_ui(chatbot=chatbot, history=[])  # refresh the UI
-            time.sleep(0.3)
+            time.sleep(0.5)
     return gpt_response_collection

-def breakdown_txt_to_satisfy_token_limit(txt, get_token_fn, limit):
-    def cut(txt_tocut, must_break_at_empty_line):  # recursive
-        if get_token_fn(txt_tocut) <= limit:
-            return [txt_tocut]
-        else:
-            lines = txt_tocut.split('\n')
-            estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines)
-            estimated_line_cut = int(estimated_line_cut)
-            for cnt in reversed(range(estimated_line_cut)):
-                if must_break_at_empty_line:
-                    if lines[cnt] != "":
-                        continue
-                print(cnt)
-                prev = "\n".join(lines[:cnt])
-                post = "\n".join(lines[cnt:])
-                if get_token_fn(prev) < limit:
-                    break
-            if cnt == 0:
-                raise RuntimeError("存在一行极长的文本!")
-            # print(len(post))
-            # recursively chain the resulting lists
-            result = [prev]
-            result.extend(cut(post, must_break_at_empty_line))
-            return result
-    try:
-        return cut(txt, must_break_at_empty_line=True)
-    except RuntimeError:
-        return cut(txt, must_break_at_empty_line=False)

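Although this helper is removed in this diff, its contract is easy to see in isolation: split a text into segments whose token counts all fit under the limit, preferring blank-line boundaries and falling back to arbitrary line boundaries. A self-contained sketch with a crude stand-in token counter (the real callers pass a tiktoken-based function):

```python
def get_token_fn(txt):
    return len(txt) // 4  # rough stand-in: ~4 characters per token

long_text = "\n".join(f"line {i}: some moderately sized sentence." for i in range(2000))

segments = breakdown_txt_to_satisfy_token_limit(long_text, get_token_fn, limit=1024)
assert all(get_token_fn(s) <= 1024 for s in segments)  # every segment fits the budget
```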

-def force_breakdown(txt, limit, get_token_fn):
-    """
-    When the text cannot be split at punctuation or blank lines, fall back to brute-force cutting.
-    """
-    for i in reversed(range(len(txt))):
-        if get_token_fn(txt[:i]) < limit:
-            return txt[:i], txt[i:]
-    return "Tiktoken未知错误", "Tiktoken未知错误"

-def breakdown_txt_to_satisfy_token_limit_for_pdf(txt, get_token_fn, limit):
-    # recursive
-    def cut(txt_tocut, must_break_at_empty_line, break_anyway=False):
-        if get_token_fn(txt_tocut) <= limit:
-            return [txt_tocut]
-        else:
-            lines = txt_tocut.split('\n')
-            estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines)
-            estimated_line_cut = int(estimated_line_cut)
-            cnt = 0
-            for cnt in reversed(range(estimated_line_cut)):
-                if must_break_at_empty_line:
-                    if lines[cnt] != "":
-                        continue
-                prev = "\n".join(lines[:cnt])
-                post = "\n".join(lines[cnt:])
-                if get_token_fn(prev) < limit:
-                    break
-            if cnt == 0:
-                if break_anyway:
-                    prev, post = force_breakdown(txt_tocut, limit, get_token_fn)
-                else:
-                    raise RuntimeError(f"存在一行极长的文本!{txt_tocut}")
-            # print(len(post))
-            # recursively chain the resulting lists
-            result = [prev]
-            result.extend(cut(post, must_break_at_empty_line, break_anyway=break_anyway))
-            return result
-    try:
-        # attempt 1: split at double blank lines (\n\n)
-        return cut(txt, must_break_at_empty_line=True)
-    except RuntimeError:
-        try:
-            # attempt 2: split at single newlines (\n)
-            return cut(txt, must_break_at_empty_line=False)
-        except RuntimeError:
-            try:
-                # attempt 3: split at English periods (.)
-                res = cut(txt.replace('.', '。\n'), must_break_at_empty_line=False)  # the Chinese period here is deliberate; it serves as a marker
-                return [r.replace('。\n', '.') for r in res]
-            except RuntimeError as e:
-                try:
-                    # attempt 4: split at Chinese periods (。)
-                    res = cut(txt.replace('。', '。。\n'), must_break_at_empty_line=False)
-                    return [r.replace('。。\n', '。') for r in res]
-                except RuntimeError as e:
-                    # attempt 5: out of options, just cut anywhere
-                    return cut(txt, must_break_at_empty_line=False, break_anyway=True)

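The punctuation-swap in attempts 3 and 4 deserves a note: the text is rewritten so that sentence ends become newlines the line-based cutter can split on, and the marker is undone afterwards. A minimal sketch of the round-trip (simplified; the real code restores the marker on the joined segments):

```python
txt = "First sentence. Second sentence. Third sentence."

# attempt 3 rewrites '.' into a sentinel followed by a newline ...
marked = txt.replace('.', '。\n')
pieces = marked.split('\n')  # the kind of line list cut() then operates on

# ... and the caller restores the original punctuation afterwards
restored = [p.replace('。', '.') for p in pieces if p]
```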
 def read_and_clean_pdf_text(fp):
     """
@@ -411,7 +379,7 @@ def read_and_clean_pdf_text(fp):
     import fitz, copy
     import re
     import numpy as np
-    from colorful import print亮黄, print亮绿
+    # from shared_utils.colorful import print亮黄, print亮绿
     fc = 0  # index 0: text
     fs = 1  # index 1: font
     fb = 2  # index 2: bounding box
@@ -421,12 +389,12 @@ def read_and_clean_pdf_text(fp):
         """
        Extract the dominant font of a text block.
         """
-        fsize_statiscs = {}
+        fsize_statistics = {}
         for wtf in l['spans']:
-            if wtf['size'] not in fsize_statiscs: fsize_statiscs[wtf['size']] = 0
-            fsize_statiscs[wtf['size']] += len(wtf['text'])
-        return max(fsize_statiscs, key=fsize_statiscs.get)
+            if wtf['size'] not in fsize_statistics: fsize_statistics[wtf['size']] = 0
+            fsize_statistics[wtf['size']] += len(wtf['text'])
+        return max(fsize_statistics, key=fsize_statistics.get)

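In other words, a block's main font size is the one that backs the most characters. The same idea in a standalone sketch (toy span data, not the real fitz span dicts):

```python
# toy spans: (font_size, text) pairs
spans = [(9.0, "footnote"), (11.0, "body text of the paragraph"), (11.0, "more body text")]

stats = {}
for size, text in spans:
    stats[size] = stats.get(size, 0) + len(text)  # weight each size by character count
print(max(stats, key=stats.get))  # 11.0, the size covering the most characters
```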
     def ffsize_same(a, b):
         """
        Check whether two extracted font sizes are approximately equal.
@@ -462,21 +430,23 @@ def read_and_clean_pdf_text(fp):
             if index == 0:
                 page_one_meta = [" ".join(["".join([wtf['text'] for wtf in l['spans']]) for l in t['lines']]).replace(
                     '- ', '') for t in text_areas['blocks'] if 'lines' in t]

             ############################## <Step 2: extract the main body font> ##################################
-            fsize_statiscs = {}
-            for span in meta_span:
-                if span[1] not in fsize_statiscs: fsize_statiscs[span[1]] = 0
-                fsize_statiscs[span[1]] += span[2]
-            main_fsize = max(fsize_statiscs, key=fsize_statiscs.get)
-            if REMOVE_FOOT_NOTE:
-                give_up_fize_threshold = main_fsize * REMOVE_FOOT_FFSIZE_PERCENT
+            try:
+                fsize_statistics = {}
+                for span in meta_span:
+                    if span[1] not in fsize_statistics: fsize_statistics[span[1]] = 0
+                    fsize_statistics[span[1]] += span[2]
+                main_fsize = max(fsize_statistics, key=fsize_statistics.get)
+                if REMOVE_FOOT_NOTE:
+                    give_up_fize_threshold = main_fsize * REMOVE_FOOT_FFSIZE_PERCENT
+            except:
+                raise RuntimeError(f'抱歉, 我们暂时无法解析此PDF文档: {fp}。')
             ############################## <Step 3: split and regroup> ##################################
             mega_sec = []
             sec = []
             for index, line in enumerate(meta_line):
                 if index == 0:
                     sec.append(line[fc])
                     continue
                 if REMOVE_FOOT_NOTE:
@@ -537,6 +507,9 @@ def read_and_clean_pdf_text(fp):
                 return True
             else:
                 return False
+        # some PDFs start the first paragraph with a lowercase word; capitalize it to avoid index errors
+        if starts_with_lowercase_word(meta_txt[0]):
+            meta_txt[0] = meta_txt[0].capitalize()
         for _ in range(100):
             for index, block_txt in enumerate(meta_txt):
                 if starts_with_lowercase_word(block_txt):
@@ -570,12 +543,12 @@ def get_files_from_everything(txt, type):  # type='.md'
     """
     This function collects all files of a given type (e.g. .md) under a directory; for files on the web, it can fetch them as well.
     The parameters and return values are:
     Parameters
     - txt: a path or URL; the file or folder to search, or a remote file.
     - type: a string; the file extension to search for, e.g. '.md'.
     Return values
     - success: bool, whether the function completed successfully.
     - file_manifest: a list of the absolute paths of all files with the given extension.
     - project_folder: the folder containing the files; for remote files, the path of a temporary folder.
     (Detailed comments for this function have been added; please confirm whether they meet your needs.)
     """
@@ -586,11 +559,16 @@ def get_files_from_everything(txt, type):  # type='.md'
         # a remote file on the web
         import requests
         from toolbox import get_conf
-        proxies, = get_conf('proxies')
-        r = requests.get(txt, proxies=proxies)
-        with open('./gpt_log/temp'+type, 'wb+') as f: f.write(r.content)
-        project_folder = './gpt_log/'
-        file_manifest = ['./gpt_log/temp'+type]
+        from toolbox import get_log_folder, gen_time_str
+        proxies = get_conf('proxies')
+        try:
+            r = requests.get(txt, proxies=proxies)
+        except:
+            raise ConnectionRefusedError(f"无法下载资源{txt},请检查。")
+        path = os.path.join(get_log_folder(plugin_name='web_download'), gen_time_str()+type)
+        with open(path, 'wb+') as f: f.write(r.content)
+        project_folder = get_log_folder(plugin_name='web_download')
+        file_manifest = [path]
     elif txt.endswith(type):
         # the file is given directly
         file_manifest = [txt]
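A usage sketch of `get_files_from_everything` (the paths and URL below are illustrative; return order follows the docstring):

```python
success, file_manifest, project_folder = get_files_from_everything("README.md", type='.md')
if success:
    print(project_folder, file_manifest)

# remote files are now downloaded into the plugin's log folder with a timestamped name
success, file_manifest, project_folder = get_files_from_everything(
    "https://example.com/some_document.md", type='.md')
```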
@@ -610,139 +588,64 @@ def get_files_from_everything(txt, type):  # type='.md'


-def Singleton(cls):
-    _instance = {}
-
-    def _singleton(*args, **kargs):
-        if cls not in _instance:
-            _instance[cls] = cls(*args, **kargs)
-        return _instance[cls]
-
-    return _singleton

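This decorator (now imported from `toolbox` instead, per the import change at the top of this file) is the classic singleton pattern: the first call constructs the instance, every later call returns it. A small self-contained illustration, reproducing the removed decorator for clarity:

```python
def Singleton(cls):
    _instance = {}
    def _singleton(*args, **kargs):
        if cls not in _instance:
            _instance[cls] = cls(*args, **kargs)
        return _instance[cls]
    return _singleton

@Singleton
class Counter:
    def __init__(self):
        self.n = 0

a, b = Counter(), Counter()
a.n += 1
print(b.n)  # 1: both names refer to the same, single instance
```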
-@Singleton
-class knowledge_archive_interface():
-    def __init__(self) -> None:
+class nougat_interface():
+    def __init__(self):
         self.threadLock = threading.Lock()
-        self.current_id = ""
-        self.kai_path = None
-        self.qa_handle = None
-        self.text2vec_large_chinese = None

-    def get_chinese_text2vec(self):
-        if self.text2vec_large_chinese is None:
-            # < ------------------- warm up the text-vectorization module --------------- >
-            from toolbox import ProxyNetworkActivate
-            print('Checking Text2vec ...')
-            from langchain.embeddings.huggingface import HuggingFaceEmbeddings
-            with ProxyNetworkActivate():  # temporarily activate the proxy network
-                self.text2vec_large_chinese = HuggingFaceEmbeddings(model_name="GanymedeNil/text2vec-large-chinese")
-
-        return self.text2vec_large_chinese
+    def nougat_with_timeout(self, command, cwd, timeout=3600):
+        import subprocess
+        from toolbox import ProxyNetworkActivate
+        logger.info(f'正在执行命令 {command}')
+        with ProxyNetworkActivate("Nougat_Download"):
+            process = subprocess.Popen(command, shell=False, cwd=cwd, env=os.environ)
+            try:
+                stdout, stderr = process.communicate(timeout=timeout)
+            except subprocess.TimeoutExpired:
+                process.kill()
+                stdout, stderr = process.communicate()
+                logger.error("Process timed out!")
+                return False
+        return True

-    def feed_archive(self, file_manifest, id="default"):
+    def NOUGAT_parse_pdf(self, fp, chatbot, history):
+        from toolbox import update_ui_latest_msg
+
+        yield from update_ui_latest_msg("正在解析论文, 请稍候。进度:正在排队, 等待线程锁...",
+                                        chatbot=chatbot, history=history, delay=0)
         self.threadLock.acquire()
-        # import uuid
-        self.current_id = id
-        from zh_langchain import construct_vector_store
-        self.qa_handle, self.kai_path = construct_vector_store(
-            vs_id=self.current_id,
-            files=file_manifest,
-            sentence_size=100,
-            history=[],
-            one_conent="",
-            one_content_segmentation="",
-            text2vec = self.get_chinese_text2vec(),
-        )
+        import glob, threading, os
+        from toolbox import get_log_folder, gen_time_str
+        dst = os.path.join(get_log_folder(plugin_name='nougat'), gen_time_str())
+        os.makedirs(dst)
+
+        yield from update_ui_latest_msg("正在解析论文, 请稍候。进度:正在加载NOUGAT... (提示:首次运行需要花费较长时间下载NOUGAT参数)",
+                                        chatbot=chatbot, history=history, delay=0)
+        command = ['nougat', '--out', os.path.abspath(dst), os.path.abspath(fp)]
+        self.nougat_with_timeout(command, cwd=os.getcwd(), timeout=3600)
+        res = glob.glob(os.path.join(dst, '*.mmd'))
+        if len(res) == 0:
+            self.threadLock.release()
+            raise RuntimeError("Nougat解析论文失败。")
         self.threadLock.release()
+        return res[0]

-    def get_current_archive_id(self):
-        return self.current_id
-
-    def get_loaded_file(self):
-        return self.qa_handle.get_loaded_file()
-
-    def answer_with_archive_by_id(self, txt, id):
-        self.threadLock.acquire()
-        if not self.current_id == id:
-            self.current_id = id
-            from zh_langchain import construct_vector_store
-            self.qa_handle, self.kai_path = construct_vector_store(
-                vs_id=self.current_id,
-                files=[],
-                sentence_size=100,
-                history=[],
-                one_conent="",
-                one_content_segmentation="",
-                text2vec = self.get_chinese_text2vec(),
-            )
-        VECTOR_SEARCH_SCORE_THRESHOLD = 0
-        VECTOR_SEARCH_TOP_K = 4
-        CHUNK_SIZE = 512
-        resp, prompt = self.qa_handle.get_knowledge_based_conent_test(
-            query = txt,
-            vs_path = self.kai_path,
-            score_threshold=VECTOR_SEARCH_SCORE_THRESHOLD,
-            vector_search_top_k=VECTOR_SEARCH_TOP_K,
-            chunk_conent=True,
-            chunk_size=CHUNK_SIZE,
-            text2vec = self.get_chinese_text2vec(),
-        )
-        self.threadLock.release()
-        return resp, prompt

-def try_install_deps(deps):
+def try_install_deps(deps, reload_m=[]):
+    import subprocess, sys, importlib
     for dep in deps:
-        import subprocess, sys
         subprocess.check_call([sys.executable, '-m', 'pip', 'install', '--user', dep])
+    import site
+    importlib.reload(site)
+    for m in reload_m:
+        importlib.reload(__import__(m))


 class construct_html():
     def __init__(self) -> None:
         self.css = """
.row {
    display: flex;
    flex-wrap: wrap;
}

.column {
    flex: 1;
    padding: 10px;
}

.table-header {
    font-weight: bold;
    border-bottom: 1px solid black;
}

.table-row {
    border-bottom: 1px solid lightgray;
}

.table-cell {
    padding: 5px;
}
"""
         self.html_string = f'<!DOCTYPE html><head><meta charset="utf-8"><title>翻译结果</title><style>{self.css}</style></head>'

     def add_row(self, a, b):
         tmp = """
<div class="row table-row">
    <div class="column table-cell">REPLACE_A</div>
    <div class="column table-cell">REPLACE_B</div>
</div>
"""
         from toolbox import markdown_convertion
         tmp = tmp.replace('REPLACE_A', markdown_convertion(a))
         tmp = tmp.replace('REPLACE_B', markdown_convertion(b))
         self.html_string += tmp

     def save_file(self, file_name):
         with open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f:
             f.write(self.html_string.encode('utf-8', 'ignore').decode())

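A usage sketch of `construct_html`, which builds a two-column source/translation report (the file name is arbitrary; `save_file` writes under ./gpt_log/):

```python
ch = construct_html()
ch.add_row("Original paragraph in English.", "翻译后的中文段落。")
ch.add_row("A second source cell.", "第二个译文单元格。")
ch.save_file("translate_result.html")  # written to ./gpt_log/translate_result.html
```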
 def get_plugin_arg(plugin_kwargs, key, default):
     # if the argument is empty
     if (key in plugin_kwargs) and (plugin_kwargs[key] == ""): plugin_kwargs.pop(key)
     # normal case
     return plugin_kwargs.get(key, default)

crazy_functions/diagram_fns/file_tree.py (new file, 127 lines)
@@ -0,0 +1,127 @@
import os
from textwrap import indent
from loguru import logger

class FileNode:
    def __init__(self, name, build_manifest=False):
        self.name = name
        self.children = []
        self.is_leaf = False
        self.level = 0
        self.parenting_ship = []
        self.comment = ""
        self.comment_maxlen_show = 50
        self.build_manifest = build_manifest
        self.manifest = {}

    @staticmethod
    def add_linebreaks_at_spaces(string, interval=10):
        return '\n'.join(string[i:i+interval] for i in range(0, len(string), interval))

    def sanitize_comment(self, comment):
        if len(comment) > self.comment_maxlen_show: suf = '...'
        else: suf = ''
        comment = comment[:self.comment_maxlen_show]
        comment = comment.replace('\"', '').replace('`', '').replace('\n', '').replace('`', '').replace('$', '')
        comment = self.add_linebreaks_at_spaces(comment, 10)
        return '`' + comment + suf + '`'

    def add_file(self, file_path, file_comment):
        directory_names, file_name = os.path.split(file_path)
        current_node = self
        level = 1
        if directory_names == "":
            new_node = FileNode(file_name)
            self.manifest[file_path] = new_node
            current_node.children.append(new_node)
            new_node.is_leaf = True
            new_node.comment = self.sanitize_comment(file_comment)
            new_node.level = level
            current_node = new_node
        else:
            dnamesplit = directory_names.split(os.sep)
            for i, directory_name in enumerate(dnamesplit):
                found_child = False
                level += 1
                for child in current_node.children:
                    if child.name == directory_name:
                        current_node = child
                        found_child = True
                        break
                if not found_child:
                    new_node = FileNode(directory_name)
                    current_node.children.append(new_node)
                    new_node.level = level - 1
                    current_node = new_node
            term = FileNode(file_name)
            self.manifest[file_path] = term
            term.level = level
            term.comment = self.sanitize_comment(file_comment)
            term.is_leaf = True
            current_node.children.append(term)

    def print_files_recursively(self, level=0, code="R0"):
        logger.info(' '*level + self.name + ' ' + str(self.is_leaf) + ' ' + str(self.level))
        for j, child in enumerate(self.children):
            child.print_files_recursively(level=level+1, code=code+str(j))
            self.parenting_ship.extend(child.parenting_ship)
            p1 = f"""{code}[\"🗎{self.name}\"]""" if self.is_leaf else f"""{code}[[\"📁{self.name}\"]]"""
            p2 = """ --> """
            p3 = f"""{code+str(j)}[\"🗎{child.name}\"]""" if child.is_leaf else f"""{code+str(j)}[[\"📁{child.name}\"]]"""
            edge_code = p1 + p2 + p3
            if edge_code in self.parenting_ship:
                continue
            self.parenting_ship.append(edge_code)
        if self.comment != "":
            pc1 = f"""{code}[\"🗎{self.name}\"]""" if self.is_leaf else f"""{code}[[\"📁{self.name}\"]]"""
            pc2 = f""" -.-x """
            pc3 = f"""C{code}[\"{self.comment}\"]:::Comment"""
            edge_code = pc1 + pc2 + pc3
            self.parenting_ship.append(edge_code)


MERMAID_TEMPLATE = r"""
```mermaid
flowchart LR
    %% <gpt_academic_hide_mermaid_code> a special marker used to hide this code block when the mermaid chart is rendered
    classDef Comment stroke-dasharray: 5 5
    subgraph {graph_name}
{relationship}
    end
```
"""

def build_file_tree_mermaid_diagram(file_manifest, file_comments, graph_name):
    # Create the root node
    file_tree_struct = FileNode("root")
    # Build the tree structure
    for file_path, file_comment in zip(file_manifest, file_comments):
        file_tree_struct.add_file(file_path, file_comment)
    file_tree_struct.print_files_recursively()
    cc = "\n".join(file_tree_struct.parenting_ship)
    ccc = indent(cc, prefix=" "*8)
    return MERMAID_TEMPLATE.format(graph_name=graph_name, relationship=ccc)

if __name__ == "__main__":
    # File manifest
    file_manifest = [
        "cradle_void_terminal.ipynb",
        "tests/test_utils.py",
        "tests/test_plugins.py",
        "tests/test_llms.py",
        "config.py",
        "build/ChatGLM-6b-onnx-u8s8/chatglm-6b-int8-onnx-merged/model_weights_0.bin",
        "crazy_functions/latex_fns/latex_actions.py",
        "crazy_functions/latex_fns/latex_toolbox.py"
    ]
    file_comments = [
        "根据位置和名称,可能是一个模块的初始化文件根据位置和名称,可能是一个模块的初始化文件根据位置和名称,可能是一个模块的初始化文件",
        "包含一些用于文本处理和模型微调的函数和装饰器包含一些用于文本处理和模型微调的函数和装饰器包含一些用于文本处理和模型微调的函数和装饰器",
        "用于构建HTML报告的类和方法用于构建HTML报告的类和方法用于构建HTML报告的类和方法",
        "包含了用于文本切分的函数,以及处理PDF文件的示例代码包含了用于文本切分的函数,以及处理PDF文件的示例代码包含了用于文本切分的函数,以及处理PDF文件的示例代码",
        "用于解析和翻译PDF文件的功能和相关辅助函数用于解析和翻译PDF文件的功能和相关辅助函数用于解析和翻译PDF文件的功能和相关辅助函数",
        "是一个包的初始化文件,用于初始化包的属性和导入模块是一个包的初始化文件,用于初始化包的属性和导入模块是一个包的初始化文件,用于初始化包的属性和导入模块",
        "用于加载和分割文件中的文本的通用文件加载器用于加载和分割文件中的文本的通用文件加载器用于加载和分割文件中的文本的通用文件加载器",
        "包含了用于构建和管理向量数据库的函数和类包含了用于构建和管理向量数据库的函数和类包含了用于构建和管理向量数据库的函数和类",
    ]
    logger.info(build_file_tree_mermaid_diagram(file_manifest, file_comments, "项目文件树"))
crazy_functions/game_fns/game_ascii_art.py (new file, 42 lines)
@@ -0,0 +1,42 @@
from toolbox import CatchException, update_ui, update_ui_latest_msg
from crazy_functions.multi_stage.multi_stage_utils import GptAcademicGameBaseState
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.game_fns.game_utils import get_code_block, is_same_thing
import random


class MiniGame_ASCII_Art(GptAcademicGameBaseState):
    def step(self, prompt, chatbot, history):
        if self.step_cnt == 0:
            chatbot.append(["我画你猜(动物)", "请稍等..."])
        else:
            if prompt.strip() == 'exit':
                self.delete_game = True
                yield from update_ui_latest_msg(lastmsg=f"谜底是{self.obj},游戏结束。", chatbot=chatbot, history=history, delay=0.)
                return
            chatbot.append([prompt, ""])
        yield from update_ui(chatbot=chatbot, history=history)

        if self.step_cnt == 0:
            self.lock_plugin(chatbot)
            self.cur_task = 'draw'

        if self.cur_task == 'draw':
            avail_obj = ["狗", "猫", "鸟", "鱼", "老鼠", "蛇"]
            self.obj = random.choice(avail_obj)
            inputs = "I want to play a game called Guess the ASCII art. You can draw the ASCII art and I will try to guess it. " + \
                f"This time you draw a {self.obj}. Note that you must not indicate what you have draw in the text, and you should only produce the ASCII art wrapped by ```. "
            raw_res = predict_no_ui_long_connection(inputs=inputs, llm_kwargs=self.llm_kwargs, history=[], sys_prompt="")
            self.cur_task = 'identify user guess'
            res = get_code_block(raw_res)
            history += ['', f'the answer is {self.obj}', inputs, res]
            yield from update_ui_latest_msg(lastmsg=res, chatbot=chatbot, history=history, delay=0.)

        elif self.cur_task == 'identify user guess':
            if is_same_thing(self.obj, prompt, self.llm_kwargs):
                self.delete_game = True
                yield from update_ui_latest_msg(lastmsg="你猜对了!", chatbot=chatbot, history=history, delay=0.)
            else:
                self.cur_task = 'identify user guess'
                yield from update_ui_latest_msg(lastmsg="猜错了,再试试,输入“exit”获取答案。", chatbot=chatbot, history=history, delay=0.)
crazy_functions/game_fns/game_interactive_story.py (new file, 212 lines)
@@ -0,0 +1,212 @@
prompts_hs = """ 请以“{headstart}”为开头,编写一个小说的第一幕。

- 尽量短,不要包含太多情节,因为你接下来将会与用户互动续写下面的情节,要留出足够的互动空间。
- 出现人物时,给出人物的名字。
- 积极地运用环境描写、人物描写等手法,让读者能够感受到你的故事世界。
- 积极地运用修辞手法,比如比喻、拟人、排比、对偶、夸张等等。
- 字数要求:第一幕的字数少于300字,且少于2个段落。
"""

prompts_interact = """ 小说的前文回顾:
「
{previously_on_story}
」

你是一个作家,根据以上的情节,给出4种不同的后续剧情发展方向,每个发展方向都精明扼要地用一句话说明。稍后,我将在这4个选择中,挑选一种剧情发展。

输出格式例如:
1. 后续剧情发展1
2. 后续剧情发展2
3. 后续剧情发展3
4. 后续剧情发展4
"""


prompts_resume = """小说的前文回顾:
「
{previously_on_story}
」

你是一个作家,我们正在互相讨论,确定后续剧情的发展。
在以下的剧情发展中,
「
{choice}
」
我认为更合理的是:{user_choice}。
请在前文的基础上(不要重复前文),围绕我选定的剧情情节,编写小说的下一幕。

- 禁止杜撰不符合我选择的剧情。
- 尽量短,不要包含太多情节,因为你接下来将会与用户互动续写下面的情节,要留出足够的互动空间。
- 不要重复前文。
- 出现人物时,给出人物的名字。
- 积极地运用环境描写、人物描写等手法,让读者能够感受到你的故事世界。
- 积极地运用修辞手法,比如比喻、拟人、排比、对偶、夸张等等。
- 小说的下一幕字数少于300字,且少于2个段落。
"""


prompts_terminate = """小说的前文回顾:
「
{previously_on_story}
」

你是一个作家,我们正在互相讨论,确定后续剧情的发展。
现在,故事该结束了,我认为最合理的故事结局是:{user_choice}。

请在前文的基础上(不要重复前文),编写小说的最后一幕。

- 不要重复前文。
- 出现人物时,给出人物的名字。
- 积极地运用环境描写、人物描写等手法,让读者能够感受到你的故事世界。
- 积极地运用修辞手法,比如比喻、拟人、排比、对偶、夸张等等。
- 字数要求:最后一幕的字数少于1000字。
"""


from toolbox import CatchException, update_ui, update_ui_latest_msg
from crazy_functions.multi_stage.multi_stage_utils import GptAcademicGameBaseState
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from request_llms.bridge_all import predict_no_ui_long_connection
from crazy_functions.game_fns.game_utils import get_code_block, is_same_thing
import random


class MiniGame_ResumeStory(GptAcademicGameBaseState):
    story_headstart = [
        '先行者知道,他现在是全宇宙中唯一的一个人了。',
        '深夜,一个年轻人穿过天安门广场向纪念堂走去。在二十二世纪编年史中,计算机把他的代号定为M102。',
        '他知道,这最后一课要提前讲了。又一阵剧痛从肝部袭来,几乎使他晕厥过去。',
        '在距地球五万光年的远方,在银河系的中心,一场延续了两万年的星际战争已接近尾声。那里的太空中渐渐隐现出一个方形区域,仿佛灿烂的群星的背景被剪出一个方口。',
        '伊依一行三人乘坐一艘游艇在南太平洋上做吟诗航行,他们的目的地是南极,如果几天后能顺利到达那里,他们将钻出地壳去看诗云。',
        '很多人生来就会莫名其妙地迷上一样东西,仿佛他的出生就是要和这东西约会似的,正是这样,圆圆迷上了肥皂泡。'
    ]


    def begin_game_step_0(self, prompt, chatbot, history):
        # init game at step 0
        self.headstart = random.choice(self.story_headstart)
        self.story = []
        chatbot.append(["互动写故事", f"这次的故事开头是:{self.headstart}"])
        self.sys_prompt_ = '你是一个想象力丰富的杰出作家。正在与你的朋友互动,一起写故事,因此你每次写的故事段落应少于300字(结局除外)。'


    def generate_story_image(self, story_paragraph):
        try:
            from crazy_functions.Image_Generate import gen_image
            prompt_ = predict_no_ui_long_connection(inputs=story_paragraph, llm_kwargs=self.llm_kwargs, history=[], sys_prompt='你需要根据用户给出的小说段落,进行简短的环境描写。要求:80字以内。')
            image_url, image_path = gen_image(self.llm_kwargs, prompt_, '512x512', model="dall-e-2", quality='standard', style='natural')
            return f'<br/><div align="center"><img src="file={image_path}"></div>'
        except:
            return ''

    def step(self, prompt, chatbot, history):

        """
        First, handle special cases such as game initialization
        """
        if self.step_cnt == 0:
            self.begin_game_step_0(prompt, chatbot, history)
            self.lock_plugin(chatbot)
            self.cur_task = 'head_start'
        else:
            if prompt.strip() == 'exit' or prompt.strip() == '结束剧情':
                # should we terminate game here?
                self.delete_game = True
                yield from update_ui_latest_msg(lastmsg=f"游戏结束。", chatbot=chatbot, history=history, delay=0.)
                return
            if '剧情收尾' in prompt:
                self.cur_task = 'story_terminate'
            # # well, game resumes
            # chatbot.append([prompt, ""])
            # update ui, don't keep the user waiting
            yield from update_ui(chatbot=chatbot, history=history)


        """
        The main game logic
        """
        if self.cur_task == 'head_start':
            """
            The first step of the game
            """
            inputs_ = prompts_hs.format(headstart=self.headstart)
            history_ = []
            story_paragraph = yield from request_gpt_model_in_new_thread_with_ui_alive(
                inputs_, '故事开头', self.llm_kwargs,
                chatbot, history_, self.sys_prompt_
            )
            self.story.append(story_paragraph)
            # # illustration
            yield from update_ui_latest_msg(lastmsg=story_paragraph + '<br/>正在生成插图中 ...', chatbot=chatbot, history=history, delay=0.)
            yield from update_ui_latest_msg(lastmsg=story_paragraph + '<br/>'+ self.generate_story_image(story_paragraph), chatbot=chatbot, history=history, delay=0.)

            # # build the guidance for the next plot step
            previously_on_story = ""
            for s in self.story:
                previously_on_story += s + '\n'
            inputs_ = prompts_interact.format(previously_on_story=previously_on_story)
            history_ = []
            self.next_choices = yield from request_gpt_model_in_new_thread_with_ui_alive(
                inputs_, '请在以下几种故事走向中,选择一种(当然,您也可以选择给出其他故事走向):', self.llm_kwargs,
                chatbot,
                history_,
                self.sys_prompt_
            )
            self.cur_task = 'user_choice'


        elif self.cur_task == 'user_choice':
            """
            Decide the next step of the story from the user's prompt
            """
            if '请在以下几种故事走向中,选择一种' in chatbot[-1][0]: chatbot.pop(-1)
            previously_on_story = ""
            for s in self.story:
                previously_on_story += s + '\n'
            inputs_ = prompts_resume.format(previously_on_story=previously_on_story, choice=self.next_choices, user_choice=prompt)
            history_ = []
            story_paragraph = yield from request_gpt_model_in_new_thread_with_ui_alive(
                inputs_, f'下一段故事(您的选择是:{prompt})。', self.llm_kwargs,
                chatbot, history_, self.sys_prompt_
            )
            self.story.append(story_paragraph)
            # # illustration
            yield from update_ui_latest_msg(lastmsg=story_paragraph + '<br/>正在生成插图中 ...', chatbot=chatbot, history=history, delay=0.)
            yield from update_ui_latest_msg(lastmsg=story_paragraph + '<br/>'+ self.generate_story_image(story_paragraph), chatbot=chatbot, history=history, delay=0.)

            # # build the guidance for the next plot step
            previously_on_story = ""
            for s in self.story:
                previously_on_story += s + '\n'
            inputs_ = prompts_interact.format(previously_on_story=previously_on_story)
            history_ = []
            self.next_choices = yield from request_gpt_model_in_new_thread_with_ui_alive(
                inputs_,
                '请在以下几种故事走向中,选择一种。当然,您也可以给出您心中的其他故事走向。另外,如果您希望剧情立即收尾,请输入剧情走向,并以“剧情收尾”四个字提示程序。', self.llm_kwargs,
                chatbot,
                history_,
                self.sys_prompt_
            )
            self.cur_task = 'user_choice'


        elif self.cur_task == 'story_terminate':
            """
            Decide the ending of the story from the user's prompt
            """
            previously_on_story = ""
            for s in self.story:
                previously_on_story += s + '\n'
            inputs_ = prompts_terminate.format(previously_on_story=previously_on_story, user_choice=prompt)
            history_ = []
            story_paragraph = yield from request_gpt_model_in_new_thread_with_ui_alive(
                inputs_, f'故事收尾(您的选择是:{prompt})。', self.llm_kwargs,
                chatbot, history_, self.sys_prompt_
            )
            # # illustration
            yield from update_ui_latest_msg(lastmsg=story_paragraph + '<br/>正在生成插图中 ...', chatbot=chatbot, history=history, delay=0.)
            yield from update_ui_latest_msg(lastmsg=story_paragraph + '<br/>'+ self.generate_story_image(story_paragraph), chatbot=chatbot, history=history, delay=0.)

            # terminate game
            self.delete_game = True
            return
crazy_functions/game_fns/game_utils.py (new file, 35 lines)
@@ -0,0 +1,35 @@

from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
from request_llms.bridge_all import predict_no_ui_long_connection

def get_code_block(reply):
    import re
    pattern = r"```([\s\S]*?)```"  # regex pattern to match code blocks
    matches = re.findall(pattern, reply)  # find all code blocks in text
    if len(matches) == 1:
        return "```" + matches[0] + "```"  # code block
    raise RuntimeError("GPT is not generating proper code.")

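A quick check of `get_code_block`'s contract: it succeeds only when the reply contains exactly one fenced block, and it returns the block with its fences:

```python
reply = "Here is my drawing:\n```\n /\\_/\\\n( o.o )\n```\nGood luck!"
print(get_code_block(reply))       # the single fenced block, fences included

get_code_block("no fences here")   # raises RuntimeError: zero matches
```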
def is_same_thing(a, b, llm_kwargs):
    from pydantic import BaseModel, Field
    class IsSameThing(BaseModel):
        is_same_thing: bool = Field(description="determine whether two objects are same thing.", default=False)

    def run_gpt_fn(inputs, sys_prompt, history=[]):
        return predict_no_ui_long_connection(
            inputs=inputs, llm_kwargs=llm_kwargs,
            history=history, sys_prompt=sys_prompt, observe_window=[]
        )

    gpt_json_io = GptJsonIO(IsSameThing)
    inputs_01 = "Identity whether the user input and the target is the same thing: \n target object: {a} \n user input object: {b} \n\n\n".format(a=a, b=b)
    inputs_01 += "\n\n\n Note that the user may describe the target object with a different language, e.g. cat and 猫 are the same thing."
    analyze_res_cot_01 = run_gpt_fn(inputs_01, "", [])

    inputs_02 = inputs_01 + gpt_json_io.format_instructions
    analyze_res = run_gpt_fn(inputs_02, "", [inputs_01, analyze_res_cot_01])

    try:
        res = gpt_json_io.generate_output_auto_repair(analyze_res, run_gpt_fn)
        return res.is_same_thing
    except JsonStringError as e:
        return False
crazy_functions/gen_fns/gen_fns_shared.py (new file, 70 lines)
@@ -0,0 +1,70 @@
import time
import importlib
from toolbox import trimmed_format_exc, gen_time_str, get_log_folder
from toolbox import CatchException, update_ui, gen_time_str, trimmed_format_exc, is_the_upload_folder
from toolbox import promote_file_to_downloadzone, get_log_folder, update_ui_latest_msg
import multiprocessing

def get_class_name(class_string):
    import re
    # Use regex to extract the class name
    class_name = re.search(r'class (\w+)\(', class_string).group(1)
    return class_name

def try_make_module(code, chatbot):
    module_file = 'gpt_fn_' + gen_time_str().replace('-', '_')
    fn_path = f'{get_log_folder(plugin_name="gen_plugin_verify")}/{module_file}.py'
    with open(fn_path, 'w', encoding='utf8') as f: f.write(code)
    promote_file_to_downloadzone(fn_path, chatbot=chatbot)
    class_name = get_class_name(code)
    manager = multiprocessing.Manager()
    return_dict = manager.dict()
    p = multiprocessing.Process(target=is_function_successfully_generated, args=(fn_path, class_name, return_dict))
    # only has 10 seconds to run
    p.start(); p.join(timeout=10)
    if p.is_alive(): p.terminate(); p.join()
    p.close()
    return return_dict["success"], return_dict['traceback']

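A usage sketch of `try_make_module` (the generated class string is a toy example; `chatbot` stands in for the UI object that `promote_file_to_downloadzone` expects, so treat this as illustrative rather than a runnable test):

```python
generated_code = '''
class GptGeneratedFn(object):
    def run(self, file_path):
        return f"processed {file_path}"
'''
chatbot = []  # placeholder for the real chatbot object
success, traceback_str = try_make_module(generated_code, chatbot)
print(success)  # True if the module imported and instantiated within 10 seconds
```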
# check is_function_successfully_generated
def is_function_successfully_generated(fn_path, class_name, return_dict):
    return_dict['success'] = False
    return_dict['traceback'] = ""
    try:
        # Create a spec for the module
        module_spec = importlib.util.spec_from_file_location('example_module', fn_path)
        # Load the module
        example_module = importlib.util.module_from_spec(module_spec)
        module_spec.loader.exec_module(example_module)
        # Now you can use the module
        some_class = getattr(example_module, class_name)
        # Now you can create an instance of the class
        instance = some_class()
        return_dict['success'] = True
        return
    except:
        return_dict['traceback'] = trimmed_format_exc()
        return

def subprocess_worker(code, file_path, return_dict):
    return_dict['result'] = None
    return_dict['success'] = False
    return_dict['traceback'] = ""
    try:
        module_file = 'gpt_fn_' + gen_time_str().replace('-', '_')
        fn_path = f'{get_log_folder(plugin_name="gen_plugin_run")}/{module_file}.py'
        with open(fn_path, 'w', encoding='utf8') as f: f.write(code)
        class_name = get_class_name(code)
        # Create a spec for the module
        module_spec = importlib.util.spec_from_file_location('example_module', fn_path)
        # Load the module
        example_module = importlib.util.module_from_spec(module_spec)
        module_spec.loader.exec_module(example_module)
        # Now you can use the module
        some_class = getattr(example_module, class_name)
        # Now you can create an instance of the class
        instance = some_class()
        return_dict['result'] = instance.run(file_path)
        return_dict['success'] = True
    except:
        return_dict['traceback'] = trimmed_format_exc()
crazy_functions/ipc_fns/mp.py (new file, 37 lines)
@@ -0,0 +1,37 @@
import platform
import pickle
import multiprocessing

def run_in_subprocess_wrapper_func(v_args):
    func, args, kwargs, return_dict, exception_dict = pickle.loads(v_args)
    import sys
    try:
        result = func(*args, **kwargs)
        return_dict['result'] = result
    except Exception as e:
        exc_info = sys.exc_info()
        exception_dict['exception'] = exc_info

def run_in_subprocess_with_timeout(func, timeout=60):
    if platform.system() == 'Linux':
        def wrapper(*args, **kwargs):
            return_dict = multiprocessing.Manager().dict()
            exception_dict = multiprocessing.Manager().dict()
            v_args = pickle.dumps((func, args, kwargs, return_dict, exception_dict))
            process = multiprocessing.Process(target=run_in_subprocess_wrapper_func, args=(v_args,))
            process.start()
            process.join(timeout)
            if process.is_alive():
                process.terminate()
                raise TimeoutError(f'功能单元{str(func)}未能在规定时间内完成任务')
            process.close()
            if 'exception' in exception_dict:
                # ooops, the subprocess ran into an exception
                exc_info = exception_dict['exception']
                raise exc_info[1].with_traceback(exc_info[2])
            if 'result' in return_dict.keys():
                # If the subprocess ran successfully, return the result
                return return_dict['result']
        return wrapper
    else:
        return func
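A usage sketch of `run_in_subprocess_with_timeout`; note the guard only takes effect on Linux, and on other platforms the function is returned unwrapped (the workload below is a hypothetical stand-in):

```python
def slow_compile(tex_source):
    # stands in for a potentially hanging workload, e.g. a LaTeX compile
    return len(tex_source)

guarded = run_in_subprocess_with_timeout(slow_compile, timeout=5)
print(guarded("\\documentclass{article} ..."))  # result, or TimeoutError after 5 s
```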
||||
111
crazy_functions/json_fns/pydantic_io.py
Normal file
111
crazy_functions/json_fns/pydantic_io.py
Normal file
@@ -0,0 +1,111 @@
"""
https://github.com/langchain-ai/langchain/blob/master/docs/extras/modules/model_io/output_parsers/pydantic.ipynb

Example 1.

# Define your desired data structure.
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

    # You can add custom validation logic easily with Pydantic.
    @validator("setup")
    def question_ends_with_question_mark(cls, field):
        if field[-1] != "?":
            raise ValueError("Badly formed question!")
        return field


Example 2.

# Here's another example, but with a compound typed field.
class Actor(BaseModel):
    name: str = Field(description="name of an actor")
    film_names: List[str] = Field(description="list of names of films they starred in")
"""

import json, re
from loguru import logger as logging

PYDANTIC_FORMAT_INSTRUCTIONS = """The output should be formatted as a JSON instance that conforms to the JSON schema below.

As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.

Here is the output schema:
```
{schema}
```"""


PYDANTIC_FORMAT_INSTRUCTIONS_SIMPLE = """The output should be formatted as a JSON instance that conforms to the JSON schema below.
```
{schema}
```"""


class JsonStringError(Exception): ...


class GptJsonIO():

    def __init__(self, schema, example_instruction=True):
        self.pydantic_object = schema
        self.example_instruction = example_instruction
        self.format_instructions = self.generate_format_instructions()

    def generate_format_instructions(self):
        schema = self.pydantic_object.schema()

        # Remove extraneous fields.
        reduced_schema = schema
        if "title" in reduced_schema:
            del reduced_schema["title"]
        if "type" in reduced_schema:
            del reduced_schema["type"]
        # Ensure json in context is well-formed with double quotes.
        schema_str = json.dumps(reduced_schema)
        if self.example_instruction:
            return PYDANTIC_FORMAT_INSTRUCTIONS.format(schema=schema_str)
        else:
            return PYDANTIC_FORMAT_INSTRUCTIONS_SIMPLE.format(schema=schema_str)

    def generate_output(self, text):
        # Greedy search for 1st json candidate.
        match = re.search(
            r"\{.*\}", text.strip(), re.MULTILINE | re.IGNORECASE | re.DOTALL
        )
        json_str = ""
        if match: json_str = match.group()
        json_object = json.loads(json_str, strict=False)
        final_object = self.pydantic_object.parse_obj(json_object)
        return final_object

    def generate_repair_prompt(self, broken_json, error):
        prompt = "Fix a broken json string.\n\n" + \
                 "(1) The broken json string that needs to be fixed is: \n\n" + \
                 "```" + "\n" + \
                 broken_json + "\n" + \
                 "```" + "\n\n" + \
                 "(2) The error message is: \n\n" + \
                 error + "\n\n" + \
                 "Now, fix this json string. \n\n"
        return prompt

    def generate_output_auto_repair(self, response, gpt_gen_fn):
        """
        response: string containing candidate json
        gpt_gen_fn: gpt_gen_fn(inputs, sys_prompt)
        """
        try:
            result = self.generate_output(response)
        except Exception as e:
            try:
                logging.info(f'Repairing json: {response}')
                repair_prompt = self.generate_repair_prompt(broken_json=response, error=repr(e))
                result = self.generate_output(gpt_gen_fn(repair_prompt, self.format_instructions))
                logging.info('Repair json success.')
            except Exception as e:
                # out of options; give up
                logging.info('Repair json fail.')
                raise JsonStringError('Cannot repair json.', str(e))
        return result
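A minimal usage sketch, assuming the pydantic v1-style API that the `.schema()`/`.parse_obj()` calls above imply; `fake_gpt` is a stand-in for a real LLM call:

```python
from pydantic import BaseModel, Field
from crazy_functions.json_fns.pydantic_io import GptJsonIO

class Actor(BaseModel):
    name: str = Field(description="name of an actor")
    film_names: list = Field(description="films they starred in")

gpt_json_io = GptJsonIO(Actor)

def fake_gpt(inputs, sys_prompt):   # would normally call the LLM
    return 'Sure! {"name": "Keanu Reeves", "film_names": ["The Matrix"]}'

response = fake_gpt("Tell me about an actor", gpt_json_io.format_instructions)
actor = gpt_json_io.generate_output_auto_repair(response, fake_gpt)
print(actor.name)   # -> Keanu Reeves
```

The regex in `generate_output` grabs the first `{...}` block, so the chatty "Sure!" prefix is tolerated; only if parsing then fails does the repair round-trip through `gpt_gen_fn` kick in.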
26  crazy_functions/json_fns/select_tool.py  Normal file
@@ -0,0 +1,26 @@
from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError

def structure_output(txt, prompt, err_msg, run_gpt_fn, pydantic_cls):
    gpt_json_io = GptJsonIO(pydantic_cls)
    analyze_res = run_gpt_fn(
        txt,
        sys_prompt=prompt + gpt_json_io.format_instructions
    )
    try:
        friend = gpt_json_io.generate_output_auto_repair(analyze_res, run_gpt_fn)
    except JsonStringError as e:
        return None, err_msg

    err_msg = ""
    return friend, err_msg


def select_tool(prompt, run_gpt_fn, pydantic_cls):
    pydantic_cls_instance, err_msg = structure_output(
        txt=prompt,
        prompt="根据提示, 分析应该调用哪个工具函数\n\n",
        err_msg="不能理解该联系人",
        run_gpt_fn=run_gpt_fn,
        pydantic_cls=pydantic_cls
    )
    return pydantic_cls_instance, err_msg
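A sketch of picking a tool by name (the `ToolChoice` model and `canned_llm` below are illustrative, not from the repo; a real `run_gpt_fn` would be a chat call):

```python
from pydantic import BaseModel, Field

class ToolChoice(BaseModel):
    tool_name: str = Field(description="which tool function to call")

def canned_llm(inputs, sys_prompt):
    return '{"tool_name": "arxiv_download"}'

choice, err = select_tool("帮我下载这篇arxiv论文", canned_llm, ToolChoice)
if err == "":
    print(choice.tool_name)   # -> arxiv_download
```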
573  crazy_functions/latex_fns/latex_actions.py  Normal file
@@ -0,0 +1,573 @@
import os
import re
import shutil
import numpy as np
from loguru import logger
from toolbox import update_ui, update_ui_latest_msg, get_log_folder, gen_time_str
from toolbox import get_conf, promote_file_to_downloadzone
from crazy_functions.latex_fns.latex_toolbox import PRESERVE, TRANSFORM
from crazy_functions.latex_fns.latex_toolbox import set_forbidden_text, set_forbidden_text_begin_end, set_forbidden_text_careful_brace
from crazy_functions.latex_fns.latex_toolbox import reverse_forbidden_text_careful_brace, reverse_forbidden_text, convert_to_linklist, post_process
from crazy_functions.latex_fns.latex_toolbox import fix_content, find_main_tex_file, merge_tex_files, compile_latex_with_timeout
from crazy_functions.latex_fns.latex_toolbox import find_title_and_abs
from crazy_functions.latex_fns.latex_pickle_io import objdump, objload


pj = os.path.join


def split_subprocess(txt, project_folder, return_dict, opts):
    """
    Break the LaTeX file down into a linked list; each node carries a
    preserve flag indicating whether it should be processed by GPT.
    """
    text = txt
    mask = np.zeros(len(txt), dtype=np.uint8) + TRANSFORM

    # Absorb everything above the title and the authors
    text, mask = set_forbidden_text(text, mask, r"^(.*?)\\maketitle", re.DOTALL)
    text, mask = set_forbidden_text(text, mask, r"^(.*?)\\begin{document}", re.DOTALL)
    # Absorb \iffalse comments
    text, mask = set_forbidden_text(text, mask, r"\\iffalse(.*?)\\fi", re.DOTALL)
    # Absorb begin-end pairs spanning no more than 42 lines
    text, mask = set_forbidden_text_begin_end(text, mask, r"\\begin\{([a-z\*]*)\}(.*?)\\end\{\1\}", re.DOTALL, limit_n_lines=42)
    # Absorb anonymous (unnumbered) equations
    text, mask = set_forbidden_text(text, mask, [ r"\$\$([^$]+)\$\$", r"\\\[.*?\\\]" ], re.DOTALL)
    # Absorb other miscellaneous elements
    text, mask = set_forbidden_text(text, mask, [ r"\\section\{(.*?)\}", r"\\section\*\{(.*?)\}", r"\\subsection\{(.*?)\}", r"\\subsubsection\{(.*?)\}" ])
    text, mask = set_forbidden_text(text, mask, [ r"\\bibliography\{(.*?)\}", r"\\bibliographystyle\{(.*?)\}" ])
    text, mask = set_forbidden_text(text, mask, r"\\begin\{thebibliography\}.*?\\end\{thebibliography\}", re.DOTALL)
    text, mask = set_forbidden_text(text, mask, r"\\begin\{lstlisting\}(.*?)\\end\{lstlisting\}", re.DOTALL)
    text, mask = set_forbidden_text(text, mask, r"\\begin\{wraptable\}(.*?)\\end\{wraptable\}", re.DOTALL)
    text, mask = set_forbidden_text(text, mask, r"\\begin\{algorithm\}(.*?)\\end\{algorithm\}", re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{wrapfigure\}(.*?)\\end\{wrapfigure\}", r"\\begin\{wrapfigure\*\}(.*?)\\end\{wrapfigure\*\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{figure\}(.*?)\\end\{figure\}", r"\\begin\{figure\*\}(.*?)\\end\{figure\*\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{multline\}(.*?)\\end\{multline\}", r"\\begin\{multline\*\}(.*?)\\end\{multline\*\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{table\}(.*?)\\end\{table\}", r"\\begin\{table\*\}(.*?)\\end\{table\*\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{minipage\}(.*?)\\end\{minipage\}", r"\\begin\{minipage\*\}(.*?)\\end\{minipage\*\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{align\*\}(.*?)\\end\{align\*\}", r"\\begin\{align\}(.*?)\\end\{align\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{equation\}(.*?)\\end\{equation\}", r"\\begin\{equation\*\}(.*?)\\end\{equation\*\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\includepdf\[(.*?)\]\{(.*?)\}", r"\\clearpage", r"\\newpage", r"\\appendix", r"\\tableofcontents", r"\\include\{(.*?)\}"])
    text, mask = set_forbidden_text(text, mask, [r"\\vspace\{(.*?)\}", r"\\hspace\{(.*?)\}", r"\\label\{(.*?)\}", r"\\begin\{(.*?)\}", r"\\end\{(.*?)\}", r"\\item "])
    text, mask = set_forbidden_text_careful_brace(text, mask, r"\\hl\{(.*?)\}", re.DOTALL)
    # The reverse operations must come last
    text, mask = reverse_forbidden_text_careful_brace(text, mask, r"\\caption\{(.*?)\}", re.DOTALL, forbid_wrapper=True)
    text, mask = reverse_forbidden_text_careful_brace(text, mask, r"\\abstract\{(.*?)\}", re.DOTALL, forbid_wrapper=True)
    text, mask = reverse_forbidden_text(text, mask, r"\\begin\{abstract\}(.*?)\\end\{abstract\}", re.DOTALL, forbid_wrapper=True)
    root = convert_to_linklist(text, mask)

    # Final pass to improve robustness
    root = post_process(root)

    # Write an HTML debug file: preserved regions (PRESERVE) in red, transformed regions (TRANSFORM) in black
    with open(pj(project_folder, 'debug_log.html'), 'w', encoding='utf8') as f:
        segment_parts_for_gpt = []
        nodes = []
        node = root
        while True:
            nodes.append(node)
            show_html = node.string.replace('\n', '<br/>')
            if not node.preserve:
                segment_parts_for_gpt.append(node.string)
                f.write(f'<p style="color:black;">#{node.range}{show_html}#</p>')
            else:
                f.write(f'<p style="color:red;">{show_html}</p>')
            node = node.next
            if node is None: break

    for n in nodes: n.next = None   # break the links so the nodes can be pickled
    return_dict['nodes'] = nodes
    return_dict['segment_parts_for_gpt'] = segment_parts_for_gpt
    return return_dict
class LatexPaperSplit():
    """
    Break the LaTeX file down into a linked list; each node carries a
    preserve flag indicating whether it should be processed by GPT.
    """
    def __init__(self) -> None:
        self.nodes = None
        self.msg = "*{\\scriptsize\\textbf{警告:该PDF由GPT-Academic开源项目调用大语言模型+Latex翻译插件一键生成," + \
            "版权归原文作者所有。翻译内容可靠性无保障,请仔细鉴别并以原文为准。" + \
            "项目Github地址 \\url{https://github.com/binary-husky/gpt_academic/}。"
        # Please do not remove or modify this warning unless you are the paper's original author
        # (if you are, feel free to contact the developers via the QQ group in the README).
        self.msg_declare = "为了防止大语言模型的意外谬误产生扩散影响,禁止移除或修改此警告。}}\\\\"
        self.title = "unknown"
        self.abstract = "unknown"

    def read_title_and_abstract(self, txt):
        try:
            title, abstract = find_title_and_abs(txt)
            if title is not None:
                self.title = title.replace('\n', ' ').replace('\\\\', ' ').replace('  ', ' ').replace('  ', ' ')
            if abstract is not None:
                self.abstract = abstract.replace('\n', ' ').replace('\\\\', ' ').replace('  ', ' ').replace('  ', ' ')
        except:
            pass

    def merge_result(self, arr, mode, msg, buggy_lines=[], buggy_line_surgery_n_lines=10):
        """
        Merge the results after the GPT process has completed.
        """
        result_string = ""
        node_cnt = 0
        line_cnt = 0

        for node in self.nodes:
            if node.preserve:
                line_cnt += node.string.count('\n')
                result_string += node.string
            else:
                translated_txt = fix_content(arr[node_cnt], node.string)
                begin_line = line_cnt
                end_line = line_cnt + translated_txt.count('\n')

                # revert the translation of this node if any compile error falls near it
                if any([begin_line-buggy_line_surgery_n_lines <= b_line <= end_line+buggy_line_surgery_n_lines for b_line in buggy_lines]):
                    translated_txt = node.string

                result_string += translated_txt
                node_cnt += 1
                line_cnt += translated_txt.count('\n')

        if mode == 'translate_zh':
            pattern = re.compile(r'\\begin\{abstract\}.*\n')
            match = pattern.search(result_string)
            if not match:
                # match \abstract{xxxx}
                pattern_compile = re.compile(r"\\abstract\{(.*?)\}", flags=re.DOTALL)
                match = pattern_compile.search(result_string)
                position = match.regs[1][0]
            else:
                # match \begin{abstract}xxxx\end{abstract}
                position = match.end()
            result_string = result_string[:position] + self.msg + msg + self.msg_declare + result_string[position:]
        return result_string


    def split(self, txt, project_folder, opts):
        """
        Break the LaTeX file down into a linked list; each node carries a
        preserve flag indicating whether it should be processed by GPT.
        P.S. runs in a subprocess to avoid timeout errors
        """
        import multiprocessing
        manager = multiprocessing.Manager()
        return_dict = manager.dict()
        p = multiprocessing.Process(
            target=split_subprocess,
            args=(txt, project_folder, return_dict, opts))
        p.start()
        p.join()
        p.close()
        self.nodes = return_dict['nodes']
        self.sp = return_dict['segment_parts_for_gpt']
        return self.sp
class LatexPaperFileGroup():
    """
    Use the tokenizer to break text down according to max_token_limit.
    """
    def __init__(self):
        self.file_paths = []
        self.file_contents = []
        self.sp_file_contents = []
        self.sp_file_index = []
        self.sp_file_tag = []
        # count_token
        from request_llms.bridge_all import model_info
        enc = model_info["gpt-3.5-turbo"]['tokenizer']
        def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
        self.get_token_num = get_token_num

    def run_file_split(self, max_token_limit=1900):
        """
        Use the tokenizer to break text down according to max_token_limit.
        """
        for index, file_content in enumerate(self.file_contents):
            if self.get_token_num(file_content) < max_token_limit:
                self.sp_file_contents.append(file_content)
                self.sp_file_index.append(index)
                self.sp_file_tag.append(self.file_paths[index])
            else:
                from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
                segments = breakdown_text_to_satisfy_token_limit(file_content, max_token_limit)
                for j, segment in enumerate(segments):
                    self.sp_file_contents.append(segment)
                    self.sp_file_index.append(index)
                    self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex")

    def merge_result(self):
        self.file_result = ["" for _ in range(len(self.file_paths))]
        for r, k in zip(self.sp_file_result, self.sp_file_index):
            self.file_result[k] += r

    def write_result(self):
        manifest = []
        for path, res in zip(self.file_paths, self.file_result):
            with open(path + '.polish.tex', 'w', encoding='utf8') as f:
                manifest.append(path + '.polish.tex')
                f.write(res)
        return manifest
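The intended flow, sketched (the segment text is made up, and the tokenizer comes from the repo's model registry, so this only illustrates the bookkeeping):

```python
pfg = LatexPaperFileGroup()
pfg.file_paths.append('segment-0')
pfg.file_contents.append(r'\section{Intro} ...very long LaTeX body...')
pfg.run_file_split(max_token_limit=1024)
# oversized entries reappear as 'segment-0.part-0.tex', 'segment-0.part-1.tex', ...
# after the LLM round trip, assign pfg.sp_file_result and recombine:
pfg.sp_file_result = ['...translated piece...'] * len(pfg.sp_file_contents)
pfg.merge_result()   # pfg.file_result[k] now holds the reassembled segment text
```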
def Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, mode='proofread', switch_prompt=None, opts=[]):
    import time, os, re
    from ..crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
    from .latex_actions import LatexPaperFileGroup, LatexPaperSplit

    # <-------- locate the main tex file ---------->
    maintex = find_main_tex_file(file_manifest, mode)
    chatbot.append(("定位主Latex文件", f'[Local Message] 分析结果:该项目的Latex主文件是{maintex}, 如果分析错误, 请立即终止程序, 删除或修改歧义文件, 然后重试。主程序即将开始, 请稍候。'))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
    time.sleep(3)

    # <-------- read the LaTeX files, merging the multi-file project into one giant tex ---------->
    main_tex_basename = os.path.basename(maintex)
    assert main_tex_basename.endswith('.tex')
    main_tex_basename_bare = main_tex_basename[:-4]
    may_exist_bbl = pj(project_folder, f'{main_tex_basename_bare}.bbl')
    if os.path.exists(may_exist_bbl):
        shutil.copyfile(may_exist_bbl, pj(project_folder, 'merge.bbl'))
        shutil.copyfile(may_exist_bbl, pj(project_folder, f'merge_{mode}.bbl'))
        shutil.copyfile(may_exist_bbl, pj(project_folder, 'merge_diff.bbl'))

    with open(maintex, 'r', encoding='utf-8', errors='replace') as f:
        content = f.read()
        merged_content = merge_tex_files(project_folder, content, mode)

    with open(project_folder + '/merge.tex', 'w', encoding='utf-8', errors='replace') as f:
        f.write(merged_content)

    # <-------- fine-grained splitting of the latex file ---------->
    chatbot.append(("Latex文件融合完成", '[Local Message] 正在精细切分latex文件,这需要一段时间计算,文档越长耗时越长,请耐心等待。'))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
    lps = LatexPaperSplit()
    lps.read_title_and_abstract(merged_content)
    res = lps.split(merged_content, project_folder, opts)  # the time-consuming step
    # <-------- split latex fragments that are too long ---------->
    pfg = LatexPaperFileGroup()
    for index, r in enumerate(res):
        pfg.file_paths.append('segment-' + str(index))
        pfg.file_contents.append(r)

    pfg.run_file_split(max_token_limit=1024)
    n_split = len(pfg.sp_file_contents)

    # <-------- switch the prompt as needed ---------->
    inputs_array, sys_prompt_array = switch_prompt(pfg, mode)
    inputs_show_user_array = [f"{mode} {f}" for f in pfg.sp_file_tag]

    if os.path.exists(pj(project_folder,'temp.pkl')):

        # <-------- [debug only] if a debug cache file exists, skip the GPT request stage ---------->
        pfg = objload(file=pj(project_folder,'temp.pkl'))

    else:
        # <-------- multi-threaded GPT requests ---------->
        history_array = [[""] for _ in range(n_split)]
        # LATEX_EXPERIMENTAL, = get_conf('LATEX_EXPERIMENTAL')
        # if LATEX_EXPERIMENTAL:
        #     paper_meta = f"The paper you processing is `{lps.title}`, a part of the abstraction is `{lps.abstract}`"
        #     paper_meta_max_len = 888
        #     history_array = [[ paper_meta[:paper_meta_max_len] + '...', "Understand, what should I do?"] for _ in range(n_split)]

        gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
            inputs_array=inputs_array,
            inputs_show_user_array=inputs_show_user_array,
            llm_kwargs=llm_kwargs,
            chatbot=chatbot,
            history_array=history_array,
            sys_prompt_array=sys_prompt_array,
            # max_workers=5,  # cap on parallel tasks: at most 5 run at once, the rest queue up
            scroller_max_len = 40
        )

        # <-------- reassemble the text fragments into complete tex pieces ---------->
        pfg.sp_file_result = []
        for i_say, gpt_say, orig_content in zip(gpt_response_collection[0::2], gpt_response_collection[1::2], pfg.sp_file_contents):
            pfg.sp_file_result.append(gpt_say)
        pfg.merge_result()

        # <-------- temporary storage for debugging ---------->
        pfg.get_token_num = None
        objdump(pfg, file=pj(project_folder,'temp.pkl'))

    write_html(pfg.sp_file_contents, pfg.sp_file_result, chatbot=chatbot, project_folder=project_folder)

    # <-------- write out the files ---------->
    model_name = llm_kwargs['llm_model'].replace('_', '\\_')  # escape underscores in the LLM model name for LaTeX
    msg = f"当前大语言模型: {model_name},当前语言模型温度设定: {llm_kwargs['temperature']}。"
    final_tex = lps.merge_result(pfg.file_result, mode, msg)
    objdump((lps, pfg.file_result, mode, msg), file=pj(project_folder,'merge_result.pkl'))

    with open(project_folder + f'/merge_{mode}.tex', 'w', encoding='utf-8', errors='replace') as f:
        if mode != 'translate_zh' or "binary" in final_tex: f.write(final_tex)

    # <-------- tidy up the results and exit ---------->
    chatbot.append(("完成了吗?", 'GPT结果已输出, 即将编译PDF'))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI

    # <-------- return ---------->
    return project_folder + f'/merge_{mode}.tex'
def remove_buggy_lines(file_path, log_path, tex_name, tex_name_pure, n_fix, work_folder_modified, fixed_line=[]):
    try:
        with open(log_path, 'r', encoding='utf-8', errors='replace') as f:
            log = f.read()
        import re
        buggy_lines = re.findall(tex_name + ':([0-9]{1,5}):', log)
        buggy_lines = [int(l) for l in buggy_lines]
        buggy_lines = sorted(buggy_lines)
        buggy_line = buggy_lines[0] - 1
        logger.warning(f"reversing tex line that has errors: {buggy_line}")

        # Reassemble, reverting the paragraphs that caused errors
        if buggy_line not in fixed_line:
            fixed_line.append(buggy_line)

        lps, file_result, mode, msg = objload(file=pj(work_folder_modified, 'merge_result.pkl'))
        final_tex = lps.merge_result(file_result, mode, msg, buggy_lines=fixed_line, buggy_line_surgery_n_lines=5*n_fix)

        with open(pj(work_folder_modified, f"{tex_name_pure}_fix_{n_fix}.tex"), 'w', encoding='utf-8', errors='replace') as f:
            f.write(final_tex)

        return True, f"{tex_name_pure}_fix_{n_fix}", buggy_lines
    except:
        logger.error("Fatal error occurred, but we cannot identify the error; please download the zip, read the latex log, and compile manually.")
        return False, -1, [-1]
def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_folder_original, work_folder_modified, work_folder, mode='default'):
    import os, time
    n_fix = 1
    fixed_line = []
    max_try = 32
    chatbot.append(["正在编译PDF文档", f'编译已经开始。当前工作路径为{work_folder},如果程序停顿5分钟以上,请直接去该路径下取回翻译结果,或者重启之后再度尝试 ...']); yield from update_ui(chatbot=chatbot, history=history)
    chatbot.append(["正在编译PDF文档", '...']); yield from update_ui(chatbot=chatbot, history=history); time.sleep(1); chatbot[-1] = list(chatbot[-1])  # refresh the UI
    yield from update_ui_latest_msg('编译已经开始...', chatbot, history)  # refresh the Gradio front-end

    # Check whether xelatex is required
    def check_if_need_xelatex(tex_path):
        try:
            with open(tex_path, 'r', encoding='utf-8', errors='replace') as f:
                content = f.read(5000)
            # Check for packages that require xelatex
            need_xelatex = any(
                pkg in content
                for pkg in ['fontspec', 'xeCJK', 'xetex', 'unicode-math', 'xltxtra', 'xunicode']
            )
            if need_xelatex:
                logger.info("检测到宏包需要xelatex编译, 切换至xelatex编译")
            else:
                logger.info("未检测到宏包需要xelatex编译, 使用pdflatex编译")
            return need_xelatex
        except Exception:
            return False

    # Return the compile command for the given compiler
    def get_compile_command(compiler, filename):
        compile_command = f'{compiler} -interaction=batchmode -file-line-error {filename}.tex'
        logger.info('Latex 编译指令: ' + compile_command)
        return compile_command

    # Decide which compiler to use
    compiler = 'pdflatex'
    if check_if_need_xelatex(pj(work_folder_modified, f'{main_file_modified}.tex')):
        logger.info("检测到宏包需要xelatex编译,切换至xelatex编译")
        # Check if xelatex is installed
        try:
            import subprocess
            subprocess.run(['xelatex', '--version'], capture_output=True, check=True)
            compiler = 'xelatex'
        except (subprocess.CalledProcessError, FileNotFoundError):
            raise RuntimeError("检测到需要使用xelatex编译,但系统中未安装xelatex。请先安装texlive或其他提供xelatex的LaTeX发行版。")

    while True:
        import os
        may_exist_bbl = pj(work_folder_modified, 'merge.bbl')
        target_bbl = pj(work_folder_modified, f'{main_file_modified}.bbl')
        if os.path.exists(may_exist_bbl) and not os.path.exists(target_bbl):
            shutil.copyfile(may_exist_bbl, target_bbl)

        # https://stackoverflow.com/questions/738755/dont-make-me-manually-abort-a-latex-compile-when-theres-an-error
        yield from update_ui_latest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译原始PDF ...', chatbot, history)  # refresh the Gradio front-end
        ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_original), work_folder_original)

        yield from update_ui_latest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译转化后的PDF ...', chatbot, history)  # refresh the Gradio front-end
        ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_modified), work_folder_modified)

        if ok and os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf')):
            # Only if the second step succeeded can we continue with the steps below
            yield from update_ui_latest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译BibTex ...', chatbot, history)  # refresh the Gradio front-end
            if not os.path.exists(pj(work_folder_original, f'{main_file_original}.bbl')):
                ok = compile_latex_with_timeout(f'bibtex {main_file_original}.aux', work_folder_original)
            if not os.path.exists(pj(work_folder_modified, f'{main_file_modified}.bbl')):
                ok = compile_latex_with_timeout(f'bibtex {main_file_modified}.aux', work_folder_modified)

            yield from update_ui_latest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译文献交叉引用 ...', chatbot, history)  # refresh the Gradio front-end
            ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_original), work_folder_original)
            ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_modified), work_folder_modified)
            ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_original), work_folder_original)
            ok = compile_latex_with_timeout(get_compile_command(compiler, main_file_modified), work_folder_modified)

            if mode != 'translate_zh':
                yield from update_ui_latest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 使用latexdiff生成论文转化前后对比 ...', chatbot, history)  # refresh the Gradio front-end
                logger.info(f'latexdiff --encoding=utf8 --append-safecmd=subfile {work_folder_original}/{main_file_original}.tex {work_folder_modified}/{main_file_modified}.tex --flatten > {work_folder}/merge_diff.tex')
                ok = compile_latex_with_timeout(f'latexdiff --encoding=utf8 --append-safecmd=subfile {work_folder_original}/{main_file_original}.tex {work_folder_modified}/{main_file_modified}.tex --flatten > {work_folder}/merge_diff.tex', os.getcwd())

                yield from update_ui_latest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 正在编译对比PDF ...', chatbot, history)  # refresh the Gradio front-end
                ok = compile_latex_with_timeout(get_compile_command(compiler, 'merge_diff'), work_folder)
                ok = compile_latex_with_timeout('bibtex merge_diff.aux', work_folder)
                ok = compile_latex_with_timeout(get_compile_command(compiler, 'merge_diff'), work_folder)
                ok = compile_latex_with_timeout(get_compile_command(compiler, 'merge_diff'), work_folder)

            # <---------- check the results ----------->
            results_ = ""
            original_pdf_success = os.path.exists(pj(work_folder_original, f'{main_file_original}.pdf'))
            modified_pdf_success = os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf'))
            diff_pdf_success = os.path.exists(pj(work_folder, 'merge_diff.pdf'))
            results_ += f"原始PDF编译是否成功: {original_pdf_success};"
            results_ += f"转化PDF编译是否成功: {modified_pdf_success};"
            results_ += f"对比PDF编译是否成功: {diff_pdf_success};"
            yield from update_ui_latest_msg(f'第{n_fix}编译结束:<br/>{results_}...', chatbot, history)  # refresh the Gradio front-end

            if diff_pdf_success:
                result_pdf = pj(work_folder_modified, 'merge_diff.pdf')  # get pdf path
                promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot)  # promote file to web UI
            if modified_pdf_success:
                yield from update_ui_latest_msg('转化PDF编译已经成功, 正在尝试生成对比PDF, 请稍候 ...', chatbot, history)  # refresh the Gradio front-end
                result_pdf = pj(work_folder_modified, f'{main_file_modified}.pdf')  # get pdf path
                origin_pdf = pj(work_folder_original, f'{main_file_original}.pdf')  # get pdf path
                if os.path.exists(pj(work_folder, '..', 'translation')):
                    shutil.copyfile(result_pdf, pj(work_folder, '..', 'translation', 'translate_zh.pdf'))
                promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot)  # promote file to web UI
                # Concatenate the two PDFs side by side
                if original_pdf_success:
                    try:
                        from .latex_toolbox import merge_pdfs
                        concat_pdf = pj(work_folder_modified, 'comparison.pdf')
                        merge_pdfs(origin_pdf, result_pdf, concat_pdf)
                        if os.path.exists(pj(work_folder, '..', 'translation')):
                            shutil.copyfile(concat_pdf, pj(work_folder, '..', 'translation', 'comparison.pdf'))
                        promote_file_to_downloadzone(concat_pdf, rename_file=None, chatbot=chatbot)  # promote file to web UI
                    except Exception as e:
                        logger.error(e)
            return True  # success
        else:
            if n_fix >= max_try: break
            n_fix += 1
            can_retry, main_file_modified, buggy_lines = remove_buggy_lines(
                file_path=pj(work_folder_modified, f'{main_file_modified}.tex'),
                log_path=pj(work_folder_modified, f'{main_file_modified}.log'),
                tex_name=f'{main_file_modified}.tex',
                tex_name_pure=f'{main_file_modified}',
                n_fix=n_fix,
                work_folder_modified=work_folder_modified,
                fixed_line=fixed_line
            )
            yield from update_ui_latest_msg(f'由于最为关键的转化PDF编译失败, 将根据报错信息修正tex源文件并重试, 当前报错的latex代码处于第{buggy_lines}行 ...', chatbot, history)  # refresh the Gradio front-end
            if not can_retry: break

    return False  # failure
def write_html(sp_file_contents, sp_file_result, chatbot, project_folder):
    # write html
    try:
        import shutil
        from crazy_functions.pdf_fns.report_gen_html import construct_html
        from toolbox import gen_time_str
        ch = construct_html()
        orig = ""
        trans = ""
        final = []
        for c, r in zip(sp_file_contents, sp_file_result):
            final.append(c)
            final.append(r)
        for i, k in enumerate(final):
            if i % 2 == 0:
                orig = k
            if i % 2 == 1:
                trans = k
                ch.add_row(a=orig, b=trans)
        create_report_file_name = f"{gen_time_str()}.trans.html"
        res = ch.save_file(create_report_file_name)
        shutil.copyfile(res, pj(project_folder, create_report_file_name))
        promote_file_to_downloadzone(file=res, chatbot=chatbot)
    except:
        from toolbox import trimmed_format_exc
        logger.error(f'writing html result failed: {trimmed_format_exc()}')
def upload_to_gptac_cloud_if_user_allow(chatbot, arxiv_id):
    try:
        # If the user allows it, upload the arxiv paper PDFs to the GPTAC academic cloud
        from toolbox import map_file_to_sha256
        # Check whether everything went well; skip if the expected file was not generated
        is_result_good = False
        for file_path in chatbot._cookies.get("files_to_promote", []):
            if file_path.endswith('translate_zh.pdf'):
                is_result_good = True
        if not is_result_good:
            return
        # Upload the files
        for file_path in chatbot._cookies.get("files_to_promote", []):
            align_name = None
            # normalized name
            for name in ['translate_zh.pdf', 'comparison.pdf']:
                if file_path.endswith(name): align_name = name
            # if it matches any align name
            if align_name:
                logger.info(f'Uploading to GPTAC cloud as the user has set `allow_cloud_io`: {file_path}')
                with open(file_path, 'rb') as f:
                    import requests
                    url = 'https://cloud-2.agent-matrix.com/arxiv_tf_paper_normal_upload'
                    files = {'file': (align_name, f, 'application/octet-stream')}
                    data = {
                        'arxiv_id': arxiv_id,
                        'file_hash': map_file_to_sha256(file_path),
                        'language': 'zh',
                        'trans_prompt': 'to_be_implemented',
                        'llm_model': 'to_be_implemented',
                        'llm_model_param': 'to_be_implemented',
                    }
                    resp = requests.post(url=url, files=files, data=data, timeout=30)
                    logger.info(f'Upload finished ({resp.status_code}): {file_path}')
    except:
        # If the upload fails, do not interrupt the program; this is a secondary feature
        pass
def check_gptac_cloud(arxiv_id, chatbot):
    import requests
    success = False
    downloaded = []
    try:
        for pdf_target in ['translate_zh.pdf', 'comparison.pdf']:
            url = 'https://cloud-2.agent-matrix.com/arxiv_tf_paper_normal_exist'
            data = {
                'arxiv_id': arxiv_id,
                'name': pdf_target,
            }
            resp = requests.post(url=url, data=data)
            cache_hit_result = resp.text.strip('"')
            if cache_hit_result.startswith("http"):
                url = cache_hit_result
                logger.info(f'Downloading from GPTAC cloud: {url}')
                resp = requests.get(url=url, timeout=30)
                target = os.path.join(get_log_folder(plugin_name='gptac_cloud'), gen_time_str(), pdf_target)
                os.makedirs(os.path.dirname(target), exist_ok=True)
                with open(target, 'wb') as f:
                    f.write(resp.content)
                new_path = promote_file_to_downloadzone(target, chatbot=chatbot)
                success = True
                downloaded.append(new_path)
    except:
        pass
    return success, downloaded
48  crazy_functions/latex_fns/latex_pickle_io.py  Normal file
@@ -0,0 +1,48 @@
import pickle


class SafeUnpickler(pickle.Unpickler):

    def get_safe_classes(self):
        from crazy_functions.latex_fns.latex_actions import LatexPaperFileGroup, LatexPaperSplit
        from crazy_functions.latex_fns.latex_toolbox import LinkedListNode
        from numpy.core.multiarray import scalar
        from numpy import dtype
        # Define the classes that are allowed to be deserialized
        safe_classes = {
            # add other safe classes here
            'LatexPaperFileGroup': LatexPaperFileGroup,
            'LatexPaperSplit': LatexPaperSplit,
            'LinkedListNode': LinkedListNode,
            'scalar': scalar,
            'dtype': dtype,
        }
        return safe_classes

    def find_class(self, module, name):
        # Only allow specific classes to be deserialized
        self.safe_classes = self.get_safe_classes()
        match_class_name = None
        for class_name in self.safe_classes.keys():
            if (class_name in f'{module}.{name}'):
                match_class_name = class_name
        if match_class_name is not None:
            return self.safe_classes[match_class_name]
        # Raise an exception when an attempt is made to load an unauthorized class
        raise pickle.UnpicklingError(f"Attempted to deserialize unauthorized class '{name}' from module '{module}'")


def objdump(obj, file="objdump.tmp"):
    with open(file, "wb+") as f:
        pickle.dump(obj, f)
    return


def objload(file="objdump.tmp"):
    import os

    if not os.path.exists(file):
        return
    with open(file, "rb") as f:
        unpickler = SafeUnpickler(f)
        return unpickler.load()
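A quick sketch of the round trip; only the classes on the allow-list (plus plain containers, which never go through `find_class`) survive loading:

```python
from crazy_functions.latex_fns.latex_pickle_io import objdump, objload

objdump({"mode": "translate_zh", "n": 3}, file="temp.pkl")
state = objload(file="temp.pkl")   # plain dicts/ints load fine
print(state["n"])                  # -> 3

# By contrast, a pickle that references a class outside get_safe_classes()
# (say, a hypothetical EvilClass) raises pickle.UnpicklingError on load.
```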
906  crazy_functions/latex_fns/latex_toolbox.py  Normal file
@@ -0,0 +1,906 @@
import os
import re
import shutil
import numpy as np
from loguru import logger

PRESERVE = 0
TRANSFORM = 1

pj = os.path.join


class LinkedListNode:
    """
    Linked List Node
    """

    def __init__(self, string, preserve=True) -> None:
        self.string = string
        self.preserve = preserve
        self.next = None
        self.range = None
        # self.begin_line = 0
        # self.begin_char = 0


def convert_to_linklist(text, mask):
    root = LinkedListNode("", preserve=True)
    current_node = root
    for c, m, i in zip(text, mask, range(len(text))):
        if (m == PRESERVE and current_node.preserve) or (
            m == TRANSFORM and not current_node.preserve
        ):
            # add to the current node
            current_node.string += c
        else:
            current_node.next = LinkedListNode(c, preserve=(m == PRESERVE))
            current_node = current_node.next
    return root
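A minimal sketch of the mask-to-linked-list idea on a toy string (values chosen by hand for illustration):

```python
import numpy as np

text = "KEEP$x+1$EDIT"
mask = np.zeros(len(text), dtype=np.uint8) + TRANSFORM
mask[4:9] = PRESERVE                    # protect the inline math "$x+1$"

node = convert_to_linklist(text, mask)
while node is not None:
    print(repr(node.string), "preserve =", node.preserve)
    node = node.next
# -> ''       preserve = True   (empty root node)
#    'KEEP'   preserve = False
#    '$x+1$'  preserve = True
#    'EDIT'   preserve = False
```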
def post_process(root):
    # Fix unbalanced braces
    node = root
    while True:
        string = node.string
        if node.preserve:
            node = node.next
            if node is None:
                break
            continue

        def break_check(string):
            str_stack = [""]  # (lv, index)
            for i, c in enumerate(string):
                if c == "{":
                    str_stack.append("{")
                elif c == "}":
                    if len(str_stack) == 1:
                        logger.warning("fixing brace error")
                        return i
                    str_stack.pop(-1)
                else:
                    str_stack[-1] += c
            return -1

        bp = break_check(string)

        if bp == -1:
            pass
        elif bp == 0:
            node.string = string[:1]
            q = LinkedListNode(string[1:], False)
            q.next = node.next
            node.next = q
        else:
            node.string = string[:bp]
            q = LinkedListNode(string[bp:], False)
            q.next = node.next
            node.next = q

        node = node.next
        if node is None:
            break

    # Shield empty lines and sentences that are too short
    node = root
    while True:
        if len(node.string.strip("\n").strip("")) == 0:
            node.preserve = True
        if len(node.string.strip("\n").strip("")) < 42:
            node.preserve = True
        node = node.next
        if node is None:
            break
    node = root
    while True:
        if node.next and node.preserve and node.next.preserve:
            node.string += node.next.string
            node.next = node.next.next
        node = node.next
        if node is None:
            break

    # Detach leading and trailing line breaks
    node = root
    prev_node = None
    while True:
        if not node.preserve:
            lstriped_ = node.string.lstrip().lstrip("\n")
            if (
                (prev_node is not None)
                and (prev_node.preserve)
                and (len(lstriped_) != len(node.string))
            ):
                prev_node.string += node.string[: -len(lstriped_)]
                node.string = lstriped_
            rstriped_ = node.string.rstrip().rstrip("\n")
            if (
                (node.next is not None)
                and (node.next.preserve)
                and (len(rstriped_) != len(node.string))
            ):
                node.next.string = node.string[len(rstriped_) :] + node.next.string
                node.string = rstriped_
        # =-=-=
        prev_node = node
        node = node.next
        if node is None:
            break

    # Annotate each node with its line-number range
    node = root
    n_line = 0
    expansion = 2
    while True:
        n_l = node.string.count("\n")
        node.range = [n_line - expansion, n_line + n_l + expansion]  # the range to revert on failure
        n_line = n_line + n_l
        node = node.next
        if node is None:
            break
    return root
"""
|
||||
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|
||||
Latex segmentation with a binary mask (PRESERVE=0, TRANSFORM=1)
|
||||
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|
||||
"""
|
||||
|
||||
|
||||
def set_forbidden_text(text, mask, pattern, flags=0):
|
||||
"""
|
||||
Add a preserve text area in this paper
|
||||
e.g. with pattern = r"\\begin\{algorithm\}(.*?)\\end\{algorithm\}"
|
||||
you can mask out (mask = PRESERVE so that text become untouchable for GPT)
|
||||
everything between "\begin{equation}" and "\end{equation}"
|
||||
"""
|
||||
if isinstance(pattern, list):
|
||||
pattern = "|".join(pattern)
|
||||
pattern_compile = re.compile(pattern, flags)
|
||||
for res in pattern_compile.finditer(text):
|
||||
mask[res.span()[0] : res.span()[1]] = PRESERVE
|
||||
return text, mask
|
||||
|
||||
|
||||
def reverse_forbidden_text(text, mask, pattern, flags=0, forbid_wrapper=True):
|
||||
"""
|
||||
Move area out of preserve area (make text editable for GPT)
|
||||
count the number of the braces so as to catch complete text area.
|
||||
e.g.
|
||||
\begin{abstract} blablablablablabla. \end{abstract}
|
||||
"""
|
||||
if isinstance(pattern, list):
|
||||
pattern = "|".join(pattern)
|
||||
pattern_compile = re.compile(pattern, flags)
|
||||
for res in pattern_compile.finditer(text):
|
||||
if not forbid_wrapper:
|
||||
mask[res.span()[0] : res.span()[1]] = TRANSFORM
|
||||
else:
|
||||
mask[res.regs[0][0] : res.regs[1][0]] = PRESERVE # '\\begin{abstract}'
|
||||
mask[res.regs[1][0] : res.regs[1][1]] = TRANSFORM # abstract
|
||||
mask[res.regs[1][1] : res.regs[0][1]] = PRESERVE # abstract
|
||||
return text, mask
|
||||
|
||||
|
||||
def set_forbidden_text_careful_brace(text, mask, pattern, flags=0):
|
||||
"""
|
||||
Add a preserve text area in this paper (text become untouchable for GPT).
|
||||
count the number of the braces so as to catch complete text area.
|
||||
e.g.
|
||||
\caption{blablablablabla\texbf{blablabla}blablabla.}
|
||||
"""
|
||||
pattern_compile = re.compile(pattern, flags)
|
||||
for res in pattern_compile.finditer(text):
|
||||
brace_level = -1
|
||||
p = begin = end = res.regs[0][0]
|
||||
for _ in range(1024 * 16):
|
||||
if text[p] == "}" and brace_level == 0:
|
||||
break
|
||||
elif text[p] == "}":
|
||||
brace_level -= 1
|
||||
elif text[p] == "{":
|
||||
brace_level += 1
|
||||
p += 1
|
||||
end = p + 1
|
||||
mask[begin:end] = PRESERVE
|
||||
return text, mask
|
||||
|
||||
|
||||
def reverse_forbidden_text_careful_brace(
|
||||
text, mask, pattern, flags=0, forbid_wrapper=True
|
||||
):
|
||||
"""
|
||||
Move area out of preserve area (make text editable for GPT)
|
||||
count the number of the braces so as to catch complete text area.
|
||||
e.g.
|
||||
\caption{blablablablabla\texbf{blablabla}blablabla.}
|
||||
"""
|
||||
pattern_compile = re.compile(pattern, flags)
|
||||
for res in pattern_compile.finditer(text):
|
||||
brace_level = 0
|
||||
p = begin = end = res.regs[1][0]
|
||||
for _ in range(1024 * 16):
|
||||
if text[p] == "}" and brace_level == 0:
|
||||
break
|
||||
elif text[p] == "}":
|
||||
brace_level -= 1
|
||||
elif text[p] == "{":
|
||||
brace_level += 1
|
||||
p += 1
|
||||
end = p
|
||||
mask[begin:end] = TRANSFORM
|
||||
if forbid_wrapper:
|
||||
mask[res.regs[0][0] : begin] = PRESERVE
|
||||
mask[end : res.regs[0][1]] = PRESERVE
|
||||
return text, mask
|
||||
|
||||
|
||||
def set_forbidden_text_begin_end(text, mask, pattern, flags=0, limit_n_lines=42):
|
||||
"""
|
||||
Find all \begin{} ... \end{} text block that with less than limit_n_lines lines.
|
||||
Add it to preserve area
|
||||
"""
|
||||
pattern_compile = re.compile(pattern, flags)
|
||||
|
||||
def search_with_line_limit(text, mask):
|
||||
for res in pattern_compile.finditer(text):
|
||||
cmd = res.group(1) # begin{what}
|
||||
this = res.group(2) # content between begin and end
|
||||
this_mask = mask[res.regs[2][0] : res.regs[2][1]]
|
||||
white_list = [
|
||||
"document",
|
||||
"abstract",
|
||||
"lemma",
|
||||
"definition",
|
||||
"sproof",
|
||||
"em",
|
||||
"emph",
|
||||
"textit",
|
||||
"textbf",
|
||||
"itemize",
|
||||
"enumerate",
|
||||
]
|
||||
if (cmd in white_list) or this.count(
|
||||
"\n"
|
||||
) >= limit_n_lines: # use a magical number 42
|
||||
this, this_mask = search_with_line_limit(this, this_mask)
|
||||
mask[res.regs[2][0] : res.regs[2][1]] = this_mask
|
||||
else:
|
||||
mask[res.regs[0][0] : res.regs[0][1]] = PRESERVE
|
||||
return text, mask
|
||||
|
||||
return search_with_line_limit(text, mask)
|
||||
|
||||
|
||||
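A toy demonstration of the masking primitives above, using a fragment with one equation (illustrative values only):

```python
import re
import numpy as np

tex = r"Intro text. \begin{equation}E=mc^2\end{equation} More text."
mask = np.zeros(len(tex), dtype=np.uint8) + TRANSFORM

# Protect the equation so GPT never edits it
tex, mask = set_forbidden_text(
    tex, mask, r"\\begin\{equation\}(.*?)\\end\{equation\}", re.DOTALL)

preserved = "".join(c for c, m in zip(tex, mask) if m == PRESERVE)
print(preserved)   # -> \begin{equation}E=mc^2\end{equation}
```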
"""
|
||||
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|
||||
Latex Merge File
|
||||
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|
||||
"""
|
||||
|
||||
|
||||
def find_main_tex_file(file_manifest, mode):
|
||||
"""
|
||||
在多Tex文档中,寻找主文件,必须包含documentclass,返回找到的第一个。
|
||||
P.S. 但愿没人把latex模板放在里面传进来 (6.25 加入判定latex模板的代码)
|
||||
"""
|
||||
candidates = []
|
||||
for texf in file_manifest:
|
||||
if os.path.basename(texf).startswith("merge"):
|
||||
continue
|
||||
with open(texf, "r", encoding="utf8", errors="ignore") as f:
|
||||
file_content = f.read()
|
||||
if r"\documentclass" in file_content:
|
||||
candidates.append(texf)
|
||||
else:
|
||||
continue
|
||||
|
||||
if len(candidates) == 0:
|
||||
raise RuntimeError("无法找到一个主Tex文件(包含documentclass关键字)")
|
||||
elif len(candidates) == 1:
|
||||
return candidates[0]
|
||||
else: # if len(candidates) >= 2 通过一些Latex模板中常见(但通常不会出现在正文)的单词,对不同latex源文件扣分,取评分最高者返回
|
||||
candidates_score = []
|
||||
# 给出一些判定模板文档的词作为扣分项
|
||||
unexpected_words = [
|
||||
"\\LaTeX",
|
||||
"manuscript",
|
||||
"Guidelines",
|
||||
"font",
|
||||
"citations",
|
||||
"rejected",
|
||||
"blind review",
|
||||
"reviewers",
|
||||
]
|
||||
expected_words = ["\\input", "\\ref", "\\cite"]
|
||||
for texf in candidates:
|
||||
candidates_score.append(0)
|
||||
with open(texf, "r", encoding="utf8", errors="ignore") as f:
|
||||
file_content = f.read()
|
||||
file_content = rm_comments(file_content)
|
||||
for uw in unexpected_words:
|
||||
if uw in file_content:
|
||||
candidates_score[-1] -= 1
|
||||
for uw in expected_words:
|
||||
if uw in file_content:
|
||||
candidates_score[-1] += 1
|
||||
select = np.argmax(candidates_score) # 取评分最高者返回
|
||||
return candidates[select]
|
||||
|
||||
|
||||
def rm_comments(main_file):
|
||||
new_file_remove_comment_lines = []
|
||||
for l in main_file.splitlines():
|
||||
# 删除整行的空注释
|
||||
if l.lstrip().startswith("%"):
|
||||
pass
|
||||
else:
|
||||
new_file_remove_comment_lines.append(l)
|
||||
main_file = "\n".join(new_file_remove_comment_lines)
|
||||
# main_file = re.sub(r"\\include{(.*?)}", r"\\input{\1}", main_file) # 将 \include 命令转换为 \input 命令
|
||||
main_file = re.sub(r"(?<!\\)%.*", "", main_file) # 使用正则表达式查找半行注释, 并替换为空字符串
|
||||
return main_file
|
||||
|
||||
|
||||
def find_tex_file_ignore_case(fp):
|
||||
dir_name = os.path.dirname(fp)
|
||||
base_name = os.path.basename(fp)
|
||||
# 如果输入的文件路径是正确的
|
||||
if os.path.isfile(pj(dir_name, base_name)):
|
||||
return pj(dir_name, base_name)
|
||||
# 如果不正确,试着加上.tex后缀试试
|
||||
if not base_name.endswith(".tex"):
|
||||
base_name += ".tex"
|
||||
if os.path.isfile(pj(dir_name, base_name)):
|
||||
return pj(dir_name, base_name)
|
||||
# 如果还找不到,解除大小写限制,再试一次
|
||||
import glob
|
||||
|
||||
for f in glob.glob(dir_name + "/*.tex"):
|
||||
base_name_s = os.path.basename(fp)
|
||||
base_name_f = os.path.basename(f)
|
||||
if base_name_s.lower() == base_name_f.lower():
|
||||
return f
|
||||
# 试着加上.tex后缀试试
|
||||
if not base_name_s.endswith(".tex"):
|
||||
base_name_s += ".tex"
|
||||
if base_name_s.lower() == base_name_f.lower():
|
||||
return f
|
||||
return None
|
||||
|
||||
|
||||
def merge_tex_files_(project_foler, main_file, mode):
|
||||
"""
|
||||
Merge Tex project recursively
|
||||
"""
|
||||
main_file = rm_comments(main_file)
|
||||
for s in reversed([q for q in re.finditer(r"\\input\{(.*?)\}", main_file, re.M)]):
|
||||
f = s.group(1)
|
||||
fp = os.path.join(project_foler, f)
|
||||
fp_ = find_tex_file_ignore_case(fp)
|
||||
if fp_:
|
||||
try:
|
||||
with open(fp_, "r", encoding="utf-8", errors="replace") as fx:
|
||||
c = fx.read()
|
||||
except:
|
||||
c = f"\n\nWarning from GPT-Academic: LaTex source file is missing!\n\n"
|
||||
else:
|
||||
raise RuntimeError(f"找不到{fp},Tex源文件缺失!")
|
||||
c = merge_tex_files_(project_foler, c, mode)
|
||||
main_file = main_file[: s.span()[0]] + c + main_file[s.span()[1] :]
|
||||
return main_file
|
||||
|
||||
|
||||
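rm_comments in action on a toy input; note the negative lookbehind keeps escaped percent signs intact:

```python
src = "\n".join([
    r"% full-line comment, dropped",
    r"\section{Intro}  % trailing comment, stripped",
    r"50\% growth",
])
print(rm_comments(src))
# -> \section{Intro}
#    50\% growth
```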
def find_title_and_abs(main_file):
    def extract_abstract_1(text):
        pattern = r"\\abstract\{(.*?)\}"
        match = re.search(pattern, text, re.DOTALL)
        if match:
            return match.group(1)
        else:
            return None

    def extract_abstract_2(text):
        pattern = r"\\begin\{abstract\}(.*?)\\end\{abstract\}"
        match = re.search(pattern, text, re.DOTALL)
        if match:
            return match.group(1)
        else:
            return None

    def extract_title(string):
        pattern = r"\\title\{(.*?)\}"
        match = re.search(pattern, string, re.DOTALL)

        if match:
            return match.group(1)
        else:
            return None

    abstract = extract_abstract_1(main_file)
    if abstract is None:
        abstract = extract_abstract_2(main_file)
    title = extract_title(main_file)
    return title, abstract


def merge_tex_files(project_foler, main_file, mode):
    """
    Merge a Tex project recursively
    P.S. also injects CTEX to support Chinese
    P.S. also removes LaTeX comments
    """
    main_file = merge_tex_files_(project_foler, main_file, mode)
    main_file = rm_comments(main_file)

    if mode == "translate_zh":
        # find the paper's documentclass
        pattern = re.compile(r"\\documentclass.*\n")
        match = pattern.search(main_file)
        assert match is not None, "Cannot find documentclass statement!"
        position = match.end()
        add_ctex = "\\usepackage{ctex}\n"
        add_url = "\\usepackage{url}\n" if "{url}" not in main_file else ""
        main_file = main_file[:position] + add_ctex + add_url + main_file[position:]
        # fontset=windows
        import platform

        main_file = re.sub(
            r"\\documentclass\[(.*?)\]{(.*?)}",
            r"\\documentclass[\1,fontset=windows,UTF8]{\2}",
            main_file,
        )
        main_file = re.sub(
            r"\\documentclass{(.*?)}",
            r"\\documentclass[fontset=windows,UTF8]{\1}",
            main_file,
        )
        # find the paper's abstract
        pattern_opt1 = re.compile(r"\\begin\{abstract\}.*\n")
        pattern_opt2 = re.compile(r"\\abstract\{(.*?)\}", flags=re.DOTALL)
        match_opt1 = pattern_opt1.search(main_file)
        match_opt2 = pattern_opt2.search(main_file)
        if (match_opt1 is None) and (match_opt2 is None):
            # "Cannot find paper abstract section!"
            main_file = insert_abstract(main_file)
        match_opt1 = pattern_opt1.search(main_file)
        match_opt2 = pattern_opt2.search(main_file)
        assert (match_opt1 is not None) or (
            match_opt2 is not None
        ), "Cannot find paper abstract section!"
    return main_file


insert_missing_abs_str = r"""
\begin{abstract}
The GPT-Academic program cannot find abstract section in this paper.
\end{abstract}
"""


def insert_abstract(tex_content):
    if "\\maketitle" in tex_content:
        # find the position of "\maketitle"
        find_index = tex_content.index("\\maketitle")
        # find the nearest line ending
        end_line_index = tex_content.find("\n", find_index)
        # insert the placeholder abstract on the next line
        modified_tex = (
            tex_content[: end_line_index + 1]
            + "\n\n"
            + insert_missing_abs_str
            + "\n\n"
            + tex_content[end_line_index + 1 :]
        )
        return modified_tex
    elif r"\begin{document}" in tex_content:
        # find the position of "\begin{document}"
        find_index = tex_content.index(r"\begin{document}")
        # find the nearest line ending
        end_line_index = tex_content.find("\n", find_index)
        # insert the placeholder abstract on the next line
        modified_tex = (
            tex_content[: end_line_index + 1]
            + "\n\n"
            + insert_missing_abs_str
            + "\n\n"
            + tex_content[end_line_index + 1 :]
        )
        return modified_tex
    else:
        return tex_content
"""
|
||||
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|
||||
Post process
|
||||
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|
||||
"""
|
||||
|
||||
|
||||
def mod_inbraket(match):
|
||||
"""
|
||||
为啥chatgpt会把cite里面的逗号换成中文逗号呀
|
||||
"""
|
||||
# get the matched string
|
||||
cmd = match.group(1)
|
||||
str_to_modify = match.group(2)
|
||||
# modify the matched string
|
||||
str_to_modify = str_to_modify.replace(":", ":") # 前面是中文冒号,后面是英文冒号
|
||||
str_to_modify = str_to_modify.replace(",", ",") # 前面是中文逗号,后面是英文逗号
|
||||
# str_to_modify = 'BOOM'
|
||||
return "\\" + cmd + "{" + str_to_modify + "}"
|
||||
|
||||
|
||||
def fix_content(final_tex, node_string):
|
||||
"""
|
||||
Fix common GPT errors to increase success rate
|
||||
"""
|
||||
final_tex = re.sub(r"(?<!\\)%", "\\%", final_tex)
|
||||
final_tex = re.sub(r"\\([a-z]{2,10})\ \{", r"\\\1{", string=final_tex)
|
||||
final_tex = re.sub(r"\\\ ([a-z]{2,10})\{", r"\\\1{", string=final_tex)
|
||||
final_tex = re.sub(r"\\([a-z]{2,10})\{([^\}]*?)\}", mod_inbraket, string=final_tex)
|
||||
|
||||
if "Traceback" in final_tex and "[Local Message]" in final_tex:
|
||||
final_tex = node_string # 出问题了,还原原文
|
||||
if node_string.count("\\begin") != final_tex.count("\\begin"):
|
||||
final_tex = node_string # 出问题了,还原原文
|
||||
if node_string.count("\_") > 0 and node_string.count("\_") > final_tex.count("\_"):
|
||||
# walk and replace any _ without \
|
||||
final_tex = re.sub(r"(?<!\\)_", "\\_", final_tex)
|
||||
|
||||
def compute_brace_level(string):
|
||||
# this function count the number of { and }
|
||||
brace_level = 0
|
||||
for c in string:
|
||||
if c == "{":
|
||||
brace_level += 1
|
||||
elif c == "}":
|
||||
brace_level -= 1
|
||||
return brace_level
|
||||
|
||||
def join_most(tex_t, tex_o):
|
||||
# this function join translated string and original string when something goes wrong
|
||||
p_t = 0
|
||||
p_o = 0
|
||||
|
||||
def find_next(string, chars, begin):
|
||||
p = begin
|
||||
while p < len(string):
|
||||
if string[p] in chars:
|
||||
return p, string[p]
|
||||
p += 1
|
||||
return None, None
|
||||
|
||||
while True:
|
||||
res1, char = find_next(tex_o, ["{", "}"], p_o)
|
||||
if res1 is None:
|
||||
break
|
||||
res2, char = find_next(tex_t, [char], p_t)
|
||||
if res2 is None:
|
||||
break
|
||||
p_o = res1 + 1
|
||||
p_t = res2 + 1
|
||||
return tex_t[:p_t] + tex_o[p_o:]
|
||||
|
||||
if compute_brace_level(final_tex) != compute_brace_level(node_string):
|
||||
# 出问题了,还原部分原文,保证括号正确
|
||||
final_tex = join_most(final_tex, node_string)
|
||||
return final_tex
|
||||
|
||||
|
||||
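fix_content in action on a typical GPT slip (a Chinese comma inside \cite and an unescaped percent sign; toy strings of my own):

```python
original = "as shown~\\cite{refA,refB}, 5% better"
broken   = "as shown~\\cite{refA,refB}, 5% better"   # GPT swapped in a Chinese comma
fixed = fix_content(broken, original)
print(fixed)   # -> as shown~\cite{refA,refB}, 5\% better
```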
def compile_latex_with_timeout(command, cwd, timeout=60):
    import subprocess

    process = subprocess.Popen(
        command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd
    )
    try:
        stdout, stderr = process.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        process.kill()
        stdout, stderr = process.communicate()
        logger.error("Process timed out (compile_latex_with_timeout)!")
        return False
    return True


def run_in_subprocess_wrapper_func(func, args, kwargs, return_dict, exception_dict):
    import sys

    try:
        result = func(*args, **kwargs)
        return_dict["result"] = result
    except Exception as e:
        exc_info = sys.exc_info()
        exception_dict["exception"] = exc_info


def run_in_subprocess(func):
    import multiprocessing

    def wrapper(*args, **kwargs):
        return_dict = multiprocessing.Manager().dict()
        exception_dict = multiprocessing.Manager().dict()
        process = multiprocessing.Process(
            target=run_in_subprocess_wrapper_func,
            args=(func, args, kwargs, return_dict, exception_dict),
        )
        process.start()
        process.join()
        process.close()
        if "exception" in exception_dict:
            # oops, the subprocess ran into an exception
            exc_info = exception_dict["exception"]
            raise exc_info[1].with_traceback(exc_info[2])
        if "result" in return_dict.keys():
            # If the subprocess ran successfully, return the result
            return return_dict["result"]

    return wrapper
def _merge_pdfs(pdf1_path, pdf2_path, output_path):
|
||||
try:
|
||||
logger.info("Merging PDFs using _merge_pdfs_ng")
|
||||
_merge_pdfs_ng(pdf1_path, pdf2_path, output_path)
|
||||
except:
|
||||
logger.info("Merging PDFs using _merge_pdfs_legacy")
|
||||
_merge_pdfs_legacy(pdf1_path, pdf2_path, output_path)
|
||||
|
||||
|
||||

def _merge_pdfs_ng(pdf1_path, pdf2_path, output_path):
    import PyPDF2  # PyPDF2 has a serious memory-leak problem; run this in a subprocess so the memory can be released
    from PyPDF2.generic import NameObject, TextStringObject, ArrayObject, FloatObject, NumberObject

    Percent = 1  # fraction of the second page's width kept when widening (1 = full width)
    # raise RuntimeError('PyPDF2 has a serious memory leak problem, please use other tools to merge PDF files.')
    # Open the first PDF file
    with open(pdf1_path, "rb") as pdf1_file:
        pdf1_reader = PyPDF2.PdfFileReader(pdf1_file)
        # Open the second PDF file
        with open(pdf2_path, "rb") as pdf2_file:
            pdf2_reader = PyPDF2.PdfFileReader(pdf2_file)
            # Create a new PDF file to store the merged pages
            output_writer = PyPDF2.PdfFileWriter()
            # Determine the number of pages in each PDF file
            num_pages = max(pdf1_reader.numPages, pdf2_reader.numPages)
            # Merge the pages from the two PDF files
            for page_num in range(num_pages):
                # Add the page from the first PDF file
                if page_num < pdf1_reader.numPages:
                    page1 = pdf1_reader.getPage(page_num)
                else:
                    page1 = PyPDF2.PageObject.createBlankPage(pdf1_reader)
                # Add the page from the second PDF file
                if page_num < pdf2_reader.numPages:
                    page2 = pdf2_reader.getPage(page_num)
                else:
                    page2 = PyPDF2.PageObject.createBlankPage(pdf1_reader)
                # Create a new empty page with double width
                new_page = PyPDF2.PageObject.createBlankPage(
                    width=int(int(page1.mediaBox.getWidth()) + int(page2.mediaBox.getWidth()) * Percent),
                    height=max(page1.mediaBox.getHeight(), page2.mediaBox.getHeight()),
                )
                new_page.mergeTranslatedPage(page1, 0, 0)
                new_page.mergeTranslatedPage(
                    page2,
                    int(int(page1.mediaBox.getWidth()) - int(page2.mediaBox.getWidth()) * (1 - Percent)),
                    0,
                )
                if "/Annots" in new_page:
                    annotations = new_page["/Annots"]
                    for i, annot in enumerate(annotations):
                        annot_obj = annot.get_object()

                        # Check whether this annotation is a link (/Link)
                        if annot_obj.get("/Subtype") == "/Link":
                            # Check whether it is an internal jump (/GoTo) or an external URI link (/URI)
                            action = annot_obj.get("/A")
                            if action:

                                if "/S" in action and action["/S"] == "/GoTo":
                                    # Internal link: jump to a page within the document
                                    dest = action.get("/D")  # target page or position
                                    # if dest and annot.idnum in page2_annot_id:
                                    #     if dest in pdf2_reader.named_destinations:
                                    if dest and page2.annotations:
                                        if annot in page2.annotations:
                                            # Fetch the jump info (including the target page) from the original file
                                            destination = pdf2_reader.named_destinations[dest]
                                            page_number = pdf2_reader.get_destination_page_number(destination)
                                            # Update the jump target: the corresponding page, explicit coordinates, zoom 100%
                                            # "/D": [10, '/XYZ', 100, 100, 0]
                                            if destination.dest_array[1] == "/XYZ":
                                                annot_obj["/A"].update(
                                                    {
                                                        NameObject("/D"): ArrayObject(
                                                            [
                                                                NumberObject(page_number),
                                                                destination.dest_array[1],
                                                                FloatObject(destination.dest_array[2] + int(page1.mediaBox.getWidth())),
                                                                destination.dest_array[3],
                                                                destination.dest_array[4],
                                                            ]
                                                        )  # make sure keys and values are PdfObject
                                                    }
                                                )
                                            else:
                                                annot_obj["/A"].update(
                                                    {
                                                        NameObject("/D"): ArrayObject(
                                                            [
                                                                NumberObject(page_number),
                                                                destination.dest_array[1],
                                                            ]
                                                        )  # make sure keys and values are PdfObject
                                                    }
                                                )

                                            rect = annot_obj.get("/Rect")
                                            # Shift the clickable rectangle to the right half of the merged page
                                            rect = ArrayObject(
                                                [
                                                    FloatObject(rect[0] + int(page1.mediaBox.getWidth())),
                                                    rect[1],
                                                    FloatObject(rect[2] + int(page1.mediaBox.getWidth())),
                                                    rect[3],
                                                ]
                                            )
                                            annot_obj.update(
                                                {
                                                    NameObject("/Rect"): rect  # make sure keys and values are PdfObject
                                                }
                                            )
                                    # if dest and annot.idnum in page1_annot_id:
                                    #     if dest in pdf1_reader.named_destinations:
                                    if dest and page1.annotations:
                                        if annot in page1.annotations:
                                            # Fetch the jump info (including the target page) from the original file
                                            destination = pdf1_reader.named_destinations[dest]
                                            page_number = pdf1_reader.get_destination_page_number(destination)
                                            # Update the jump target: the corresponding page, explicit coordinates, zoom 100%
                                            # "/D": [10, '/XYZ', 100, 100, 0]
                                            if destination.dest_array[1] == "/XYZ":
                                                annot_obj["/A"].update(
                                                    {
                                                        NameObject("/D"): ArrayObject(
                                                            [
                                                                NumberObject(page_number),
                                                                destination.dest_array[1],
                                                                FloatObject(destination.dest_array[2]),
                                                                destination.dest_array[3],
                                                                destination.dest_array[4],
                                                            ]
                                                        )  # make sure keys and values are PdfObject
                                                    }
                                                )
                                            else:
                                                annot_obj["/A"].update(
                                                    {
                                                        NameObject("/D"): ArrayObject(
                                                            [
                                                                NumberObject(page_number),
                                                                destination.dest_array[1],
                                                            ]
                                                        )  # make sure keys and values are PdfObject
                                                    }
                                                )

                                            rect = annot_obj.get("/Rect")
                                            # The left (original) page keeps its rectangle in place
                                            rect = ArrayObject(
                                                [
                                                    FloatObject(rect[0]),
                                                    rect[1],
                                                    FloatObject(rect[2]),
                                                    rect[3],
                                                ]
                                            )
                                            annot_obj.update(
                                                {
                                                    NameObject("/Rect"): rect  # make sure keys and values are PdfObject
                                                }
                                            )

                                elif "/S" in action and action["/S"] == "/URI":
                                    # External link: jumps to a URI (read but left unchanged)
                                    uri = action.get("/URI")
                output_writer.addPage(new_page)
            # Save the merged PDF file
            with open(output_path, "wb") as output_file:
                output_writer.write(output_file)

def _merge_pdfs_legacy(pdf1_path, pdf2_path, output_path):
    import PyPDF2  # PyPDF2 has a serious memory-leak problem; run this in a subprocess so the memory can be released

    Percent = 0.95
    # raise RuntimeError('PyPDF2 has a serious memory leak problem, please use other tools to merge PDF files.')
    # Open the first PDF file
    with open(pdf1_path, "rb") as pdf1_file:
        pdf1_reader = PyPDF2.PdfFileReader(pdf1_file)
        # Open the second PDF file
        with open(pdf2_path, "rb") as pdf2_file:
            pdf2_reader = PyPDF2.PdfFileReader(pdf2_file)
            # Create a new PDF file to store the merged pages
            output_writer = PyPDF2.PdfFileWriter()
            # Determine the number of pages in each PDF file
            num_pages = max(pdf1_reader.numPages, pdf2_reader.numPages)
            # Merge the pages from the two PDF files
            for page_num in range(num_pages):
                # Add the page from the first PDF file
                if page_num < pdf1_reader.numPages:
                    page1 = pdf1_reader.getPage(page_num)
                else:
                    page1 = PyPDF2.PageObject.createBlankPage(pdf1_reader)
                # Add the page from the second PDF file
                if page_num < pdf2_reader.numPages:
                    page2 = pdf2_reader.getPage(page_num)
                else:
                    page2 = PyPDF2.PageObject.createBlankPage(pdf1_reader)
                # Create a new empty page with double width
                new_page = PyPDF2.PageObject.createBlankPage(
                    width=int(int(page1.mediaBox.getWidth()) + int(page2.mediaBox.getWidth()) * Percent),
                    height=max(page1.mediaBox.getHeight(), page2.mediaBox.getHeight()),
                )
                new_page.mergeTranslatedPage(page1, 0, 0)
                new_page.mergeTranslatedPage(
                    page2,
                    int(int(page1.mediaBox.getWidth()) - int(page2.mediaBox.getWidth()) * (1 - Percent)),
                    0,
                )
                output_writer.addPage(new_page)
            # Save the merged PDF file
            with open(output_path, "wb") as output_file:
                output_writer.write(output_file)

merge_pdfs = run_in_subprocess(_merge_pdfs)  # PyPDF2 has a serious memory-leak problem; run it in a subprocess so the memory can be released
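
# Usage sketch (illustrative, hypothetical file names): merging an original
# and a translated PDF side by side via the subprocess wrapper above.
#
#   merge_pdfs("merge.pdf", "merge_translate_zh.pdf", "comparison.pdf")
#   # each output page holds the original on the left, the translation on the
#   # right, and /GoTo link annotations remapped onto the widened pages
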
@@ -1,773 +0,0 @@
from toolbox import update_ui, update_ui_lastest_msg  # refresh the Gradio front-end
from toolbox import zip_folder, objdump, objload, promote_file_to_downloadzone
import os, shutil
import re
import numpy as np
pj = os.path.join

"""
========================================================================
Part One
Latex segmentation with a binary mask (PRESERVE=0, TRANSFORM=1)
========================================================================
"""
PRESERVE = 0
TRANSFORM = 1

def set_forbidden_text(text, mask, pattern, flags=0):
    """
    Add a preserved text area in this paper
    e.g. with pattern = r"\\begin\{algorithm\}(.*?)\\end\{algorithm\}"
    you can mask out (mask = PRESERVE, so that the text becomes untouchable for GPT)
    everything between "\\begin{equation}" and "\\end{equation}"
    """
    if isinstance(pattern, list): pattern = '|'.join(pattern)
    pattern_compile = re.compile(pattern, flags)
    for res in pattern_compile.finditer(text):
        mask[res.span()[0]:res.span()[1]] = PRESERVE
    return text, mask
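
# Illustrative sketch (hypothetical values): masking an equation environment so
# GPT never edits it. The mask starts as all-TRANSFORM and flips to PRESERVE
# over each regex match.
#
#   text = r"intro \begin{equation} E=mc^2 \end{equation} outro"
#   mask = np.zeros(len(text), dtype=np.uint8) + TRANSFORM
#   text, mask = set_forbidden_text(
#       text, mask, r"\\begin\{equation\}(.*?)\\end\{equation\}", re.DOTALL)
#   # mask is now PRESERVE over the whole \begin{equation}...\end{equation} span
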
def set_forbidden_text_careful_brace(text, mask, pattern, flags=0):
    """
    Add a preserved text area in this paper (the text becomes untouchable for GPT).
    Count the braces so as to catch the complete text area.
    e.g.
    \\caption{blablablablabla\\textbf{blablabla}blablabla.}
    """
    pattern_compile = re.compile(pattern, flags)
    for res in pattern_compile.finditer(text):
        brace_level = -1
        p = begin = end = res.regs[0][0]
        for _ in range(1024*16):
            if text[p] == '}' and brace_level == 0: break
            elif text[p] == '}': brace_level -= 1
            elif text[p] == '{': brace_level += 1
            p += 1
        end = p+1
        mask[begin:end] = PRESERVE
    return text, mask

def reverse_forbidden_text_careful_brace(text, mask, pattern, flags=0, forbid_wrapper=True):
    """
    Move an area out of the preserved area (make the text editable for GPT).
    Count the braces so as to catch the complete text area.
    e.g.
    \\caption{blablablablabla\\textbf{blablabla}blablabla.}
    """
    pattern_compile = re.compile(pattern, flags)
    for res in pattern_compile.finditer(text):
        brace_level = 0
        p = begin = end = res.regs[1][0]
        for _ in range(1024*16):
            if text[p] == '}' and brace_level == 0: break
            elif text[p] == '}': brace_level -= 1
            elif text[p] == '{': brace_level += 1
            p += 1
        end = p
        mask[begin:end] = TRANSFORM
        if forbid_wrapper:
            mask[res.regs[0][0]:begin] = PRESERVE
            mask[end:res.regs[0][1]] = PRESERVE
    return text, mask
def set_forbidden_text_begin_end(text, mask, pattern, flags=0, limit_n_lines=42):
    """
    Find all \\begin{} ... \\end{} text blocks with fewer than limit_n_lines lines.
    Add them to the preserved area.
    """
    pattern_compile = re.compile(pattern, flags)
    def search_with_line_limit(text, mask):
        for res in pattern_compile.finditer(text):
            cmd = res.group(1)   # begin{what}
            this = res.group(2)  # content between begin and end
            this_mask = mask[res.regs[2][0]:res.regs[2][1]]
            white_list = ['document', 'abstract', 'lemma', 'definition', 'sproof',
                          'em', 'emph', 'textit', 'textbf', 'itemize', 'enumerate']
            if (cmd in white_list) or this.count('\n') >= limit_n_lines:  # use the magic number 42
                this, this_mask = search_with_line_limit(this, this_mask)
                mask[res.regs[2][0]:res.regs[2][1]] = this_mask
            else:
                mask[res.regs[0][0]:res.regs[0][1]] = PRESERVE
        return text, mask
    return search_with_line_limit(text, mask)
class LinkedListNode():
    """
    Linked List Node
    """
    def __init__(self, string, preserve=True) -> None:
        self.string = string
        self.preserve = preserve
        self.next = None
        # self.begin_line = 0
        # self.begin_char = 0

def convert_to_linklist(text, mask):
    root = LinkedListNode("", preserve=True)
    current_node = root
    for c, m in zip(text, mask):
        if (m == PRESERVE and current_node.preserve) \
            or (m == TRANSFORM and not current_node.preserve):
            # same flag as the current node: append the character
            current_node.string += c
        else:
            current_node.next = LinkedListNode(c, preserve=(m == PRESERVE))
            current_node = current_node.next
    return root
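
# Illustrative sketch (hypothetical values): turning a masked document into
# alternating preserve/transform nodes.
#
#   text = "keep EDIT keep"
#   mask = np.array([PRESERVE]*5 + [TRANSFORM]*4 + [PRESERVE]*5, dtype=np.uint8)
#   root = convert_to_linklist(text, mask)
#   # root           -> "keep " (preserve=True)
#   # root.next      -> "EDIT"  (preserve=False)
#   # root.next.next -> " keep" (preserve=True)
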
"""
========================================================================
Latex Merge File
========================================================================
"""

def 寻找Latex主文件(file_manifest, mode):
    """
    Among multiple .tex documents, find the main file; it must contain \\documentclass.
    Return the first one found.
    P.S. hopefully nobody passes in a LaTeX template along with the sources
    (code to detect LaTeX templates was added on 6.25)
    """
    candidates = []
    for texf in file_manifest:
        if os.path.basename(texf).startswith('merge'):
            continue
        with open(texf, 'r', encoding='utf8') as f:
            file_content = f.read()
        if r'\documentclass' in file_content:
            candidates.append(texf)
        else:
            continue

    if len(candidates) == 0:
        raise RuntimeError('无法找到一个主Tex文件(包含documentclass关键字)')
    elif len(candidates) == 1:
        return candidates[0]
    else:  # len(candidates) >= 2: penalize each source for words common in LaTeX templates (but rarely in a real paper body), then return the highest-scoring one
        candidates_score = []
        # words that indicate a template document count as penalties
        unexpected_words = [r'\LaTeX', 'manuscript', 'Guidelines', 'font', 'citations', 'rejected', 'blind review', 'reviewers']
        expected_words = [r'\input', r'\ref', r'\cite']
        for texf in candidates:
            candidates_score.append(0)
            with open(texf, 'r', encoding='utf8') as f:
                file_content = f.read()
            for uw in unexpected_words:
                if uw in file_content:
                    candidates_score[-1] -= 1
            for uw in expected_words:
                if uw in file_content:
                    candidates_score[-1] += 1
        select = np.argmax(candidates_score)  # return the highest-scoring candidate
        return candidates[select]
def rm_comments(main_file):
    new_file_remove_comment_lines = []
    for l in main_file.splitlines():
        # drop lines that are entirely comments
        if l.lstrip().startswith("%"):
            pass
        else:
            new_file_remove_comment_lines.append(l)
    main_file = '\n'.join(new_file_remove_comment_lines)
    # main_file = re.sub(r"\\include{(.*?)}", r"\\input{\1}", main_file)  # convert \include commands into \input commands
    main_file = re.sub(r'(?<!\\)%.*', '', main_file)  # find trailing (half-line) comments with a regex and replace them with an empty string
    return main_file
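
# Illustrative sketch (hypothetical input): the negative lookbehind keeps
# escaped percent signs while stripping real comments.
#
#   src = "line one % a comment\n50\\% discount\n% full-line comment"
#   print(rm_comments(src))
#   # -> "line one \n50\\% discount"
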
def merge_tex_files_(project_folder, main_file, mode):
    """
    Merge a Tex project recursively
    """
    main_file = rm_comments(main_file)
    for s in reversed([q for q in re.finditer(r"\\input\{(.*?)\}", main_file, re.M)]):
        f = s.group(1)
        fp = os.path.join(project_folder, f)
        if os.path.exists(fp):
            # e.g., \input{srcs/07_appendix.tex}
            with open(fp, 'r', encoding='utf-8', errors='replace') as fx:
                c = fx.read()
        else:
            # e.g., \input{srcs/07_appendix}
            with open(fp+'.tex', 'r', encoding='utf-8', errors='replace') as fx:
                c = fx.read()
        c = merge_tex_files_(project_folder, c, mode)
        main_file = main_file[:s.span()[0]] + c + main_file[s.span()[1]:]
    return main_file

def merge_tex_files(project_folder, main_file, mode):
    """
    Merge a Tex project recursively
    P.S. also inject CTEX to support Chinese
    P.S. also strip the LaTeX comments
    """
    main_file = merge_tex_files_(project_folder, main_file, mode)
    main_file = rm_comments(main_file)

    if mode == 'translate_zh':
        # find the documentclass of the paper
        pattern = re.compile(r'\\documentclass.*\n')
        match = pattern.search(main_file)
        assert match is not None, "Cannot find documentclass statement!"
        position = match.end()
        add_ctex = '\\usepackage{ctex}\n'
        add_url = '\\usepackage{url}\n' if '{url}' not in main_file else ''
        main_file = main_file[:position] + add_ctex + add_url + main_file[position:]
        # fontset=windows
        import platform
        main_file = re.sub(r"\\documentclass\[(.*?)\]{(.*?)}", r"\\documentclass[\1,fontset=windows,UTF8]{\2}", main_file)
        main_file = re.sub(r"\\documentclass{(.*?)}", r"\\documentclass[fontset=windows,UTF8]{\1}", main_file)
        # find the abstract of the paper
        pattern_opt1 = re.compile(r'\\begin\{abstract\}.*\n')
        pattern_opt2 = re.compile(r"\\abstract\{(.*?)\}", flags=re.DOTALL)
        match_opt1 = pattern_opt1.search(main_file)
        match_opt2 = pattern_opt2.search(main_file)
        assert (match_opt1 is not None) or (match_opt2 is not None), "Cannot find paper abstract section!"
    return main_file
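
# Illustrative sketch (hypothetical input): how the ctex injection rewrites a
# preamble when mode == 'translate_zh'.
#
#   before: \documentclass[11pt]{article}
#   after:  \documentclass[11pt,fontset=windows,UTF8]{article}
#           \usepackage{ctex}
#           \usepackage{url}
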

"""
========================================================================
Post process
========================================================================
"""
def mod_inbraket(match):
    """
    Why does ChatGPT replace the commas inside \\cite{} with full-width Chinese commas?
    """
    # get the matched string
    cmd = match.group(1)
    str_to_modify = match.group(2)
    # modify the matched string
    str_to_modify = str_to_modify.replace(':', ':')  # full-width colon -> ASCII colon
    str_to_modify = str_to_modify.replace(',', ',')  # full-width comma -> ASCII comma
    # str_to_modify = 'BOOM'
    return "\\" + cmd + "{" + str_to_modify + "}"
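
# Illustrative sketch (hypothetical input): mod_inbraket is used as a re.sub
# callback to repair full-width punctuation inside LaTeX command arguments.
#
#   broken = "\\cite{zhang2020,li2021}"   # GPT swapped in a Chinese comma
#   fixed = re.sub(r"\\([a-z]{2,10})\{([^\}]*?)\}", mod_inbraket, broken)
#   # -> "\cite{zhang2020,li2021}" with an ASCII comma
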
def fix_content(final_tex, node_string):
    """
    Fix common GPT errors to increase the success rate
    """
    final_tex = re.sub(r"(?<!\\)%", "\\%", final_tex)
    final_tex = re.sub(r"\\([a-z]{2,10})\ \{", r"\\\1{", string=final_tex)
    final_tex = re.sub(r"\\\ ([a-z]{2,10})\{", r"\\\1{", string=final_tex)
    final_tex = re.sub(r"\\([a-z]{2,10})\{([^\}]*?)\}", mod_inbraket, string=final_tex)

    if "Traceback" in final_tex and "[Local Message]" in final_tex:
        final_tex = node_string  # something went wrong: restore the original text
    if node_string.count('\\begin') != final_tex.count('\\begin'):
        final_tex = node_string  # something went wrong: restore the original text
    if node_string.count('\_') > 0 and node_string.count('\_') > final_tex.count('\_'):
        # walk through and escape any _ that lacks a \
        final_tex = re.sub(r"(?<!\\)_", "\\_", final_tex)

    def compute_brace_level(string):
        # count the number of { and }
        brace_level = 0
        for c in string:
            if c == "{": brace_level += 1
            elif c == "}": brace_level -= 1
        return brace_level
    def join_most(tex_t, tex_o):
        # joins the translated string with the original string when something goes wrong
        p_t = 0
        p_o = 0
        def find_next(string, chars, begin):
            p = begin
            while p < len(string):
                if string[p] in chars: return p, string[p]
                p += 1
            return None, None
        while True:
            res1, char = find_next(tex_o, ['{','}'], p_o)
            if res1 is None: break
            res2, char = find_next(tex_t, [char], p_t)
            if res2 is None: break
            p_o = res1 + 1
            p_t = res2 + 1
        return tex_t[:p_t] + tex_o[p_o:]

    if compute_brace_level(final_tex) != compute_brace_level(node_string):
        # something went wrong: restore part of the original text to keep the braces balanced
        final_tex = join_most(final_tex, node_string)
    return final_tex

def split_subprocess(txt, project_folder, return_dict, opts):
    """
    break down the latex file into a linked list;
    each node carries a preserve flag to indicate whether it should
    be processed by GPT.
    """
    text = txt
    mask = np.zeros(len(txt), dtype=np.uint8) + TRANSFORM

    # absorb everything above the title and authors
    text, mask = set_forbidden_text(text, mask, r"(.*?)\\maketitle", re.DOTALL)
    # absorb iffalse comments
    text, mask = set_forbidden_text(text, mask, r"\\iffalse(.*?)\\fi", re.DOTALL)
    # absorb begin-end combinations shorter than 42 lines
    text, mask = set_forbidden_text_begin_end(text, mask, r"\\begin\{([a-z\*]*)\}(.*?)\\end\{\1\}", re.DOTALL, limit_n_lines=42)
    # absorb anonymous formulas
    text, mask = set_forbidden_text(text, mask, [ r"\$\$(.*?)\$\$", r"\\\[.*?\\\]" ], re.DOTALL)
    # absorb miscellaneous commands
    text, mask = set_forbidden_text(text, mask, [ r"\\section\{(.*?)\}", r"\\section\*\{(.*?)\}", r"\\subsection\{(.*?)\}", r"\\subsubsection\{(.*?)\}" ])
    text, mask = set_forbidden_text(text, mask, [ r"\\bibliography\{(.*?)\}", r"\\bibliographystyle\{(.*?)\}" ])
    text, mask = set_forbidden_text(text, mask, r"\\begin\{thebibliography\}.*?\\end\{thebibliography\}", re.DOTALL)
    text, mask = set_forbidden_text(text, mask, r"\\begin\{lstlisting\}(.*?)\\end\{lstlisting\}", re.DOTALL)
    text, mask = set_forbidden_text(text, mask, r"\\begin\{wraptable\}(.*?)\\end\{wraptable\}", re.DOTALL)
    text, mask = set_forbidden_text(text, mask, r"\\begin\{algorithm\}(.*?)\\end\{algorithm\}", re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{wrapfigure\}(.*?)\\end\{wrapfigure\}", r"\\begin\{wrapfigure\*\}(.*?)\\end\{wrapfigure\*\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{figure\}(.*?)\\end\{figure\}", r"\\begin\{figure\*\}(.*?)\\end\{figure\*\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{multline\}(.*?)\\end\{multline\}", r"\\begin\{multline\*\}(.*?)\\end\{multline\*\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{table\}(.*?)\\end\{table\}", r"\\begin\{table\*\}(.*?)\\end\{table\*\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{minipage\}(.*?)\\end\{minipage\}", r"\\begin\{minipage\*\}(.*?)\\end\{minipage\*\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{align\*\}(.*?)\\end\{align\*\}", r"\\begin\{align\}(.*?)\\end\{align\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\begin\{equation\}(.*?)\\end\{equation\}", r"\\begin\{equation\*\}(.*?)\\end\{equation\*\}"], re.DOTALL)
    text, mask = set_forbidden_text(text, mask, [r"\\includepdf\[(.*?)\]\{(.*?)\}", r"\\clearpage", r"\\newpage", r"\\appendix", r"\\tableofcontents", r"\\include\{(.*?)\}"])
    text, mask = set_forbidden_text(text, mask, [r"\\vspace\{(.*?)\}", r"\\hspace\{(.*?)\}", r"\\label\{(.*?)\}", r"\\begin\{(.*?)\}", r"\\end\{(.*?)\}", r"\\item "])
    text, mask = set_forbidden_text_careful_brace(text, mask, r"\\hl\{(.*?)\}", re.DOTALL)
    # the reverse operations must come last
    text, mask = reverse_forbidden_text_careful_brace(text, mask, r"\\caption\{(.*?)\}", re.DOTALL, forbid_wrapper=True)
    text, mask = reverse_forbidden_text_careful_brace(text, mask, r"\\abstract\{(.*?)\}", re.DOTALL, forbid_wrapper=True)
    root = convert_to_linklist(text, mask)

    # fix unbalanced braces
    node = root
    while True:
        string = node.string
        if node.preserve:
            node = node.next
            if node is None: break
            continue
        def break_check(string):
            str_stack = [""]  # (lv, index)
            for i, c in enumerate(string):
                if c == '{':
                    str_stack.append('{')
                elif c == '}':
                    if len(str_stack) == 1:
                        print('stack fix')
                        return i
                    str_stack.pop(-1)
                else:
                    str_stack[-1] += c
            return -1
        bp = break_check(string)

        if bp == -1:
            pass
        elif bp == 0:
            node.string = string[:1]
            q = LinkedListNode(string[1:], False)
            q.next = node.next
            node.next = q
        else:
            node.string = string[:bp]
            q = LinkedListNode(string[bp:], False)
            q.next = node.next
            node.next = q

        node = node.next
        if node is None: break

    # mask out empty lines and sentences that are too short
    node = root
    while True:
        if len(node.string.strip('\n')) == 0: node.preserve = True
        if len(node.string.strip('\n')) < 42: node.preserve = True
        node = node.next
        if node is None: break
    node = root
    while True:
        if node.next and node.preserve and node.next.preserve:
            node.string += node.next.string
            node.next = node.next.next
        node = node.next
        if node is None: break

    # detach leading and trailing line breaks
    node = root
    prev_node = None
    while True:
        if not node.preserve:
            lstriped_ = node.string.lstrip().lstrip('\n')
            if (prev_node is not None) and (prev_node.preserve) and (len(lstriped_) != len(node.string)):
                prev_node.string += node.string[:-len(lstriped_)]
                node.string = lstriped_
            rstriped_ = node.string.rstrip().rstrip('\n')
            if (node.next is not None) and (node.next.preserve) and (len(rstriped_) != len(node.string)):
                node.next.string = node.string[len(rstriped_):] + node.next.string
                node.string = rstriped_
        # =====
        prev_node = node
        node = node.next
        if node is None: break
    # write an html debug file: preserved areas (PRESERVE) marked in red, converted areas (TRANSFORM) in black
    with open(pj(project_folder, 'debug_log.html'), 'w', encoding='utf8') as f:
        segment_parts_for_gpt = []
        nodes = []
        node = root
        while True:
            nodes.append(node)
            show_html = node.string.replace('\n', '<br/>')
            if not node.preserve:
                segment_parts_for_gpt.append(node.string)
                f.write(f'<p style="color:black;">#{show_html}#</p>')
            else:
                f.write(f'<p style="color:red;">{show_html}</p>')
            node = node.next
            if node is None: break

    for n in nodes: n.next = None  # break the links so the nodes can be pickled
    return_dict['nodes'] = nodes
    return_dict['segment_parts_for_gpt'] = segment_parts_for_gpt
    return return_dict

class LatexPaperSplit():
    """
    break down the latex file into a linked list;
    each node carries a preserve flag to indicate whether it should
    be processed by GPT.
    """
    def __init__(self) -> None:
        self.nodes = None
        self.msg = "*{\\scriptsize\\textbf{警告:该PDF由GPT-Academic开源项目调用大语言模型+Latex翻译插件一键生成," + \
            "版权归原文作者所有。翻译内容可靠性无保障,请仔细鉴别并以原文为准。" + \
            "项目Github地址 \\url{https://github.com/binary-husky/gpt_academic/}。"
        # Please do not remove or modify this warning unless you are the original author of the paper
        # (if you are, you are welcome to contact the developer via the QQ listed in the README)
        self.msg_declare = "为了防止大语言模型的意外谬误产生扩散影响,禁止移除或修改此警告。}}\\\\"

    def merge_result(self, arr, mode, msg):
        """
        Merge the results after the GPT process has completed
        """
        result_string = ""
        p = 0
        for node in self.nodes:
            if node.preserve:
                result_string += node.string
            else:
                result_string += fix_content(arr[p], node.string)
                p += 1
        if mode == 'translate_zh':
            pattern = re.compile(r'\\begin\{abstract\}.*\n')
            match = pattern.search(result_string)
            if not match:
                # match \abstract{xxxx}
                pattern_compile = re.compile(r"\\abstract\{(.*?)\}", flags=re.DOTALL)
                match = pattern_compile.search(result_string)
                position = match.regs[1][0]
            else:
                # match \begin{abstract}xxxx\end{abstract}
                position = match.end()
            result_string = result_string[:position] + self.msg + msg + self.msg_declare + result_string[position:]
        return result_string

    def split(self, txt, project_folder, opts):
        """
        break down the latex file into a linked list;
        each node carries a preserve flag to indicate whether it should
        be processed by GPT.
        P.S. use multiprocessing to avoid timeout errors
        """
        import multiprocessing
        manager = multiprocessing.Manager()
        return_dict = manager.dict()
        p = multiprocessing.Process(
            target=split_subprocess,
            args=(txt, project_folder, return_dict, opts))
        p.start()
        p.join()
        p.close()
        self.nodes = return_dict['nodes']
        self.sp = return_dict['segment_parts_for_gpt']
        return self.sp
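
# Usage sketch (illustrative): split, translate, then merge. The GPT call is
# elided; `translated` stands for the model outputs, one per editable segment.
#
#   lps = LatexPaperSplit()
#   segments = lps.split(merged_tex, project_folder, opts=[])
#   translated = [gpt(seg) for seg in segments]      # hypothetical model call
#   final_tex = lps.merge_result(translated, mode='translate_zh', msg='...')
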
class LatexPaperFileGroup():
    """
    use the tokenizer to break down text according to max_token_limit
    """
    def __init__(self):
        self.file_paths = []
        self.file_contents = []
        self.sp_file_contents = []
        self.sp_file_index = []
        self.sp_file_tag = []

        # count_token
        from request_llm.bridge_all import model_info
        enc = model_info["gpt-3.5-turbo"]['tokenizer']
        def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
        self.get_token_num = get_token_num

    def run_file_split(self, max_token_limit=1900):
        """
        use the tokenizer to break down text according to max_token_limit
        """
        for index, file_content in enumerate(self.file_contents):
            if self.get_token_num(file_content) < max_token_limit:
                self.sp_file_contents.append(file_content)
                self.sp_file_index.append(index)
                self.sp_file_tag.append(self.file_paths[index])
            else:
                from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
                segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
                for j, segment in enumerate(segments):
                    self.sp_file_contents.append(segment)
                    self.sp_file_index.append(index)
                    self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex")
        print('Segmentation: done')

    def merge_result(self):
        self.file_result = ["" for _ in range(len(self.file_paths))]
        for r, k in zip(self.sp_file_result, self.sp_file_index):
            self.file_result[k] += r

    def write_result(self):
        manifest = []
        for path, res in zip(self.file_paths, self.file_result):
            with open(path + '.polish.tex', 'w', encoding='utf8') as f:
                manifest.append(path + '.polish.tex')
                f.write(res)
        return manifest
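
# Usage sketch (illustrative): grouping segments and splitting any that exceed
# the token budget before sending them to the model.
#
#   pfg = LatexPaperFileGroup()
#   for i, seg in enumerate(segments):
#       pfg.file_paths.append('segment-' + str(i))
#       pfg.file_contents.append(seg)
#   pfg.run_file_split(max_token_limit=1024)
#   # pfg.sp_file_contents now holds pieces small enough for one request each
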
def write_html(sp_file_contents, sp_file_result, chatbot, project_folder):

    # write html
    try:
        import shutil
        from .crazy_utils import construct_html
        from toolbox import gen_time_str
        ch = construct_html()
        orig = ""
        trans = ""
        final = []
        for c, r in zip(sp_file_contents, sp_file_result):
            final.append(c)
            final.append(r)
        for i, k in enumerate(final):
            if i % 2 == 0:
                orig = k
            if i % 2 == 1:
                trans = k
                ch.add_row(a=orig, b=trans)
        create_report_file_name = f"{gen_time_str()}.trans.html"
        ch.save_file(create_report_file_name)
        shutil.copyfile(pj('./gpt_log/', create_report_file_name), pj(project_folder, create_report_file_name))
        promote_file_to_downloadzone(file=f'./gpt_log/{create_report_file_name}', chatbot=chatbot)
    except Exception:
        from toolbox import trimmed_format_exc
        print('writing html result failed:', trimmed_format_exc())

def Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, mode='proofread', switch_prompt=None, opts=[]):
    import time, os, re
    from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
    from .latex_utils import LatexPaperFileGroup, merge_tex_files, LatexPaperSplit, 寻找Latex主文件

    # <-------- find the main .tex file ---------->
    maintex = 寻找Latex主文件(file_manifest, mode)
    chatbot.append((f"定位主Latex文件", f'[Local Message] 分析结果:该项目的Latex主文件是{maintex}, 如果分析错误, 请立即终止程序, 删除或修改歧义文件, 然后重试。主程序即将开始, 请稍候。'))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
    time.sleep(3)

    # <-------- read the Latex files; merge the multi-file tex project into one giant tex ---------->
    main_tex_basename = os.path.basename(maintex)
    assert main_tex_basename.endswith('.tex')
    main_tex_basename_bare = main_tex_basename[:-4]
    may_exist_bbl = pj(project_folder, f'{main_tex_basename_bare}.bbl')
    if os.path.exists(may_exist_bbl):
        shutil.copyfile(may_exist_bbl, pj(project_folder, f'merge.bbl'))
        shutil.copyfile(may_exist_bbl, pj(project_folder, f'merge_{mode}.bbl'))
        shutil.copyfile(may_exist_bbl, pj(project_folder, f'merge_diff.bbl'))

    with open(maintex, 'r', encoding='utf-8', errors='replace') as f:
        content = f.read()
        merged_content = merge_tex_files(project_folder, content, mode)

    with open(project_folder + '/merge.tex', 'w', encoding='utf-8', errors='replace') as f:
        f.write(merged_content)

    # <-------- finely split the latex file ---------->
    chatbot.append((f"Latex文件融合完成", f'[Local Message] 正在精细切分latex文件,这需要一段时间计算,文档越长耗时越长,请耐心等待。'))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
    lps = LatexPaperSplit()
    res = lps.split(merged_content, project_folder, opts)  # time-consuming call

    # <-------- split latex fragments that are too long ---------->
    pfg = LatexPaperFileGroup()
    for index, r in enumerate(res):
        pfg.file_paths.append('segment-' + str(index))
        pfg.file_contents.append(r)

    pfg.run_file_split(max_token_limit=1024)
    n_split = len(pfg.sp_file_contents)

    # <-------- switch the prompt as needed ---------->
    inputs_array, sys_prompt_array = switch_prompt(pfg, mode)
    inputs_show_user_array = [f"{mode} {f}" for f in pfg.sp_file_tag]

    if os.path.exists(pj(project_folder, 'temp.pkl')):

        # <-------- [debug only] if a debug cache file exists, skip the GPT request stage ---------->
        pfg = objload(file=pj(project_folder, 'temp.pkl'))

    else:
        # <-------- multi-threaded GPT requests ---------->
        gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
            inputs_array=inputs_array,
            inputs_show_user_array=inputs_show_user_array,
            llm_kwargs=llm_kwargs,
            chatbot=chatbot,
            history_array=[[""] for _ in range(n_split)],
            sys_prompt_array=sys_prompt_array,
            # max_workers=5,  # parallel task limit: at most 5 run at once, the rest queue up
            scroller_max_len=40
        )

        # <-------- reassemble the text fragments into complete tex pieces ---------->
        pfg.sp_file_result = []
        for i_say, gpt_say, orig_content in zip(gpt_response_collection[0::2], gpt_response_collection[1::2], pfg.sp_file_contents):
            pfg.sp_file_result.append(gpt_say)
        pfg.merge_result()

        # <-------- temporary storage for debugging ---------->
        pfg.get_token_num = None
        objdump(pfg, file=pj(project_folder, 'temp.pkl'))

    write_html(pfg.sp_file_contents, pfg.sp_file_result, chatbot=chatbot, project_folder=project_folder)

    # <-------- write out the files ---------->
    msg = f"当前大语言模型: {llm_kwargs['llm_model']},当前语言模型温度设定: {llm_kwargs['temperature']}。"
    final_tex = lps.merge_result(pfg.file_result, mode, msg)
    with open(project_folder + f'/merge_{mode}.tex', 'w', encoding='utf-8', errors='replace') as f:
        if mode != 'translate_zh' or "binary" in final_tex: f.write(final_tex)

    # <-------- wrap up and exit ---------->
    chatbot.append((f"完成了吗?", 'GPT结果已输出, 正在编译PDF'))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI

    # <-------- return ---------->
    return project_folder + f'/merge_{mode}.tex'

def remove_buggy_lines(file_path, log_path, tex_name, tex_name_pure, n_fix, work_folder_modified):
    try:
        with open(log_path, 'r', encoding='utf-8', errors='replace') as f:
            log = f.read()
        with open(file_path, 'r', encoding='utf-8', errors='replace') as f:
            file_lines = f.readlines()
        import re
        buggy_lines = re.findall(tex_name+':([0-9]{1,5}):', log)
        buggy_lines = [int(l) for l in buggy_lines]
        buggy_lines = sorted(buggy_lines)
        print("removing the lines that have errors", buggy_lines)
        file_lines.pop(buggy_lines[0]-1)
        with open(pj(work_folder_modified, f"{tex_name_pure}_fix_{n_fix}.tex"), 'w', encoding='utf-8', errors='replace') as f:
            f.writelines(file_lines)
        return True, f"{tex_name_pure}_fix_{n_fix}", buggy_lines
    except Exception:
        print("Fatal error occurred, but we cannot identify the error; please download the zip, read the latex log, and compile manually.")
        return False, -1, [-1]
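
# Illustrative sketch (hypothetical log text): pdflatex is run with
# -file-line-error, so errors appear as "<file>:<line>: <message>" and the
# same regex used above can pull the offending line numbers out of the log.
#
#   log = "merge_translate_zh.tex:87: Undefined control sequence."
#   buggy = re.findall('merge_translate_zh.tex' + ':([0-9]{1,5}):', log)
#   # -> ['87']; remove_buggy_lines drops line 87 and writes a _fix_1.tex
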

def compile_latex_with_timeout(command, timeout=60):
    import subprocess
    process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    try:
        stdout, stderr = process.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        process.kill()
        stdout, stderr = process.communicate()
        print("Process timed out!")
        return False
    return True

def 编译Latex(chatbot, history, main_file_original, main_file_modified, work_folder_original, work_folder_modified, work_folder, mode='default'):
    import os, time
    current_dir = os.getcwd()
    n_fix = 1
    max_try = 32
    chatbot.append([f"正在编译PDF文档", f'编译已经开始。当前工作路径为{work_folder},如果程序停顿5分钟以上,请直接去该路径下取回翻译结果,或者重启之后再度尝试 ...']); yield from update_ui(chatbot=chatbot, history=history)
    chatbot.append([f"正在编译PDF文档", '...']); yield from update_ui(chatbot=chatbot, history=history); time.sleep(1); chatbot[-1] = list(chatbot[-1])  # refresh the UI
    yield from update_ui_lastest_msg('编译已经开始...', chatbot, history)  # refresh the Gradio front-end

    while True:
        import os

        # https://stackoverflow.com/questions/738755/dont-make-me-manually-abort-a-latex-compile-when-theres-an-error
        yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译原始PDF ...', chatbot, history)
        os.chdir(work_folder_original); ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex'); os.chdir(current_dir)

        yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译转化后的PDF ...', chatbot, history)
        os.chdir(work_folder_modified); ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex'); os.chdir(current_dir)

        if ok and os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf')):
            # only continue with the steps below if the second compile succeeded
            yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译BibTex ...', chatbot, history)
            if not os.path.exists(pj(work_folder_original, f'{main_file_original}.bbl')):
                os.chdir(work_folder_original); ok = compile_latex_with_timeout(f'bibtex {main_file_original}.aux'); os.chdir(current_dir)
            if not os.path.exists(pj(work_folder_modified, f'{main_file_modified}.bbl')):
                os.chdir(work_folder_modified); ok = compile_latex_with_timeout(f'bibtex {main_file_modified}.aux'); os.chdir(current_dir)

            yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 编译文献交叉引用 ...', chatbot, history)
            os.chdir(work_folder_original); ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex'); os.chdir(current_dir)
            os.chdir(work_folder_modified); ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex'); os.chdir(current_dir)
            os.chdir(work_folder_original); ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_original}.tex'); os.chdir(current_dir)
            os.chdir(work_folder_modified); ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error {main_file_modified}.tex'); os.chdir(current_dir)

            if mode != 'translate_zh':
                yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 使用latexdiff生成论文转化前后对比 ...', chatbot, history)
                print(f'latexdiff --encoding=utf8 --append-safecmd=subfile {work_folder_original}/{main_file_original}.tex {work_folder_modified}/{main_file_modified}.tex --flatten > {work_folder}/merge_diff.tex')
                ok = compile_latex_with_timeout(f'latexdiff --encoding=utf8 --append-safecmd=subfile {work_folder_original}/{main_file_original}.tex {work_folder_modified}/{main_file_modified}.tex --flatten > {work_folder}/merge_diff.tex')

                yield from update_ui_lastest_msg(f'尝试第 {n_fix}/{max_try} 次编译, 正在编译对比PDF ...', chatbot, history)
                os.chdir(work_folder); ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex'); os.chdir(current_dir)
                os.chdir(work_folder); ok = compile_latex_with_timeout(f'bibtex merge_diff.aux'); os.chdir(current_dir)
                os.chdir(work_folder); ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex'); os.chdir(current_dir)
                os.chdir(work_folder); ok = compile_latex_with_timeout(f'pdflatex -interaction=batchmode -file-line-error merge_diff.tex'); os.chdir(current_dir)

        # <--------------------->
        os.chdir(current_dir)

        # <---------- check the results ----------->
        results_ = ""
        original_pdf_success = os.path.exists(pj(work_folder_original, f'{main_file_original}.pdf'))
        modified_pdf_success = os.path.exists(pj(work_folder_modified, f'{main_file_modified}.pdf'))
        diff_pdf_success = os.path.exists(pj(work_folder, f'merge_diff.pdf'))
        results_ += f"原始PDF编译是否成功: {original_pdf_success};"
        results_ += f"转化PDF编译是否成功: {modified_pdf_success};"
        results_ += f"对比PDF编译是否成功: {diff_pdf_success};"
        yield from update_ui_lastest_msg(f'第{n_fix}次编译结束:<br/>{results_}...', chatbot, history)

        if diff_pdf_success:
            result_pdf = pj(work_folder, f'merge_diff.pdf')  # get pdf path (merge_diff.pdf is produced in work_folder)
            promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot)  # promote file to web UI
        if modified_pdf_success:
            yield from update_ui_lastest_msg(f'转化PDF编译已经成功, 即将退出 ...', chatbot, history)
            result_pdf = pj(work_folder_modified, f'{main_file_modified}.pdf')  # get pdf path
            if os.path.exists(pj(work_folder, '..', 'translation')):
                shutil.copyfile(result_pdf, pj(work_folder, '..', 'translation', 'translate_zh.pdf'))
            promote_file_to_downloadzone(result_pdf, rename_file=None, chatbot=chatbot)  # promote file to web UI
            return True  # success
        else:
            if n_fix >= max_try: break
            n_fix += 1
            can_retry, main_file_modified, buggy_lines = remove_buggy_lines(
                file_path=pj(work_folder_modified, f'{main_file_modified}.tex'),
                log_path=pj(work_folder_modified, f'{main_file_modified}.log'),
                tex_name=f'{main_file_modified}.tex',
                tex_name_pure=f'{main_file_modified}',
                n_fix=n_fix,
                work_folder_modified=work_folder_modified,
            )
            yield from update_ui_lastest_msg(f'由于最为关键的转化PDF编译失败, 将根据报错信息修正tex源文件并重试, 当前报错的latex代码处于第{buggy_lines}行 ...', chatbot, history)
            if not can_retry: break

    os.chdir(current_dir)
    return False  # failure

crazy_functions/live_audio/aliyunASR.py (new file, 256 lines)
@@ -0,0 +1,256 @@
import time, json, sys, struct
import numpy as np
from loguru import logger as logging
from scipy.io.wavfile import WAVE_FORMAT

def write_numpy_to_wave(filename, rate, data, add_header=False):
    """
    Write a NumPy array as a WAV file.
    """
    def _array_tofile(fid, data):
        # ravel gives a c-contiguous buffer
        fid.write(data.ravel().view('b').data)

    if hasattr(filename, 'write'):
        fid = filename
    else:
        fid = open(filename, 'wb')

    fs = rate

    try:
        dkind = data.dtype.kind
        if not (dkind == 'i' or dkind == 'f' or (dkind == 'u' and
                                                 data.dtype.itemsize == 1)):
            raise ValueError("Unsupported data type '%s'" % data.dtype)

        header_data = b''

        header_data += b'RIFF'
        header_data += b'\x00\x00\x00\x00'
        header_data += b'WAVE'

        # fmt chunk
        header_data += b'fmt '
        if dkind == 'f':
            format_tag = WAVE_FORMAT.IEEE_FLOAT
        else:
            format_tag = WAVE_FORMAT.PCM
        if data.ndim == 1:
            channels = 1
        else:
            channels = data.shape[1]
        bit_depth = data.dtype.itemsize * 8
        bytes_per_second = fs*(bit_depth // 8)*channels
        block_align = channels * (bit_depth // 8)

        fmt_chunk_data = struct.pack('<HHIIHH', format_tag, channels, fs,
                                     bytes_per_second, block_align, bit_depth)
        if not (dkind == 'i' or dkind == 'u'):
            # add cbSize field for non-PCM files
            fmt_chunk_data += b'\x00\x00'

        header_data += struct.pack('<I', len(fmt_chunk_data))
        header_data += fmt_chunk_data

        # fact chunk (non-PCM files)
        if not (dkind == 'i' or dkind == 'u'):
            header_data += b'fact'
            header_data += struct.pack('<II', 4, data.shape[0])

        # check data size (needs to be immediately before the data chunk)
        if ((len(header_data)-4-4) + (4+4+data.nbytes)) > 0xFFFFFFFF:
            raise ValueError("Data exceeds wave file size limit")
        if add_header:
            fid.write(header_data)
        # data chunk
        fid.write(b'data')
        fid.write(struct.pack('<I', data.nbytes))
        if data.dtype.byteorder == '>' or (data.dtype.byteorder == '=' and
                                           sys.byteorder == 'big'):
            data = data.byteswap()
        _array_tofile(fid, data)

        if add_header:
            # Determine the file size and write it into the correct
            # position at the start of the file.
            size = fid.tell()
            fid.seek(4)
            fid.write(struct.pack('<I', size-8))

    finally:
        if not hasattr(filename, 'write'):
            fid.close()
        else:
            fid.seek(0)
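
# Usage sketch (illustrative): dumping a one-second 16 kHz sine wave. With
# add_header=False (the default) only the raw data chunk is written, which is
# how audio_convertion_thread produces its .pcm scratch files.
#
#   t = np.linspace(0, 1, 16000, endpoint=False)
#   samples = (np.sin(2 * np.pi * 440 * t) * 32767).astype(np.int16)
#   write_numpy_to_wave("/tmp/tone.pcm", 16000, samples)
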
def is_speaker_speaking(vad, data, sample_rate):
    # Function to detect whether the speaker is speaking
    # The WebRTC VAD only accepts 16-bit mono PCM audio,
    # sampled at 8000, 16000, 32000 or 48000 Hz.
    # A frame must be either 10, 20, or 30 ms in duration:
    frame_duration = 30
    n_bit_each = int(sample_rate * frame_duration / 1000) * 2  # x2 because the audio is 16 bit (2 bytes per sample)
    res_list = []
    for t in range(len(data)):
        if t != 0 and t % n_bit_each == 0:
            res_list.append(vad.is_speech(data[t-n_bit_each:t], sample_rate))

    info = ''.join(['^' if r else '.' for r in res_list])
    info = info[:10]
    if any(res_list):
        return True, info
    else:
        return False, info
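
# Frame-size arithmetic behind the loop above (illustrative): at 16 kHz with a
# 30 ms frame, one frame spans 16000 * 0.030 = 480 samples, i.e. 960 bytes of
# 16-bit PCM, which is the stride n_bit_each used to slice `data`.
#
#   import webrtcvad
#   vad = webrtcvad.Vad(); vad.set_mode(1)
#   speaking, info = is_speaker_speaking(vad, pcm_bytes, 16000)  # pcm_bytes is hypothetical
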
class AliyunASR():

    def test_on_sentence_begin(self, message, *args):
        pass

    def test_on_sentence_end(self, message, *args):
        message = json.loads(message)
        self.parsed_sentence = message['payload']['result']
        self.event_on_entence_end.set()

    def test_on_start(self, message, *args):
        pass

    def test_on_error(self, message, *args):
        logging.error("on_error args=>{}".format(args))
        pass

    def test_on_close(self, *args):
        self.aliyun_service_ok = False
        pass

    def test_on_result_chg(self, message, *args):
        message = json.loads(message)
        self.parsed_text = message['payload']['result']
        self.event_on_result_chg.set()

    def test_on_completed(self, message, *args):
        pass

    def audio_convertion_thread(self, uuid):
        # capture audio in an asynchronous thread
        import nls  # pip install git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
        import tempfile
        from scipy import io
        from toolbox import get_conf
        from .audio_io import change_sample_rate
        from .audio_io import RealtimeAudioDistribution
        NEW_SAMPLERATE = 16000
        rad = RealtimeAudioDistribution()
        rad.clean_up()
        temp_folder = tempfile.gettempdir()
        TOKEN, APPKEY = get_conf('ALIYUN_TOKEN', 'ALIYUN_APPKEY')
        if len(TOKEN) == 0:
            TOKEN = self.get_token()
        self.aliyun_service_ok = True
        URL = "wss://nls-gateway.aliyuncs.com/ws/v1"
        sr = nls.NlsSpeechTranscriber(
            url=URL,
            token=TOKEN,
            appkey=APPKEY,
            on_sentence_begin=self.test_on_sentence_begin,
            on_sentence_end=self.test_on_sentence_end,
            on_start=self.test_on_start,
            on_result_changed=self.test_on_result_chg,
            on_completed=self.test_on_completed,
            on_error=self.test_on_error,
            on_close=self.test_on_close,
            callback_args=[uuid.hex]
        )
        timeout_limit_second = 20
        r = sr.start(aformat="pcm",
                     timeout=timeout_limit_second,
                     enable_intermediate_result=True,
                     enable_punctuation_prediction=True,
                     enable_inverse_text_normalization=True)

        import webrtcvad
        vad = webrtcvad.Vad()
        vad.set_mode(1)

        is_previous_frame_transmitted = False  # did anyone speak in the previous frame?
        previous_frame_data = None
        echo_cnt = 0      # after silence falls, keep sending n more chunks of audio to the server
        echo_cnt_max = 4  # after silence falls, keep sending n more chunks of audio to the server
        keep_alive_last_send_time = time.time()
        while not self.stop:
            # time.sleep(self.capture_interval)
            audio = rad.read(uuid.hex)
            if audio is not None:
                # convert to a pcm file
                temp_file = f'{temp_folder}/{uuid.hex}.pcm'
                dsdata = change_sample_rate(audio, rad.rate, NEW_SAMPLERATE)  # 48000 --> 16000
                write_numpy_to_wave(temp_file, NEW_SAMPLERATE, dsdata)
                # read the pcm binary
                with open(temp_file, "rb") as f: data = f.read()
                is_speaking, info = is_speaker_speaking(vad, data, NEW_SAMPLERATE)

                if is_speaking or echo_cnt > 0:
                    # the microphone is active / we are in the echo tail phase
                    echo_cnt -= 1
                    if not is_previous_frame_transmitted:  # the previous frame had no voice, but prepend it anyway
                        if previous_frame_data is not None: data = previous_frame_data + data
                    if is_speaking:
                        echo_cnt = echo_cnt_max
                    slices = zip(*(iter(data),) * 640)  # group the data into 640-byte chunks
                    for i in slices: sr.send_audio(bytes(i))
                    keep_alive_last_send_time = time.time()
                    is_previous_frame_transmitted = True
                else:
                    is_previous_frame_transmitted = False
                    echo_cnt = 0
                    # keep the connection alive: even in silence, send some audio chunks at intervals
                    if time.time() - keep_alive_last_send_time > timeout_limit_second/2:
                        slices = zip(*(iter(data),) * 640)  # group the data into 640-byte chunks
                        for i in slices: sr.send_audio(bytes(i))
                        keep_alive_last_send_time = time.time()
                        is_previous_frame_transmitted = True
                self.audio_shape = info
            else:
                time.sleep(0.1)

            if not self.aliyun_service_ok:
                self.stop = True
                self.stop_msg = 'Aliyun音频服务异常,请检查ALIYUN_TOKEN和ALIYUN_APPKEY是否过期。'
        r = sr.stop()

    def get_token(self):
        from toolbox import get_conf
        import json
        from aliyunsdkcore.request import CommonRequest
        from aliyunsdkcore.client import AcsClient
        AccessKey_ID, AccessKey_secret = get_conf('ALIYUN_ACCESSKEY', 'ALIYUN_SECRET')

        # create an AcsClient instance
        client = AcsClient(
            AccessKey_ID,
            AccessKey_secret,
            "cn-shanghai"
        )

        # create the request and set its parameters
        request = CommonRequest()
        request.set_method('POST')
        request.set_domain('nls-meta.cn-shanghai.aliyuncs.com')
        request.set_version('2019-02-28')
        request.set_action_name('CreateToken')

        token = None  # stays None if the request below fails
        try:
            response = client.do_action_with_exception(request)
            logging.info(response)
            jss = json.loads(response)
            if 'Token' in jss and 'Id' in jss['Token']:
                token = jss['Token']['Id']
                expireTime = jss['Token']['ExpireTime']
                logging.info("token = " + token)
                logging.info("expireTime = " + str(expireTime))
        except Exception as e:
            logging.error(e)

        return token

crazy_functions/live_audio/audio_io.py (new file, 51 lines)
@@ -0,0 +1,51 @@
import numpy as np
from scipy import interpolate

def Singleton(cls):
    _instance = {}

    def _singleton(*args, **kargs):
        if cls not in _instance:
            _instance[cls] = cls(*args, **kargs)
        return _instance[cls]

    return _singleton


@Singleton
class RealtimeAudioDistribution():
    def __init__(self) -> None:
        self.data = {}
        self.max_len = 1024*1024
        self.rate = 48000  # read-only: samples per second

    def clean_up(self):
        self.data = {}

    def feed(self, uuid, audio):
        self.rate, audio_ = audio
        # print('feed', len(audio_), audio_[-25:])
        if uuid not in self.data:
            self.data[uuid] = audio_
        else:
            new_arr = np.concatenate((self.data[uuid], audio_))
            if len(new_arr) > self.max_len: new_arr = new_arr[-self.max_len:]
            self.data[uuid] = new_arr

    def read(self, uuid):
        if uuid in self.data:
            res = self.data.pop(uuid)
            # print('\r read-', len(res), '-', max(res), end='', flush=True)
        else:
            res = None
        return res

def change_sample_rate(audio, old_sr, new_sr):
    duration = audio.shape[0] / old_sr

    time_old = np.linspace(0, duration, audio.shape[0])
    time_new = np.linspace(0, duration, int(audio.shape[0] * new_sr / old_sr))

    interpolator = interpolate.interp1d(time_old, audio.T)
    new_audio = interpolator(time_new).T
    return new_audio.astype(np.int16)
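
# Usage sketch (illustrative): linear-interpolation resampling from the 48 kHz
# capture rate down to the 16 kHz that the Aliyun transcriber expects.
#
#   mic_audio = np.zeros(48000, dtype=np.int16)   # one hypothetical second at 48 kHz
#   asr_audio = change_sample_rate(mic_audio, 48000, 16000)
#   # asr_audio.shape -> (16000,)
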

crazy_functions/media_fns/get_media.py (new file, 43 lines)
@@ -0,0 +1,43 @@
from toolbox import update_ui, get_conf, promote_file_to_downloadzone, update_ui_latest_msg, generate_file_link
from shared_utils.docker_as_service_api import stream_daas
from shared_utils.docker_as_service_api import DockerServiceApiComModel
import random

def download_video(video_id, only_audio, user_name, chatbot, history):
    from toolbox import get_log_folder
    chatbot.append([None, "Processing..."])
    yield from update_ui(chatbot, history)
    client_command = f'{video_id} --audio-only' if only_audio else video_id
    server_urls = get_conf('DAAS_SERVER_URLS')
    server_url = random.choice(server_urls)
    docker_service_api_com_model = DockerServiceApiComModel(client_command=client_command)
    save_file_dir = get_log_folder(user_name, plugin_name='media_downloader')
    for output_manifest in stream_daas(docker_service_api_com_model, server_url, save_file_dir):
        status_buf = ""
        status_buf += "DaaS message: \n\n"
        status_buf += output_manifest['server_message'].replace('\n', '<br/>')
        status_buf += "\n\n"
        status_buf += "DaaS standard error: \n\n"
        status_buf += output_manifest['server_std_err'].replace('\n', '<br/>')
        status_buf += "\n\n"
        status_buf += "DaaS standard output: \n\n"
        status_buf += output_manifest['server_std_out'].replace('\n', '<br/>')
        status_buf += "\n\n"
        status_buf += "DaaS file attach: \n\n"
        status_buf += str(output_manifest['server_file_attach'])
        yield from update_ui_latest_msg(status_buf, chatbot, history)

    return output_manifest['server_file_attach']


def search_videos(keywords):
    from toolbox import get_log_folder
    client_command = keywords
    server_urls = get_conf('DAAS_SERVER_URLS')
    server_url = random.choice(server_urls)
    server_url = server_url.replace('stream', 'search')
    docker_service_api_com_model = DockerServiceApiComModel(client_command=client_command)
    save_file_dir = get_log_folder("default_user", plugin_name='media_downloader')
    for output_manifest in stream_daas(docker_service_api_com_model, server_url, save_file_dir):
        return output_manifest['server_message']

crazy_functions/multi_stage/multi_stage_utils.py (new file, 93 lines)
@@ -0,0 +1,93 @@
from pydantic import BaseModel, Field
|
||||
from typing import List
|
||||
from toolbox import update_ui_latest_msg, disable_auto_promotion
|
||||
from toolbox import CatchException, update_ui, get_conf, select_api_key, get_log_folder
|
||||
from request_llms.bridge_all import predict_no_ui_long_connection
|
||||
from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
|
||||
import time
|
||||
import pickle
|
||||
|
||||
def have_any_recent_upload_files(chatbot):
|
||||
_5min = 5 * 60
|
||||
if not chatbot: return False # chatbot is None
|
||||
most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
|
||||
if not most_recent_uploaded: return False # most_recent_uploaded is None
|
||||
if time.time() - most_recent_uploaded["time"] < _5min: return True # most_recent_uploaded is new
|
||||
else: return False # most_recent_uploaded is too old
|
||||
|
||||
class GptAcademicState():
|
||||
def __init__(self):
|
||||
self.reset()
|
||||
|
||||
def reset(self):
|
||||
pass
|
||||
|
||||
def dump_state(self, chatbot):
|
||||
chatbot._cookies['plugin_state'] = pickle.dumps(self)
|
||||
|
||||
def set_state(self, chatbot, key, value):
|
||||
setattr(self, key, value)
|
||||
chatbot._cookies['plugin_state'] = pickle.dumps(self)
|
||||
|
||||
def get_state(chatbot, cls=None):
|
||||
state = chatbot._cookies.get('plugin_state', None)
|
||||
if state is not None: state = pickle.loads(state)
|
||||
elif cls is not None: state = cls()
|
||||
else: state = GptAcademicState()
|
||||
state.chatbot = chatbot
|
||||
return state
|
||||
|
||||
|
||||
class GptAcademicGameBaseState():
|
||||
"""
|
||||
1. first init: __init__ ->
|
||||
"""
|
||||
def init_game(self, chatbot, lock_plugin):
|
||||
self.plugin_name = None
|
||||
self.callback_fn = None
|
||||
self.delete_game = False
|
||||
self.step_cnt = 0
|
||||
|
||||
def lock_plugin(self, chatbot):
|
||||
if self.callback_fn is None:
|
||||
raise ValueError("callback_fn is None")
|
||||
chatbot._cookies['lock_plugin'] = self.callback_fn
|
||||
self.dump_state(chatbot)
|
||||
|
||||
def get_plugin_name(self):
|
||||
if self.plugin_name is None:
|
||||
raise ValueError("plugin_name is None")
|
||||
return self.plugin_name
|
||||
|
||||
def dump_state(self, chatbot):
|
||||
chatbot._cookies[f'plugin_state/{self.get_plugin_name()}'] = pickle.dumps(self)
|
||||
|
||||
def set_state(self, chatbot, key, value):
|
||||
setattr(self, key, value)
|
||||
chatbot._cookies[f'plugin_state/{self.get_plugin_name()}'] = pickle.dumps(self)
|
||||
|
||||
@staticmethod
|
||||
def sync_state(chatbot, llm_kwargs, cls, plugin_name, callback_fn, lock_plugin=True):
|
||||
state = chatbot._cookies.get(f'plugin_state/{plugin_name}', None)
|
||||
if state is not None:
|
||||
state = pickle.loads(state)
|
||||
else:
|
||||
state = cls()
|
||||
state.init_game(chatbot, lock_plugin)
|
||||
state.plugin_name = plugin_name
|
||||
state.llm_kwargs = llm_kwargs
|
||||
state.chatbot = chatbot
|
||||
state.callback_fn = callback_fn
|
||||
return state
|
||||
|
||||
def continue_game(self, prompt, chatbot, history):
|
||||
# 游戏主体
|
||||
yield from self.step(prompt, chatbot, history)
|
||||
self.step_cnt += 1
|
||||
# 保存状态,收尾
|
||||
self.dump_state(chatbot)
|
||||
# 如果游戏结束,清理
|
||||
if self.delete_game:
|
||||
chatbot._cookies['lock_plugin'] = None
|
||||
chatbot._cookies[f'plugin_state/{self.get_plugin_name()}'] = None
|
||||
yield from update_ui(chatbot=chatbot, history=history)
|
||||
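# --- Editor's note: illustrative usage sketch, not part of the original file ---
# A minimal example of how a plugin could persist state across calls with
# GptAcademicState. `DemoState` and `demo_plugin` are hypothetical; the only
# assumption is a `chatbot` object carrying a `_cookies` dict, which is what
# the class above relies on.
#
# class DemoState(GptAcademicState):
#     def reset(self):
#         self.round = 0
#
# def demo_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, *args):
#     state = DemoState.get_state(chatbot, cls=DemoState)   # load or create
#     state.set_state(chatbot, 'round', state.round + 1)    # mutate and persist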
126
crazy_functions/pdf_fns/breakdown_txt.py
Normal file
@@ -0,0 +1,126 @@
from crazy_functions.ipc_fns.mp import run_in_subprocess_with_timeout
from loguru import logger

def force_breakdown(txt, limit, get_token_fn):
    """ When the text cannot be split at punctuation or blank lines, fall back to brute-force cutting.
    """
    for i in reversed(range(len(txt))):
        if get_token_fn(txt[:i]) < limit:
            return txt[:i], txt[i:]
    return "Tiktoken未知错误", "Tiktoken未知错误"


def maintain_storage(remain_txt_to_cut, remain_txt_to_cut_storage):
    """ To speed things up we use a special trick: when remain_txt_to_cut grows beyond `_max`,
    everything after `_max` is parked in remain_txt_to_cut_storage;
    when remain_txt_to_cut shrinks below `_min`, part of remain_txt_to_cut_storage is taken back out.
    """
    _min = int(5e4)
    _max = int(1e5)
    # print(len(remain_txt_to_cut), len(remain_txt_to_cut_storage))
    if len(remain_txt_to_cut) < _min and len(remain_txt_to_cut_storage) > 0:
        remain_txt_to_cut = remain_txt_to_cut + remain_txt_to_cut_storage
        remain_txt_to_cut_storage = ""
    if len(remain_txt_to_cut) > _max:
        remain_txt_to_cut_storage = remain_txt_to_cut[_max:] + remain_txt_to_cut_storage
        remain_txt_to_cut = remain_txt_to_cut[:_max]
    return remain_txt_to_cut, remain_txt_to_cut_storage
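# Editor's note (illustration, not in the original file): the buffer above keeps
# the working text between 5e4 and 1e5 characters so that the tokenizer calls in
# cut() never have to scan the whole document. A quick worked example of the
# invariant it maintains:
#
#   work, storage = maintain_storage("x" * 250_000, "")
#   assert len(work) == 100_000 and len(storage) == 150_000   # overflow parked
#   work, storage = maintain_storage("x" * 10_000, storage)
#   assert len(work) == 100_000 and len(storage) == 60_000    # refilled, overflow re-parked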


def cut(limit, get_token_fn, txt_tocut, must_break_at_empty_line, break_anyway=False):
    """ Text segmentation.
    """
    res = []
    total_len = len(txt_tocut)
    fin_len = 0
    remain_txt_to_cut = txt_tocut
    remain_txt_to_cut_storage = ""
    # to speed things up, park everything beyond `_max` in remain_txt_to_cut_storage
    remain_txt_to_cut, remain_txt_to_cut_storage = maintain_storage(remain_txt_to_cut, remain_txt_to_cut_storage)

    while True:
        if get_token_fn(remain_txt_to_cut) <= limit:
            # the remaining text is within the token limit, no further cutting needed
            res.append(remain_txt_to_cut); fin_len+=len(remain_txt_to_cut)
            break
        else:
            # the remaining text exceeds the token limit, so cut
            lines = remain_txt_to_cut.split('\n')

            # estimate a cut position
            estimated_line_cut = limit / get_token_fn(remain_txt_to_cut) * len(lines)
            estimated_line_cut = int(estimated_line_cut)

            # search backwards from the estimate for a suitable cut offset (cnt)
            cnt = 0
            for cnt in reversed(range(estimated_line_cut)):
                if must_break_at_empty_line:
                    # first try to cut at a blank line (\n\n)
                    if lines[cnt] != "":
                        continue
                prev = "\n".join(lines[:cnt])
                post = "\n".join(lines[cnt:])
                if get_token_fn(prev) < limit:
                    break

            if cnt == 0:
                # no suitable cut position was found
                if break_anyway:
                    # brute-force cutting is allowed
                    prev, post = force_breakdown(remain_txt_to_cut, limit, get_token_fn)
                else:
                    # not allowed, raise instead
                    raise RuntimeError(f"存在一行极长的文本!{remain_txt_to_cut}")

            # append to the result list
            res.append(prev); fin_len+=len(prev)
            # prepare the next iteration
            remain_txt_to_cut = post
            remain_txt_to_cut, remain_txt_to_cut_storage = maintain_storage(remain_txt_to_cut, remain_txt_to_cut_storage)
            process = fin_len/total_len
            logger.info(f'正在文本切分 {int(process*100)}%')
            if len(remain_txt_to_cut.strip()) == 0:
                break
    return res


def breakdown_text_to_satisfy_token_limit_(txt, limit, llm_model="gpt-3.5-turbo"):
    """ Try several strategies in turn to split the text so that every fragment satisfies the token limit.
    """
    from request_llms.bridge_all import model_info
    enc = model_info[llm_model]['tokenizer']
    def get_token_fn(txt): return len(enc.encode(txt, disallowed_special=()))
    try:
        # attempt 1: cut at double blank lines (\n\n)
        return cut(limit, get_token_fn, txt, must_break_at_empty_line=True)
    except RuntimeError:
        try:
            # attempt 2: cut at single newlines (\n)
            return cut(limit, get_token_fn, txt, must_break_at_empty_line=False)
        except RuntimeError:
            try:
                # attempt 3: cut at English periods (.)
                res = cut(limit, get_token_fn, txt.replace('.', '。\n'), must_break_at_empty_line=False)  # the Chinese full stop here is deliberate; it serves as a marker
                return [r.replace('。\n', '.') for r in res]
            except RuntimeError as e:
                try:
                    # attempt 4: cut at Chinese full stops (。)
                    res = cut(limit, get_token_fn, txt.replace('。', '。。\n'), must_break_at_empty_line=False)
                    return [r.replace('。。\n', '。') for r in res]
                except RuntimeError as e:
                    # attempt 5: out of options, cut anywhere
                    return cut(limit, get_token_fn, txt, must_break_at_empty_line=False, break_anyway=True)


breakdown_text_to_satisfy_token_limit = run_in_subprocess_with_timeout(breakdown_text_to_satisfy_token_limit_, timeout=60)

if __name__ == '__main__':
    from crazy_functions.crazy_utils import read_and_clean_pdf_text
    file_content, page_one = read_and_clean_pdf_text("build/assets/at.pdf")

    from request_llms.bridge_all import model_info
    for i in range(5):
        file_content += file_content

    logger.info(len(file_content))
    TOKEN_LIMIT_PER_FRAGMENT = 2500
    res = breakdown_text_to_satisfy_token_limit(file_content, TOKEN_LIMIT_PER_FRAGMENT)
171
crazy_functions/pdf_fns/parse_pdf.py
Normal file
@@ -0,0 +1,171 @@
from functools import lru_cache
from toolbox import gen_time_str
from toolbox import promote_file_to_downloadzone
from toolbox import write_history_to_file, promote_file_to_downloadzone
from toolbox import get_conf
from toolbox import ProxyNetworkActivate
from shared_utils.colorful import *
import requests
import random
import copy
import os
import math

class GROBID_OFFLINE_EXCEPTION(Exception): pass

def get_avail_grobid_url():
    GROBID_URLS = get_conf('GROBID_URLS')
    if len(GROBID_URLS) == 0: return None
    try:
        _grobid_url = random.choice(GROBID_URLS)  # random load balancing
        if _grobid_url.endswith('/'): _grobid_url = _grobid_url.rstrip('/')
        with ProxyNetworkActivate('Connect_Grobid'):
            res = requests.get(_grobid_url+'/api/isalive')
        if res.text=='true': return _grobid_url
        else: return None
    except:
        return None

@lru_cache(maxsize=32)
def parse_pdf(pdf_path, grobid_url):
    import scipdf  # pip install scipdf_parser
    if grobid_url.endswith('/'): grobid_url = grobid_url.rstrip('/')
    try:
        with ProxyNetworkActivate('Connect_Grobid'):
            article_dict = scipdf.parse_pdf_to_dict(pdf_path, grobid_url=grobid_url)
    except GROBID_OFFLINE_EXCEPTION:
        raise GROBID_OFFLINE_EXCEPTION("GROBID服务不可用,请修改config中的GROBID_URL,可修改成本地GROBID服务。")
    except:
        raise RuntimeError("解析PDF失败,请检查PDF是否损坏。")
    return article_dict


def produce_report_markdown(gpt_response_collection, meta, paper_meta_info, chatbot, fp, generated_conclusion_files):
    # -=-=-=-=-=-=-=-= write file 1: translation and original mixed -=-=-=-=-=-=-=-=
    res_path = write_history_to_file(meta + ["# Meta Translation" , paper_meta_info] + gpt_response_collection, file_basename=f"{gen_time_str()}translated_and_original.md", file_fullname=None)
    promote_file_to_downloadzone(res_path, rename_file=os.path.basename(res_path)+'.md', chatbot=chatbot)
    generated_conclusion_files.append(res_path)

    # -=-=-=-=-=-=-=-= write file 2: translated text only -=-=-=-=-=-=-=-=
    translated_res_array = []
    # track the current top-level section title
    last_section_name = ""
    for index, value in enumerate(gpt_response_collection):
        # pick the model responses (the odd indices)
        if index % 2 != 0:
            # extract the current English section title
            cur_section_name = gpt_response_collection[index-1].split('\n')[0].split(" Part")[0]
            # emit the section title once when it changes (at index 1 this is just the first section name)
            if cur_section_name != last_section_name:
                cur_value = cur_section_name + '\n'
                last_section_name = copy.deepcopy(cur_section_name)
            else:
                cur_value = ""
            # one more small tweak: re-title the current part, defaulting to the English title
            cur_value += value
            translated_res_array.append(cur_value)
    res_path = write_history_to_file(meta + ["# Meta Translation" , paper_meta_info] + translated_res_array,
                                     file_basename = f"{gen_time_str()}-translated_only.md",
                                     file_fullname = None,
                                     auto_caption = False)
    promote_file_to_downloadzone(res_path, rename_file=os.path.basename(res_path)+'.md', chatbot=chatbot)
    generated_conclusion_files.append(res_path)
    return res_path

def translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_files, TOKEN_LIMIT_PER_FRAGMENT, DST_LANG, plugin_kwargs={}):
    from crazy_functions.pdf_fns.report_gen_html import construct_html
    from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
    from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
    from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency

    prompt = "以下是一篇学术论文的基本信息:\n"
    # title
    title = article_dict.get('title', '无法获取 title'); prompt += f'title:{title}\n\n'
    # authors
    authors = article_dict.get('authors', '无法获取 authors')[:100]; prompt += f'authors:{authors}\n\n'
    # abstract
    abstract = article_dict.get('abstract', '无法获取 abstract'); prompt += f'abstract:{abstract}\n\n'
    # command
    prompt += f"请将题目和摘要翻译为{DST_LANG}。"
    meta = [f'# Title:\n\n', title, f'# Abstract:\n\n', abstract ]

    # single-threaded: fetch the paper's meta information
    paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=prompt,
        inputs_show_user=prompt,
        llm_kwargs=llm_kwargs,
        chatbot=chatbot, history=[],
        sys_prompt="You are an academic paper reader.",
    )

    # multi-threaded: translate
    inputs_array = []
    inputs_show_user_array = []

    # get_token_num
    from request_llms.bridge_all import model_info
    enc = model_info[llm_kwargs['llm_model']]['tokenizer']
    def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))

    def break_down(txt):
        raw_token_num = get_token_num(txt)
        if raw_token_num <= TOKEN_LIMIT_PER_FRAGMENT:
            return [txt]
        else:
            # raw_token_num > TOKEN_LIMIT_PER_FRAGMENT
            # find a smooth token limit to achieve even separation
            count = int(math.ceil(raw_token_num / TOKEN_LIMIT_PER_FRAGMENT))
            token_limit_smooth = raw_token_num // count + count
            return breakdown_text_to_satisfy_token_limit(txt, limit=token_limit_smooth, llm_model=llm_kwargs['llm_model'])
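    # Editor's note (illustrative arithmetic, not in the original file): the
    # "smooth token limit" spreads fragments evenly instead of leaving a tiny
    # last fragment. E.g. with raw_token_num = 2600 and
    # TOKEN_LIMIT_PER_FRAGMENT = 1024:
    #   count = ceil(2600 / 1024) = 3
    #   token_limit_smooth = 2600 // 3 + 3 = 869
    # so the text splits into three fragments of roughly 869 tokens each,
    # rather than 1024 + 1024 + 552.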

    for section in article_dict.get('sections'):
        if len(section['text']) == 0: continue
        section_frags = break_down(section['text'])
        for i, fragment in enumerate(section_frags):
            heading = section['heading']
            if len(section_frags) > 1: heading += f' Part-{i+1}'
            inputs_array.append(
                f"你需要翻译{heading}章节,内容如下: \n\n{fragment}"
            )
            inputs_show_user_array.append(
                f"# {heading}\n\n{fragment}"
            )

    gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
        inputs_array=inputs_array,
        inputs_show_user_array=inputs_show_user_array,
        llm_kwargs=llm_kwargs,
        chatbot=chatbot,
        history_array=[meta for _ in inputs_array],
        sys_prompt_array=[
            "请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" + plugin_kwargs.get("additional_prompt", "") for _ in inputs_array],
    )
    # -=-=-=-=-=-=-=-= write the Markdown file -=-=-=-=-=-=-=-=
    produce_report_markdown(gpt_response_collection, meta, paper_meta_info, chatbot, fp, generated_conclusion_files)

    # -=-=-=-=-=-=-=-= write the HTML file -=-=-=-=-=-=-=-=
    ch = construct_html()
    orig = ""
    trans = ""
    gpt_response_collection_html = copy.deepcopy(gpt_response_collection)
    for i,k in enumerate(gpt_response_collection_html):
        if i%2==0:
            gpt_response_collection_html[i] = inputs_show_user_array[i//2]
        else:
            # extract the current English section title
            cur_section_name = gpt_response_collection[i-1].split('\n')[0].split(" Part")[0]
            cur_value = cur_section_name + "\n" + gpt_response_collection_html[i]
            gpt_response_collection_html[i] = cur_value

    final = ["", "", "一、论文概况", "", "Abstract", paper_meta_info, "二、论文翻译", ""]
    final.extend(gpt_response_collection_html)
    for i, k in enumerate(final):
        if i%2==0:
            orig = k
        if i%2==1:
            trans = k
            ch.add_row(a=orig, b=trans)
    create_report_file_name = f"{os.path.basename(fp)}.trans.html"
    html_file = ch.save_file(create_report_file_name)
    generated_conclusion_files.append(html_file)
    promote_file_to_downloadzone(html_file, rename_file=os.path.basename(html_file), chatbot=chatbot)
26
crazy_functions/pdf_fns/parse_pdf_grobid.py
Normal file
@@ -0,0 +1,26 @@
import os
from toolbox import CatchException, report_exception, get_log_folder, gen_time_str, check_packages
from toolbox import update_ui, promote_file_to_downloadzone, update_ui_latest_msg, disable_auto_promotion
from toolbox import write_history_to_file, promote_file_to_downloadzone, get_conf, extract_archive
from crazy_functions.pdf_fns.parse_pdf import parse_pdf, translate_pdf

def 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url):
    import copy, json
    TOKEN_LIMIT_PER_FRAGMENT = 1024
    generated_conclusion_files = []
    generated_html_files = []
    DST_LANG = "中文"
    from crazy_functions.pdf_fns.report_gen_html import construct_html
    for index, fp in enumerate(file_manifest):
        chatbot.append(["当前进度:", f"正在连接GROBID服务,请稍候: {grobid_url}\n如果等待时间过长,请修改config中的GROBID_URL,可修改成本地GROBID服务。"]); yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
        article_dict = parse_pdf(fp, grobid_url)
        grobid_json_res = os.path.join(get_log_folder(), gen_time_str() + "grobid.json")
        with open(grobid_json_res, 'w+', encoding='utf8') as f:
            f.write(json.dumps(article_dict, indent=4, ensure_ascii=False))
        promote_file_to_downloadzone(grobid_json_res, chatbot=chatbot)
        if article_dict is None: raise RuntimeError("解析PDF失败,请检查PDF是否损坏。")
        yield from translate_pdf(article_dict, llm_kwargs, chatbot, fp, generated_conclusion_files, TOKEN_LIMIT_PER_FRAGMENT, DST_LANG, plugin_kwargs=plugin_kwargs)
    chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files)))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
111
crazy_functions/pdf_fns/parse_pdf_legacy.py
Normal file
@@ -0,0 +1,111 @@
from toolbox import get_log_folder
from toolbox import update_ui, promote_file_to_downloadzone
from toolbox import write_history_to_file, promote_file_to_downloadzone
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from crazy_functions.crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
from crazy_functions.crazy_utils import read_and_clean_pdf_text
from shared_utils.colorful import *
from loguru import logger
import os

def 解析PDF_简单拆解(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
    """
    Note: this function is deprecated! The new function lives in: crazy_functions/pdf_fns/parse_pdf.py
    """
    import copy
    TOKEN_LIMIT_PER_FRAGMENT = 1024
    generated_conclusion_files = []
    generated_html_files = []
    from crazy_functions.pdf_fns.report_gen_html import construct_html
    for index, fp in enumerate(file_manifest):
        # read the PDF file
        file_content, page_one = read_and_clean_pdf_text(fp)
        file_content = file_content.encode('utf-8', 'ignore').decode()   # avoid reading non-utf8 chars
        page_one = str(page_one).encode('utf-8', 'ignore').decode()      # avoid reading non-utf8 chars

        # recursively split the PDF file
        from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
        paper_fragments = breakdown_text_to_satisfy_token_limit(txt=file_content, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
        page_one_fragments = breakdown_text_to_satisfy_token_limit(txt=page_one, limit=TOKEN_LIMIT_PER_FRAGMENT//4, llm_model=llm_kwargs['llm_model'])

        # for better results, strip everything after the Introduction (if present)
        paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]

        # single-threaded: fetch the paper's meta information
        paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
            inputs=f"以下是一篇学术论文的基础信息,请从中提取出“标题”、“收录会议或期刊”、“作者”、“摘要”、“编号”、“作者邮箱”这六个部分。请用markdown格式输出,最后用中文翻译摘要部分。请提取:{paper_meta}",
            inputs_show_user=f"请从{fp}中提取出“标题”、“收录会议或期刊”等基本信息。",
            llm_kwargs=llm_kwargs,
            chatbot=chatbot, history=[],
            sys_prompt="Your job is to collect information from materials.",
        )

        # multi-threaded: translate
        gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
            inputs_array=[
                f"你需要翻译以下内容:\n{frag}" for frag in paper_fragments],
            inputs_show_user_array=[f"\n---\n 原文: \n\n {frag.replace('#', '')} \n---\n 翻译:\n " for frag in paper_fragments],
            llm_kwargs=llm_kwargs,
            chatbot=chatbot,
            history_array=[[paper_meta] for _ in paper_fragments],
            sys_prompt_array=[
                "请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" + plugin_kwargs.get("additional_prompt", "")
                for _ in paper_fragments],
            # max_workers=5  # the maximum parallel load that OpenAI allows
        )
        gpt_response_collection_md = copy.deepcopy(gpt_response_collection)
        # tidy up the report format
        for i,k in enumerate(gpt_response_collection_md):
            if i%2==0:
                gpt_response_collection_md[i] = f"\n\n---\n\n ## 原文[{i//2}/{len(gpt_response_collection_md)//2}]: \n\n {paper_fragments[i//2].replace('#', '')} \n\n---\n\n ## 翻译[{i//2}/{len(gpt_response_collection_md)//2}]:\n "
            else:
                gpt_response_collection_md[i] = gpt_response_collection_md[i]
        final = ["一、论文概况\n\n---\n\n", paper_meta_info.replace('# ', '### ') + '\n\n---\n\n', "二、论文翻译", ""]
        final.extend(gpt_response_collection_md)
        create_report_file_name = f"{os.path.basename(fp)}.trans.md"
        res = write_history_to_file(final, create_report_file_name)
        promote_file_to_downloadzone(res, chatbot=chatbot)

        # update the UI
        generated_conclusion_files.append(f'{get_log_folder()}/{create_report_file_name}')
        chatbot.append((f"{fp}完成了吗?", res))
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI

        # write html
        try:
            ch = construct_html()
            orig = ""
            trans = ""
            gpt_response_collection_html = copy.deepcopy(gpt_response_collection)
            for i,k in enumerate(gpt_response_collection_html):
                if i%2==0:
                    gpt_response_collection_html[i] = paper_fragments[i//2].replace('#', '')
                else:
                    gpt_response_collection_html[i] = gpt_response_collection_html[i]
            final = ["论文概况", paper_meta_info.replace('# ', '### '), "二、论文翻译", ""]
            final.extend(gpt_response_collection_html)
            for i, k in enumerate(final):
                if i%2==0:
                    orig = k
                if i%2==1:
                    trans = k
                    ch.add_row(a=orig, b=trans)
            create_report_file_name = f"{os.path.basename(fp)}.trans.html"
            generated_html_files.append(ch.save_file(create_report_file_name))
        except:
            from toolbox import trimmed_format_exc
            logger.error('writing html result failed:', trimmed_format_exc())

    # prepare the files for download
    for pdf_path in generated_conclusion_files:
        # rename the file
        rename_file = f'翻译-{os.path.basename(pdf_path)}'
        promote_file_to_downloadzone(pdf_path, rename_file=rename_file, chatbot=chatbot)
    for html_path in generated_html_files:
        # rename the file
        rename_file = f'翻译-{os.path.basename(html_path)}'
        promote_file_to_downloadzone(html_path, rename_file=rename_file, chatbot=chatbot)
    chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files)))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
335
crazy_functions/pdf_fns/parse_pdf_via_doc2x.py
Normal file
@@ -0,0 +1,335 @@
from toolbox import get_log_folder, gen_time_str, get_conf
from toolbox import update_ui, promote_file_to_downloadzone
from toolbox import promote_file_to_downloadzone, extract_archive
from toolbox import generate_file_link, zip_folder
from crazy_functions.crazy_utils import get_files_from_everything
from shared_utils.colorful import *
from loguru import logger
import os
import requests
import time


def retry_request(max_retries=3, delay=3):
    """
    Decorator for retrying HTTP requests

    Args:
        max_retries: Maximum number of retry attempts
        delay: Delay between retries in seconds
    """

    def decorator(func):
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    if attempt < max_retries - 1:
                        logger.error(
                            f"Request failed, retrying... ({attempt + 1}/{max_retries}) Error: {e}"
                        )
                        time.sleep(delay)
                        continue
                    raise e
            return None

        return wrapper

    return decorator
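# Editor's note: a minimal usage sketch for the decorator above (hypothetical
# endpoint, not part of the original file). Any exception raised inside the
# wrapped function triggers up to `max_retries` attempts, `delay` seconds apart.
#
# @retry_request(max_retries=5, delay=1)
# def fetch_status():
#     return requests.get("https://example.com/api/status", timeout=10)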


@retry_request()
def make_request(method, url, **kwargs):
    """
    Make HTTP request with retry mechanism
    """
    return requests.request(method, url, **kwargs)


def doc2x_api_response_status(response, uid=""):
    """
    Check the status of a Doc2x API response

    Args:
        response: Response object from the Doc2x API
    """
    response_json = response.json()
    response_data = response_json.get("data", {})
    code = response_json.get("code", "Unknown")
    msg = response_data.get("message", response_json)
    trace_id = response.headers.get("trace-id", "Failed to get trace-id")
    if response.status_code != 200:
        raise RuntimeError(
            f"Doc2x returned an error:\nTrace ID: {trace_id} {uid}\n{response.status_code} - {response_json}"
        )
    if code in ["parse_page_limit_exceeded", "parse_concurrency_limit"]:
        raise RuntimeError(
            f"Reached the limit of Doc2x:\nTrace ID: {trace_id} {uid}\n{code} - {msg}"
        )
    if code not in ["ok", "success"]:
        raise RuntimeError(
            f"Doc2x returned an error:\nTrace ID: {trace_id} {uid}\n{code} - {msg}"
        )
    return response_data


def 解析PDF_DOC2X_转Latex(pdf_file_path):
    zip_file_path, unzipped_folder = 解析PDF_DOC2X(pdf_file_path, format="tex")
    return unzipped_folder


def 解析PDF_DOC2X(pdf_file_path, format="tex"):
    """
    format: 'tex', 'md', 'docx'
    """

    DOC2X_API_KEY = get_conf("DOC2X_API_KEY")
    latex_dir = get_log_folder(plugin_name="pdf_ocr_latex")
    markdown_dir = get_log_folder(plugin_name="pdf_ocr")
    doc2x_api_key = DOC2X_API_KEY

    # < ------ step 1: pre-upload to get a URL, then upload the file ------ >
    logger.info("Doc2x upload: pre-upload to obtain the target URL")
    res = make_request(
        "POST",
        "https://v2.doc2x.noedgeai.com/api/v2/parse/preupload",
        headers={"Authorization": "Bearer " + doc2x_api_key},
        timeout=15,
    )
    res_data = doc2x_api_response_status(res)
    upload_url = res_data["url"]
    uuid = res_data["uid"]

    logger.info("Doc2x upload: uploading the file")
    with open(pdf_file_path, "rb") as file:
        res = make_request("PUT", upload_url, data=file, timeout=60)
        res.raise_for_status()

    # < ------ step 2: poll until parsing finishes ------ >
    logger.info("Doc2x processing: polling for status")
    params = {"uid": uuid}
    max_attempts = 60
    attempt = 0
    while attempt < max_attempts:
        res = make_request(
            "GET",
            "https://v2.doc2x.noedgeai.com/api/v2/parse/status",
            headers={"Authorization": "Bearer " + doc2x_api_key},
            params=params,
            timeout=15,
        )
        res_data = doc2x_api_response_status(res)
        if res_data["status"] == "success":
            break
        elif res_data["status"] == "processing":
            time.sleep(5)
            logger.info(f"Doc2x is processing at {res_data['progress']}%")
            attempt += 1
        else:
            raise RuntimeError(f"Doc2x returned an error: {res_data}")
    if attempt >= max_attempts:
        raise RuntimeError("Doc2x processing timeout after maximum attempts")

    # < ------ step 3: submit the conversion ------ >
    logger.info("Doc2x step 3: submitting the conversion")
    data = {
        "uid": uuid,
        "to": format,
        "formula_mode": "dollar",
        "filename": "output"
    }
    res = make_request(
        "POST",
        "https://v2.doc2x.noedgeai.com/api/v2/convert/parse",
        headers={"Authorization": "Bearer " + doc2x_api_key},
        json=data,
        timeout=15,
    )
    doc2x_api_response_status(res, uid=f"uid: {uuid}")

    # < ------ step 4: wait for the result ------ >
    logger.info("Doc2x step 4: waiting for the result")
    params = {"uid": uuid}
    max_attempts = 36
    attempt = 0
    while attempt < max_attempts:
        res = make_request(
            "GET",
            "https://v2.doc2x.noedgeai.com/api/v2/convert/parse/result",
            headers={"Authorization": "Bearer " + doc2x_api_key},
            params=params,
            timeout=15,
        )
        res_data = doc2x_api_response_status(res, uid=f"uid: {uuid}")
        if res_data["status"] == "success":
            break
        elif res_data["status"] == "processing":
            time.sleep(3)
            logger.info("Doc2x still processing to convert file")
            attempt += 1
    if attempt >= max_attempts:
        raise RuntimeError("Doc2x conversion timeout after maximum attempts")

    # < ------ step 5: final handling ------ >
    logger.info("Doc2x step 5: downloading the converted file")

    if format == "tex":
        target_path = latex_dir
    if format == "md":
        target_path = markdown_dir
    os.makedirs(target_path, exist_ok=True)

    max_attempt = 3
    # < ------ download ------ >
    for attempt in range(max_attempt):
        try:
            result_url = res_data["url"]
            res = make_request("GET", result_url, timeout=60)
            zip_path = os.path.join(target_path, gen_time_str() + ".zip")
            unzip_path = os.path.join(target_path, gen_time_str())
            if res.status_code == 200:
                with open(zip_path, "wb") as f:
                    f.write(res.content)
                break    # download succeeded; stop retrying
            else:
                raise RuntimeError(f"Doc2x returned an error: {res.json()}")
        except Exception as e:
            if attempt < max_attempt - 1:
                logger.error(f"Failed to download uid = {uuid} file, retrying... {e}")
                time.sleep(3)
                continue
            else:
                raise e

    # < ------ unzip ------ >
    import zipfile
    with zipfile.ZipFile(zip_path, "r") as zip_ref:
        zip_ref.extractall(unzip_path)
    return zip_path, unzip_path
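# Editor's note: an illustrative call of the pipeline above (hypothetical file
# path; assumes DOC2X_API_KEY is configured). The five steps run end to end:
# pre-upload, poll parse status, submit conversion, poll conversion status,
# then download and unzip the result.
#
# zip_path, unzip_path = 解析PDF_DOC2X("build/assets/paper.pdf", format="md")
# print(f"converted markdown extracted to: {unzip_path}")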


def 解析PDF_DOC2X_单文件(
    fp,
    project_folder,
    llm_kwargs,
    plugin_kwargs,
    chatbot,
    history,
    system_prompt,
    DOC2X_API_KEY,
    user_request,
):
    def pdf2markdown(filepath):
        chatbot.append((None, f"Doc2x 解析中"))
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI

        md_zip_path, unzipped_folder = 解析PDF_DOC2X(filepath, format="md")

        promote_file_to_downloadzone(md_zip_path, chatbot=chatbot)
        chatbot.append((None, f"完成解析 {md_zip_path} ..."))
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
        return md_zip_path

    def deliver_to_markdown_plugin(md_zip_path, user_request):
        from crazy_functions.Markdown_Translate import Markdown英译中
        import shutil, re

        time_tag = gen_time_str()
        target_path_base = get_log_folder(chatbot.get_user())
        file_origin_name = os.path.basename(md_zip_path)
        this_file_path = os.path.join(target_path_base, file_origin_name)
        os.makedirs(target_path_base, exist_ok=True)
        shutil.copyfile(md_zip_path, this_file_path)
        ex_folder = this_file_path + ".extract"
        extract_archive(file_path=this_file_path, dest_dir=ex_folder)

        # edit markdown files
        success, file_manifest, project_folder = get_files_from_everything(
            ex_folder, type=".md"
        )
        for generated_fp in file_manifest:
            # fix some formula issues
            with open(generated_fp, "r", encoding="utf8") as f:
                content = f.read()
            # replace the \[ \] formula delimiters with $$
            content = content.replace(r"\[", r"$$").replace(r"\]", r"$$")
            # replace the \( \) formula delimiters with $
            content = content.replace(r"\(", r"$").replace(r"\)", r"$")
            content = content.replace("```markdown", "\n").replace("```", "\n")
            with open(generated_fp, "w", encoding="utf8") as f:
                f.write(content)
            promote_file_to_downloadzone(generated_fp, chatbot=chatbot)
            yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI

            # generate an online preview html
            file_name = "在线预览翻译(原文)" + gen_time_str() + ".html"
            preview_fp = os.path.join(ex_folder, file_name)
            from shared_utils.advanced_markdown_format import (
                markdown_convertion_for_file,
            )

            with open(generated_fp, "r", encoding="utf-8") as f:
                md = f.read()
                # # non-standard tables in the Markdown need an emoji before them for the formulas to render
                # md = re.sub(r'^<table>', r'.<table>', md, flags=re.MULTILINE)
            html = markdown_convertion_for_file(md)
            with open(preview_fp, "w", encoding="utf-8") as f:
                f.write(html)
            chatbot.append([None, f"生成在线预览:{generate_file_link([preview_fp])}"])
            promote_file_to_downloadzone(preview_fp, chatbot=chatbot)

        chatbot.append((None, f"调用Markdown插件 {ex_folder} ..."))
        plugin_kwargs["markdown_expected_output_dir"] = ex_folder

        translated_f_name = "translated_markdown.md"
        generated_fp = plugin_kwargs["markdown_expected_output_path"] = os.path.join(
            ex_folder, translated_f_name
        )
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
        yield from Markdown英译中(
            ex_folder,
            llm_kwargs,
            plugin_kwargs,
            chatbot,
            history,
            system_prompt,
            user_request,
        )
        if os.path.exists(generated_fp):
            # fix some formula issues
            with open(generated_fp, "r", encoding="utf8") as f:
                content = f.read()
            content = content.replace("```markdown", "\n").replace("```", "\n")
            # non-standard tables in the Markdown need an emoji before them for the formulas to render
            # content = re.sub(r'^<table>', r'.<table>', content, flags=re.MULTILINE)
            with open(generated_fp, "w", encoding="utf8") as f:
                f.write(content)
            # generate an online preview html
            file_name = "在线预览翻译" + gen_time_str() + ".html"
            preview_fp = os.path.join(ex_folder, file_name)
            from shared_utils.advanced_markdown_format import (
                markdown_convertion_for_file,
            )

            with open(generated_fp, "r", encoding="utf-8") as f:
                md = f.read()
            html = markdown_convertion_for_file(md)
            with open(preview_fp, "w", encoding="utf-8") as f:
                f.write(html)
            promote_file_to_downloadzone(preview_fp, chatbot=chatbot)
            # generate an archive that also contains the images
            dest_folder = get_log_folder(chatbot.get_user())
            zip_name = "翻译后的带图文档.zip"
            zip_folder(
                source_folder=ex_folder, dest_folder=dest_folder, zip_name=zip_name
            )
            zip_fp = os.path.join(dest_folder, zip_name)
            promote_file_to_downloadzone(zip_fp, chatbot=chatbot)
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI

    md_zip_path = yield from pdf2markdown(fp)
    yield from deliver_to_markdown_plugin(md_zip_path, user_request)


def 解析PDF_基于DOC2X(file_manifest, *args):
    for index, fp in enumerate(file_manifest):
        yield from 解析PDF_DOC2X_单文件(fp, *args)
    return
85
crazy_functions/pdf_fns/parse_word.py
Normal file
@@ -0,0 +1,85 @@
from crazy_functions.crazy_utils import read_and_clean_pdf_text, get_files_from_everything
import os
import re

def extract_text_from_files(txt, chatbot, history):
    """
    Find pdf/md/word files, extract their text content, and return it together with a status.

    Args:
        chatbot: chatbot inputs and outputs (handle of the UI dialog window, used for data-flow visualization)
        history (list): list of chat history

    Returns:
        whether any file exists (bool)
        final_result (list): the text contents
        page_one (list): first-page content / abstract per file
        file_manifest (list): the file paths
        exception (string): information that requires manual handling by the user; stays empty if nothing went wrong
    """

    final_result = []
    page_one = []
    file_manifest = []
    exception = ""

    if txt == "":
        final_result.append(txt)
        return False, final_result, page_one, file_manifest, exception    # if the input area holds no file, return its content directly

    # look for files referenced in the input area
    file_pdf, pdf_manifest, folder_pdf = get_files_from_everything(txt, '.pdf')
    file_md, md_manifest, folder_md = get_files_from_everything(txt, '.md')
    file_word, word_manifest, folder_word = get_files_from_everything(txt, '.docx')
    file_doc, doc_manifest, folder_doc = get_files_from_everything(txt, '.doc')

    if file_doc:
        exception = "word"
        return False, final_result, page_one, file_manifest, exception

    file_num = len(pdf_manifest) + len(md_manifest) + len(word_manifest)
    if file_num == 0:
        final_result.append(txt)
        return False, final_result, page_one, file_manifest, exception    # if the input area holds no file, return its content directly

    if file_pdf:
        try:    # try to import the dependency; if it is missing, suggest how to install it
            import fitz
        except:
            exception = "pdf"
            return False, final_result, page_one, file_manifest, exception
        for index, fp in enumerate(pdf_manifest):
            file_content, pdf_one = read_and_clean_pdf_text(fp)    # (try to) split the PDF by section
            file_content = file_content.encode('utf-8', 'ignore').decode()    # avoid reading non-utf8 chars
            pdf_one = str(pdf_one).encode('utf-8', 'ignore').decode()         # avoid reading non-utf8 chars
            final_result.append(file_content)
            page_one.append(pdf_one)
            file_manifest.append(os.path.relpath(fp, folder_pdf))

    if file_md:
        for index, fp in enumerate(md_manifest):
            with open(fp, 'r', encoding='utf-8', errors='replace') as f:
                file_content = f.read()
            file_content = file_content.encode('utf-8', 'ignore').decode()
            headers = re.findall(r'^#\s(.*)$', file_content, re.MULTILINE)    # extract the markdown top-level headers as the abstract
            if len(headers) > 0:
                page_one.append("\n".join(headers))    # join all headers with newlines
            else:
                page_one.append("")
            final_result.append(file_content)
            file_manifest.append(os.path.relpath(fp, folder_md))

    if file_word:
        try:    # try to import the dependency; if it is missing, suggest how to install it
            from docx import Document
        except:
            exception = "word_pip"
            return False, final_result, page_one, file_manifest, exception
        for index, fp in enumerate(word_manifest):
            doc = Document(fp)
            file_content = '\n'.join([p.text for p in doc.paragraphs])
            file_content = file_content.encode('utf-8', 'ignore').decode()
            page_one.append(file_content[:200])
            final_result.append(file_content)
            file_manifest.append(os.path.relpath(fp, folder_word))

    return True, final_result, page_one, file_manifest, exception
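# Editor's note: an illustrative call (hypothetical input path), showing how
# the 5-tuple return value documented above is unpacked. `exception` is one of
# "", "word" (legacy .doc found), "pdf" (fitz missing) or "word_pip"
# (python-docx missing).
#
# ok, texts, first_pages, manifest, exception = extract_text_from_files(
#     "private_upload/default_user/some_folder", chatbot=None, history=[])
# if ok:
#     for path, text in zip(manifest, texts):
#         print(path, len(text))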
58
crazy_functions/pdf_fns/report_gen_html.py
Normal file
@@ -0,0 +1,58 @@
from toolbox import update_ui, get_conf, trimmed_format_exc, get_log_folder
import os


class construct_html():
    def __init__(self) -> None:
        self.html_string = ""

    def add_row(self, a, b):
        from toolbox import markdown_convertion
        template = """
            {
                primary_col: {
                    header: String.raw`__PRIMARY_HEADER__`,
                    msg: String.raw`__PRIMARY_MSG__`,
                },
                secondary_rol: {
                    header: String.raw`__SECONDARY_HEADER__`,
                    msg: String.raw`__SECONDARY_MSG__`,
                }
            },
        """
        def std(str):
            str = str.replace(r'`', r'&#96;')    # escape backticks so they cannot terminate the String.raw template
            if str.endswith("\\"): str += ' '
            if str.endswith("}"): str += ' '
            if str.endswith("$"): str += ' '
            return str

        template_ = template
        a_lines = a.split('\n')
        b_lines = b.split('\n')

        if len(a_lines) == 1 or len(a_lines[0]) > 50:
            template_ = template_.replace("__PRIMARY_HEADER__", std(a[:20]))
            template_ = template_.replace("__PRIMARY_MSG__", std(markdown_convertion(a)))
        else:
            template_ = template_.replace("__PRIMARY_HEADER__", std(a_lines[0]))
            template_ = template_.replace("__PRIMARY_MSG__", std(markdown_convertion('\n'.join(a_lines[1:]))))

        if len(b_lines) == 1 or len(b_lines[0]) > 50:
            template_ = template_.replace("__SECONDARY_HEADER__", std(b[:20]))
            template_ = template_.replace("__SECONDARY_MSG__", std(markdown_convertion(b)))
        else:
            template_ = template_.replace("__SECONDARY_HEADER__", std(b_lines[0]))
            template_ = template_.replace("__SECONDARY_MSG__", std(markdown_convertion('\n'.join(b_lines[1:]))))
        self.html_string += template_

    def save_file(self, file_name):
        from toolbox import get_log_folder
        with open('crazy_functions/pdf_fns/report_template.html', 'r', encoding='utf8') as f:
            html_template = f.read()
        html_template = html_template.replace("__TF_ARR__", self.html_string)
        with open(os.path.join(get_log_folder(), file_name), 'w', encoding='utf8') as f:
            f.write(html_template.encode('utf-8', 'ignore').decode())
        return os.path.join(get_log_folder(), file_name)
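# Editor's note: a minimal usage sketch (illustrative strings, not part of the
# original file). Each add_row() call appends one primary/secondary record to
# the JS template buffer; save_file() then splices all records into
# report_template.html and writes the result under the log folder.
#
# ch = construct_html()
# ch.add_row(a="Original paragraph ...", b="Translated paragraph ...")
# html_path = ch.save_file("demo.trans.html")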
104
crazy_functions/pdf_fns/report_template.html
Normal file
File diff suppressed because one or more lines are too long
73
crazy_functions/pdf_fns/report_template_v2.html
Normal file
@@ -0,0 +1,73 @@
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">

<head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    <title>GPT-Academic 翻译报告书</title>
    <style>
        .centered-a {
            color: red;
            text-align: center;
            margin-bottom: 2%;
            font-size: 1.5em;
        }
        .centered-b {
            color: red;
            text-align: center;
            margin-top: 10%;
            margin-bottom: 20%;
            font-size: 1.5em;
        }
        .centered-c {
            color: rgba(255, 0, 0, 0);
            text-align: center;
            margin-top: 2%;
            margin-bottom: 20%;
            font-size: 7em;
        }
    </style>
    <script>
        // Configure MathJax settings
        MathJax = {
            tex: {
                inlineMath: [
                    ['$', '$'],
                    ['\\(', '\\)']
                ]
            }
        }
        addEventListener('zero-md-rendered', () => {MathJax.typeset(); console.log('MathJax typeset!');})
    </script>
    <!-- Load MathJax library -->
    <script src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js"></script>
    <script
        type="module"
        src="https://cdn.jsdelivr.net/gh/zerodevx/zero-md@2/dist/zero-md.min.js"
    ></script>

</head>

<body>
    <div class="test_temp1" style="width:10%; height: 500px; float:left;">
    </div>
    <div class="test_temp2" style="width:80%; height: 500px; float:left;">
        <!-- Simply set the `src` attribute to your MD file and win -->
        <div class="centered-a">
            请按Ctrl+S保存此页面,否则该页面可能在几分钟后失效。
        </div>
        <zero-md src="translated_markdown.md" no-shadow>
        </zero-md>
        <div class="centered-b">
            本报告由GPT-Academic开源项目生成,地址:https://github.com/binary-husky/gpt_academic。
        </div>
        <div class="centered-c">
            本报告由GPT-Academic开源项目生成,地址:https://github.com/binary-husky/gpt_academic。
        </div>
    </div>
    <div class="test_temp3" style="width:10%; height: 500px; float:left;">
    </div>

</body>

</html>
52
crazy_functions/plugin_template/plugin_class_template.py
Normal file
@@ -0,0 +1,52 @@
import os, json, base64
from pydantic import BaseModel, Field
from textwrap import dedent
from typing import List

class ArgProperty(BaseModel):  # PLUGIN_ARG_MENU
    title: str = Field(description="The title", default="")
    description: str = Field(description="The description", default="")
    default_value: str = Field(description="The default value", default="")
    type: str = Field(description="The type", default="")  # currently we support ['string', 'dropdown']
    options: List[str] = Field(default=[], description="List of options available for the argument")  # only used when type is 'dropdown'


class GptAcademicPluginTemplate():
    def __init__(self):
        # please note that the `execute` method may run in different threads;
        # thus you should not store any state in the plugin instance,
        # which may be accessed by multiple threads
        pass

    def define_arg_selection_menu(self):
        """
        An example as below:
        ```
        def define_arg_selection_menu(self):
            gui_definition = {
                "main_input":
                    ArgProperty(title="main input", description="description", default_value="default_value", type="string").model_dump_json(),
                "advanced_arg":
                    ArgProperty(title="advanced arguments", description="description", default_value="default_value", type="string").model_dump_json(),
                "additional_arg_01":
                    ArgProperty(title="additional", description="description", default_value="default_value", type="string").model_dump_json(),
            }
            return gui_definition
        ```
        """
        raise NotImplementedError("You need to implement this method in your plugin class")

    def get_js_code_for_generating_menu(self, btnName):
        define_arg_selection = self.define_arg_selection_menu()

        if len(define_arg_selection.keys()) > 8:
            raise ValueError("You can only have up to 8 arguments in the define_arg_selection")
        # if "main_input" not in define_arg_selection:
        #     raise ValueError("You must have a 'main_input' in the define_arg_selection")

        DEFINE_ARG_INPUT_INTERFACE = json.dumps(define_arg_selection)
        return base64.b64encode(DEFINE_ARG_INPUT_INTERFACE.encode('utf-8')).decode('utf-8')

    def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        raise NotImplementedError("You need to implement this method in your plugin class")
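# Editor's note: a hypothetical subclass sketch (not part of the original
# file), showing the two methods a plugin must provide. The dropdown `options`
# list is only consulted when type == 'dropdown'; the `execute` signature
# mirrors the base class above.
#
# class MyPlugin(GptAcademicPluginTemplate):
#     def define_arg_selection_menu(self):
#         return {
#             "main_input": ArgProperty(title="input", description="what to process",
#                                       default_value="", type="string").model_dump_json(),
#             "mode": ArgProperty(title="mode", description="processing mode",
#                                 default_value="fast", type="dropdown",
#                                 options=["fast", "thorough"]).model_dump_json(),
#         }
#
#     def execute(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
#         chatbot.append([txt, "done"])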
87
crazy_functions/prompts/internet.py
Normal file
@@ -0,0 +1,87 @@
SearchOptimizerPrompt="""作为一个网页搜索助手,你的任务是结合历史记录,从不同角度,为“原问题”生成个不同版本的“检索词”,从而提高网页检索的精度。生成的问题要求指向对象清晰明确,并与“原问题语言相同”。例如:
历史记录:
"
Q: 对话背景。
A: 当前对话是关于 Nginx 的介绍和在Ubuntu上的使用等。
"
原问题: 怎么下载
检索词: ["Nginx 下载","Ubuntu Nginx","Ubuntu安装Nginx"]
----------------
历史记录:
"
Q: 对话背景。
A: 当前对话是关于 Nginx 的介绍和使用等。
Q: 报错 "no connection"
A: 报错"no connection"可能是因为……
"
原问题: 怎么解决
检索词: ["Nginx报错"no connection" 解决","Nginx'no connection'报错 原因","Nginx提示'no connection'"]
----------------
历史记录:
"

"
原问题: 你知道 Python 么?
检索词: ["Python","Python 使用教程。","Python 特点和优势"]
----------------
历史记录:
"
Q: 列出Java的三种特点?
A: 1. Java 是一种编译型语言。
   2. Java 是一种面向对象的编程语言。
   3. Java 是一种跨平台的编程语言。
"
原问题: 介绍下第2点。
检索词: ["Java 面向对象特点","Java 面向对象编程优势。","Java 面向对象编程"]
----------------
现在有历史记录:
"
{history}
"
有其原问题: {query}
直接给出最多{num}个检索词,必须以json形式给出,不得有多余字符:
"""

SearchAcademicOptimizerPrompt="""作为一个学术论文搜索助手,你的任务是结合历史记录,从不同角度,为“原问题”生成个不同版本的“检索词”,从而提高学术论文检索的精度。生成的问题要求指向对象清晰明确,并与“原问题语言相同”。例如:
历史记录:
"
Q: 对话背景。
A: 当前对话是关于深度学习的介绍和在图像识别中的应用等。
"
原问题: 怎么下载相关论文
检索词: ["深度学习 图像识别 论文下载","图像识别 深度学习 研究论文","深度学习 图像识别 论文资源","Deep Learning Image Recognition Paper Download","Image Recognition Deep Learning Research Paper"]
----------------
历史记录:
"
Q: 对话背景。
A: 当前对话是关于深度学习的介绍和应用等。
Q: 报错 "模型不收敛"
A: 报错"模型不收敛"可能是因为……
"
原问题: 怎么解决
检索词: ["深度学习 模型不收敛 解决方案 论文","深度学习 模型不收敛 原因 研究","深度学习 模型不收敛 论文","Deep Learning Model Convergence Issue Solution Paper","Deep Learning Model Convergence Problem Research"]
----------------
历史记录:
"

"
原问题: 你知道 GAN 么?
检索词: ["生成对抗网络 论文","GAN 使用教程 论文","GAN 特点和优势 研究","Generative Adversarial Network Paper","GAN Usage Tutorial Paper"]
----------------
历史记录:
"
Q: 列出机器学习的三种应用?
A: 1. 机器学习在图像识别中的应用。
   2. 机器学习在自然语言处理中的应用。
   3. 机器学习在推荐系统中的应用。
"
原问题: 介绍下第2点。
检索词: ["机器学习 自然语言处理 应用 论文","机器学习 自然语言处理 研究","机器学习 NLP 应用 论文","Machine Learning Natural Language Processing Application Paper","Machine Learning NLP Research"]
----------------
现在有历史记录:
"
{history}
"
有其原问题: {query}
直接给出最多{num}个检索词,必须以json形式给出,不得有多余字符:
"""
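# Editor's note: an illustrative sketch (not part of the original file) of how
# these templates are meant to be filled before being sent to the LLM; the
# argument names are assumptions based on the {history}/{query}/{num}
# placeholders above.
#
# prompt = SearchOptimizerPrompt.format(
#     history="Q: 对话背景。\nA: 当前对话是关于 Nginx 的介绍和使用等。",
#     query="怎么下载",
#     num=3,
# )
# # The model is expected to answer with a JSON array of at most `num` queries.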
138
crazy_functions/rag_fns/llama_index_worker.py
Normal file
@@ -0,0 +1,138 @@
import atexit
from loguru import logger
from typing import List

from llama_index.core import Document
from llama_index.core.ingestion import run_transformations
from llama_index.core.schema import TextNode

from crazy_functions.rag_fns.vector_store_index import GptacVectorStoreIndex
from request_llms.embed_models.openai_embed import OpenAiEmbeddingModel

DEFAULT_QUERY_GENERATION_PROMPT = """\
Now, you have context information as below:
---------------------
{context_str}
---------------------
Answer the user request below (use the context information if necessary, otherwise you can ignore them):
---------------------
{query_str}
"""

QUESTION_ANSWER_RECORD = """\
{{
    "type": "This is a previous conversation with the user",
    "question": "{question}",
    "answer": "{answer}",
}}
"""


class SaveLoad():

    def does_checkpoint_exist(self, checkpoint_dir=None):
        import os, glob
        if checkpoint_dir is None: checkpoint_dir = self.checkpoint_dir
        if not os.path.exists(checkpoint_dir): return False
        if len(glob.glob(os.path.join(checkpoint_dir, "*.json"))) == 0: return False
        return True

    def save_to_checkpoint(self, checkpoint_dir=None):
        logger.info(f'saving vector store to: {checkpoint_dir}')
        if checkpoint_dir is None: checkpoint_dir = self.checkpoint_dir
        self.vs_index.storage_context.persist(persist_dir=checkpoint_dir)

    def load_from_checkpoint(self, checkpoint_dir=None):
        if checkpoint_dir is None: checkpoint_dir = self.checkpoint_dir
        if self.does_checkpoint_exist(checkpoint_dir=checkpoint_dir):
            logger.info('loading checkpoint from disk')
            from llama_index.core import StorageContext, load_index_from_storage
            storage_context = StorageContext.from_defaults(persist_dir=checkpoint_dir)
            self.vs_index = load_index_from_storage(storage_context, embed_model=self.embed_model)
            return self.vs_index
        else:
            return self.create_new_vs()

    def create_new_vs(self):
        return GptacVectorStoreIndex.default_vector_store(embed_model=self.embed_model)

    def purge(self):
        import shutil
        shutil.rmtree(self.checkpoint_dir, ignore_errors=True)
        self.vs_index = self.create_new_vs()    # create_new_vs takes no directory argument here


class LlamaIndexRagWorker(SaveLoad):
    def __init__(self, user_name, llm_kwargs, auto_load_checkpoint=True, checkpoint_dir=None) -> None:
        self.debug_mode = True
        self.embed_model = OpenAiEmbeddingModel(llm_kwargs)
        self.user_name = user_name
        self.checkpoint_dir = checkpoint_dir
        if auto_load_checkpoint:
            self.vs_index = self.load_from_checkpoint(checkpoint_dir)
        else:
            self.vs_index = self.create_new_vs()
        atexit.register(lambda: self.save_to_checkpoint(checkpoint_dir))

    def assign_embedding_model(self):
        pass

    def inspect_vector_store(self):
        # This function is for debugging
        self.vs_index.storage_context.index_store.to_dict()
        docstore = self.vs_index.storage_context.docstore.docs
        vector_store_preview = "\n".join([ f"{_id} | {tn.text}" for _id, tn in docstore.items() ])
        logger.info('\n++ --------inspect_vector_store begin--------')
        logger.info(vector_store_preview)
        logger.info('oo --------inspect_vector_store end--------')
        return vector_store_preview

    def add_documents_to_vector_store(self, document_list: List[Document]):
        """
        Adds a list of Document objects to the vector store after processing.
        """
        documents = document_list
        documents_nodes = run_transformations(
            documents,  # type: ignore
            self.vs_index._transformations,
            show_progress=True
        )
        self.vs_index.insert_nodes(documents_nodes)
        if self.debug_mode:
            self.inspect_vector_store()

    def add_text_to_vector_store(self, text: str):
        node = TextNode(text=text)
        documents_nodes = run_transformations(
            [node],
            self.vs_index._transformations,
            show_progress=True
        )
        self.vs_index.insert_nodes(documents_nodes)
        if self.debug_mode:
            self.inspect_vector_store()

    def remember_qa(self, question, answer):
        formatted_str = QUESTION_ANSWER_RECORD.format(question=question, answer=answer)
        self.add_text_to_vector_store(formatted_str)

    def retrieve_from_store_with_query(self, query):
        if self.debug_mode:
            self.inspect_vector_store()
        retriever = self.vs_index.as_retriever()
        return retriever.retrieve(query)

    def build_prompt(self, query, nodes):
        context_str = self.generate_node_array_preview(nodes)
        return DEFAULT_QUERY_GENERATION_PROMPT.format(context_str=context_str, query_str=query)

    def generate_node_array_preview(self, nodes):
        buf = "\n".join(([f"(No.{i+1} | score {n.score:.3f}): {n.text}" for i, n in enumerate(nodes)]))
        if self.debug_mode: logger.info(buf)
        return buf

    def purge_vector_store(self):
        """
        Purges the current vector store and creates a new one.
        """
        self.purge()
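# Editor's note: a minimal end-to-end sketch (hypothetical values, not part of
# the original file) wiring the worker into a retrieve-then-prompt loop.
# `llm_kwargs` must carry whatever OpenAiEmbeddingModel expects.
#
# worker = LlamaIndexRagWorker("default_user", llm_kwargs,
#                              checkpoint_dir="gpt_log/default_user/rag")
# worker.add_text_to_vector_store("GPT-Academic is a research assistant UI.")
# nodes = worker.retrieve_from_store_with_query("what is GPT-Academic?")
# prompt = worker.build_prompt(query="what is GPT-Academic?", nodes=nodes)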
108
crazy_functions/rag_fns/milvus_worker.py
Normal file
@@ -0,0 +1,108 @@
|
||||
import llama_index
import os
import atexit
from typing import List
from loguru import logger
from llama_index.core import Document
from llama_index.core.schema import TextNode
from request_llms.embed_models.openai_embed import OpenAiEmbeddingModel
from shared_utils.connect_void_terminal import get_chat_default_kwargs
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from crazy_functions.rag_fns.vector_store_index import GptacVectorStoreIndex
from llama_index.core.ingestion import run_transformations
from llama_index.core import PromptTemplate
from llama_index.core.response_synthesizers import TreeSummarize
from llama_index.core import StorageContext
from llama_index.vector_stores.milvus import MilvusVectorStore
from crazy_functions.rag_fns.llama_index_worker import LlamaIndexRagWorker

DEFAULT_QUERY_GENERATION_PROMPT = """\
Now, you have context information as below:
---------------------
{context_str}
---------------------
Answer the user request below (use the context information if necessary, otherwise you can ignore it):
---------------------
{query_str}
"""

QUESTION_ANSWER_RECORD = """\
{{
    "type": "This is a previous conversation with the user",
    "question": "{question}",
    "answer": "{answer}"
}}
"""


class MilvusSaveLoad:

    def does_checkpoint_exist(self, checkpoint_dir=None):
        import glob
        if checkpoint_dir is None:
            checkpoint_dir = self.checkpoint_dir
        if not os.path.exists(checkpoint_dir):
            return False
        if len(glob.glob(os.path.join(checkpoint_dir, "*.json"))) == 0:
            return False
        return True

    def save_to_checkpoint(self, checkpoint_dir=None):
        logger.info(f'saving vector store to: {checkpoint_dir}')
        # Persisting is intentionally disabled here; the Milvus-backed store
        # keeps its own .db file under checkpoint_dir (see create_new_vs).
        # if checkpoint_dir is None: checkpoint_dir = self.checkpoint_dir
        # self.vs_index.storage_context.persist(persist_dir=checkpoint_dir)

    def load_from_checkpoint(self, checkpoint_dir=None):
        if checkpoint_dir is None:
            checkpoint_dir = self.checkpoint_dir
        if self.does_checkpoint_exist(checkpoint_dir=checkpoint_dir):
            logger.info('loading checkpoint from disk')
            from llama_index.core import StorageContext, load_index_from_storage
            storage_context = StorageContext.from_defaults(persist_dir=checkpoint_dir)
            try:
                self.vs_index = load_index_from_storage(storage_context, embed_model=self.embed_model)
                return self.vs_index
            except Exception:
                return self.create_new_vs(checkpoint_dir)
        else:
            return self.create_new_vs(checkpoint_dir)

    def create_new_vs(self, checkpoint_dir, overwrite=False):
        vector_store = MilvusVectorStore(
            uri=os.path.join(checkpoint_dir, "milvus_demo.db"),
            dim=self.embed_model.embedding_dimension(),
            overwrite=overwrite
        )
        storage_context = StorageContext.from_defaults(vector_store=vector_store)
        index = GptacVectorStoreIndex.default_vector_store(storage_context=storage_context, embed_model=self.embed_model)
        return index

    def purge(self):
        self.vs_index = self.create_new_vs(self.checkpoint_dir, overwrite=True)


class MilvusRagWorker(MilvusSaveLoad, LlamaIndexRagWorker):

    def __init__(self, user_name, llm_kwargs, auto_load_checkpoint=True, checkpoint_dir=None) -> None:
        self.debug_mode = True
        self.embed_model = OpenAiEmbeddingModel(llm_kwargs)
        self.user_name = user_name
        self.checkpoint_dir = checkpoint_dir
        if auto_load_checkpoint:
            self.vs_index = self.load_from_checkpoint(checkpoint_dir)
        else:
            self.vs_index = self.create_new_vs(checkpoint_dir)
        atexit.register(lambda: self.save_to_checkpoint(checkpoint_dir))

    def inspect_vector_store(self):
        # This function is for debugging
        try:
            self.vs_index.storage_context.index_store.to_dict()
            docstore = self.vs_index.storage_context.docstore.docs
            if not docstore.items():
                raise ValueError("cannot inspect")
            vector_store_preview = "\n".join([f"{_id} | {tn.text}" for _id, tn in docstore.items()])
        except Exception:
            dummy_retrieve_res: List["NodeWithScore"] = self.vs_index.as_retriever().retrieve(' ')
            vector_store_preview = "\n".join(
                [f"{node.id_} | {node.text}" for node in dummy_retrieve_res]
            )
        logger.info('\n++ --------inspect_vector_store begin--------')
        logger.info(vector_store_preview)
        logger.info('oo --------inspect_vector_store end--------')
        return vector_store_preview
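A hedged construction sketch for MilvusRagWorker (not part of the diff; the exact shape of `llm_kwargs` is an assumption here, it only needs to satisfy OpenAiEmbeddingModel):

    # Hypothetical setup; checkpoint_dir is any writable directory.
    llm_kwargs = {"api_key": "...", "llm_model": "gpt-3.5-turbo"}  # assumed shape, for illustration only
    worker = MilvusRagWorker(user_name="alice", llm_kwargs=llm_kwargs,
                             auto_load_checkpoint=True, checkpoint_dir="./ckpt/alice")
    worker.inspect_vector_store()  # logs a preview of stored nodes (debug aid)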
22  crazy_functions/rag_fns/rag_file_support.py  Normal file
@@ -0,0 +1,22 @@
import os
from llama_index.core import SimpleDirectoryReader

supports_format = ['.csv', '.docx', '.epub', '.ipynb', '.mbox', '.md', '.pdf', '.txt', '.ppt',
                   '.pptm', '.pptx']


# Revised extract_text function: combines SimpleDirectoryReader with custom parsing logic
def extract_text(file_path):
    _, ext = os.path.splitext(file_path.lower())

    # Use SimpleDirectoryReader for the file formats it supports
    if ext in supports_format:
        try:
            reader = SimpleDirectoryReader(input_files=[file_path])
            documents = reader.load_data()
            if len(documents) > 0:
                return documents[0].text
        except Exception:
            # fall through and return None for unreadable files
            pass

    return None
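A short sketch of the intended call pattern (not part of the diff; the file path is hypothetical):

    text = extract_text("./docs/demo.md")  # hypothetical path
    if text is None:
        # unsupported extension, or SimpleDirectoryReader failed to parse it
        print("no text extracted")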
58  crazy_functions/rag_fns/vector_store_index.py  Normal file
@@ -0,0 +1,58 @@
from llama_index.core import VectorStoreIndex
from typing import Any, List, Optional

from llama_index.core.callbacks.base import CallbackManager
from llama_index.core.schema import TransformComponent
from llama_index.core.service_context import ServiceContext
from llama_index.core.settings import (
    Settings,
    callback_manager_from_settings_or_context,
    transformations_from_settings_or_context,
)
from llama_index.core.storage.storage_context import StorageContext


class GptacVectorStoreIndex(VectorStoreIndex):

    @classmethod
    def default_vector_store(
        cls,
        storage_context: Optional[StorageContext] = None,
        show_progress: bool = False,
        callback_manager: Optional[CallbackManager] = None,
        transformations: Optional[List[TransformComponent]] = None,
        # deprecated
        service_context: Optional[ServiceContext] = None,
        embed_model=None,
        **kwargs: Any,
    ):
        """Create an empty index on the given (or a default) storage context.

        Unlike ``VectorStoreIndex.from_documents``, no documents are ingested
        here; nodes are expected to be inserted after construction.
        """
        storage_context = storage_context or StorageContext.from_defaults()
        docstore = storage_context.docstore
        callback_manager = (
            callback_manager
            or callback_manager_from_settings_or_context(Settings, service_context)
        )
        transformations = transformations or transformations_from_settings_or_context(
            Settings, service_context
        )

        with callback_manager.as_trace("index_construction"):
            return cls(
                nodes=[],
                storage_context=storage_context,
                callback_manager=callback_manager,
                show_progress=show_progress,
                transformations=transformations,
                service_context=service_context,
                embed_model=embed_model,
                **kwargs,
            )

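A hedged sketch of what default_vector_store yields (not part of the diff): an empty index backed by the given storage context, with nodes inserted afterwards; `embed_model` is assumed to be any llama-index-compatible embedding model instance:

    from llama_index.core.schema import TextNode

    index = GptacVectorStoreIndex.default_vector_store(embed_model=embed_model)
    index.insert_nodes([TextNode(text="hello world")])  # ingestion happens after construction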
@@ -1,87 +0,0 @@
#include "libipc/buffer.h"
#include "libipc/utility/pimpl.h"

#include <cstring>

namespace ipc {

bool operator==(buffer const & b1, buffer const & b2) {
    return (b1.size() == b2.size()) && (std::memcmp(b1.data(), b2.data(), b1.size()) == 0);
}

bool operator!=(buffer const & b1, buffer const & b2) {
    return !(b1 == b2);
}

class buffer::buffer_ : public pimpl<buffer_> {
public:
    void*                p_;
    std::size_t          s_;
    void*                a_;
    buffer::destructor_t d_;

    buffer_(void* p, std::size_t s, buffer::destructor_t d, void* a)
        : p_(p), s_(s), a_(a), d_(d) {
    }

    ~buffer_() {
        if (d_ == nullptr) return;
        d_((a_ == nullptr) ? p_ : a_, s_);
    }
};

buffer::buffer()
    : buffer(nullptr, 0, nullptr, nullptr) {
}

buffer::buffer(void* p, std::size_t s, destructor_t d)
    : p_(p_->make(p, s, d, nullptr)) {
}

buffer::buffer(void* p, std::size_t s, destructor_t d, void* additional)
    : p_(p_->make(p, s, d, additional)) {
}

buffer::buffer(void* p, std::size_t s)
    : buffer(p, s, nullptr) {
}

buffer::buffer(char const & c)
    : buffer(const_cast<char*>(&c), 1) {
}

buffer::buffer(buffer&& rhs)
    : buffer() {
    swap(rhs);
}

buffer::~buffer() {
    p_->clear();
}

void buffer::swap(buffer& rhs) {
    std::swap(p_, rhs.p_);
}

buffer& buffer::operator=(buffer rhs) {
    swap(rhs);
    return *this;
}

bool buffer::empty() const noexcept {
    return (impl(p_)->p_ == nullptr) || (impl(p_)->s_ == 0);
}

void* buffer::data() noexcept {
    return impl(p_)->p_;
}

void const * buffer::data() const noexcept {
    return impl(p_)->p_;
}

std::size_t buffer::size() const noexcept {
    return impl(p_)->s_;
}

} // namespace ipc
@@ -1,701 +0,0 @@

#include <type_traits>
#include <cstring>
#include <algorithm>
#include <utility>          // std::pair, std::move, std::forward
#include <atomic>
#include <type_traits>      // aligned_storage_t
#include <string>
#include <vector>
#include <array>
#include <cassert>

#include "libipc/ipc.h"
#include "libipc/def.h"
#include "libipc/shm.h"
#include "libipc/pool_alloc.h"
#include "libipc/queue.h"
#include "libipc/policy.h"
#include "libipc/rw_lock.h"
#include "libipc/waiter.h"

#include "libipc/utility/log.h"
#include "libipc/utility/id_pool.h"
#include "libipc/utility/scope_guard.h"
#include "libipc/utility/utility.h"

#include "libipc/memory/resource.h"
#include "libipc/platform/detail.h"
#include "libipc/circ/elem_array.h"

namespace {

using msg_id_t = std::uint32_t;
using acc_t    = std::atomic<msg_id_t>;

template <std::size_t DataSize, std::size_t AlignSize>
struct msg_t;

template <std::size_t AlignSize>
struct msg_t<0, AlignSize> {
    msg_id_t     cc_id_;
    msg_id_t     id_;
    std::int32_t remain_;
    bool         storage_;
};

template <std::size_t DataSize, std::size_t AlignSize>
struct msg_t : msg_t<0, AlignSize> {
    std::aligned_storage_t<DataSize, AlignSize> data_ {};

    msg_t() = default;
    msg_t(msg_id_t cc_id, msg_id_t id, std::int32_t remain, void const * data, std::size_t size)
        : msg_t<0, AlignSize> {cc_id, id, remain, (data == nullptr) || (size == 0)} {
        if (this->storage_) {
            if (data != nullptr) {
                // copy storage-id
                *reinterpret_cast<ipc::storage_id_t*>(&data_) =
                    *static_cast<ipc::storage_id_t const *>(data);
            }
        }
        else std::memcpy(&data_, data, size);
    }
};

template <typename T>
ipc::buff_t make_cache(T& data, std::size_t size) {
    auto ptr = ipc::mem::alloc(size);
    std::memcpy(ptr, &data, (ipc::detail::min)(sizeof(data), size));
    return { ptr, size, ipc::mem::free };
}

struct cache_t {
    std::size_t fill_;
    ipc::buff_t buff_;

    cache_t(std::size_t f, ipc::buff_t && b)
        : fill_(f), buff_(std::move(b))
    {}

    void append(void const * data, std::size_t size) {
        if (fill_ >= buff_.size() || data == nullptr || size == 0) return;
        auto new_fill = (ipc::detail::min)(fill_ + size, buff_.size());
        std::memcpy(static_cast<ipc::byte_t*>(buff_.data()) + fill_, data, new_fill - fill_);
        fill_ = new_fill;
    }
};

auto cc_acc() {
    static ipc::shm::handle acc_h("__CA_CONN__", sizeof(acc_t));
    return static_cast<acc_t*>(acc_h.get());
}

IPC_CONSTEXPR_ std::size_t align_chunk_size(std::size_t size) noexcept {
    return (((size - 1) / ipc::large_msg_align) + 1) * ipc::large_msg_align;
}

IPC_CONSTEXPR_ std::size_t calc_chunk_size(std::size_t size) noexcept {
    return ipc::make_align(alignof(std::max_align_t), align_chunk_size(
           ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic<ipc::circ::cc_t>)) + size));
}

struct chunk_t {
    std::atomic<ipc::circ::cc_t> &conns() noexcept {
        return *reinterpret_cast<std::atomic<ipc::circ::cc_t> *>(this);
    }

    void *data() noexcept {
        return reinterpret_cast<ipc::byte_t *>(this)
             + ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic<ipc::circ::cc_t>));
    }
};

struct chunk_info_t {
    ipc::id_pool<> pool_;
    ipc::spin_lock lock_;

    IPC_CONSTEXPR_ static std::size_t chunks_mem_size(std::size_t chunk_size) noexcept {
        return ipc::id_pool<>::max_count * chunk_size;
    }

    ipc::byte_t *chunks_mem() noexcept {
        return reinterpret_cast<ipc::byte_t *>(this + 1);
    }

    chunk_t *at(std::size_t chunk_size, ipc::storage_id_t id) noexcept {
        if (id < 0) return nullptr;
        return reinterpret_cast<chunk_t *>(chunks_mem() + (chunk_size * id));
    }
};

auto& chunk_storages() {
    class chunk_handle_t {
        ipc::shm::handle handle_;

    public:
        chunk_info_t *get_info(std::size_t chunk_size) {
            if (!handle_.valid() &&
                !handle_.acquire( ("__CHUNK_INFO__" + ipc::to_string(chunk_size)).c_str(),
                                  sizeof(chunk_info_t) + chunk_info_t::chunks_mem_size(chunk_size) )) {
                ipc::error("[chunk_storages] chunk_shm.id_info_.acquire failed: chunk_size = %zd\n", chunk_size);
                return nullptr;
            }
            auto info = static_cast<chunk_info_t*>(handle_.get());
            if (info == nullptr) {
                ipc::error("[chunk_storages] chunk_shm.id_info_.get failed: chunk_size = %zd\n", chunk_size);
                return nullptr;
            }
            return info;
        }
    };
    static ipc::map<std::size_t, chunk_handle_t> chunk_hs;
    return chunk_hs;
}

chunk_info_t *chunk_storage_info(std::size_t chunk_size) {
    auto &storages = chunk_storages();
    std::decay_t<decltype(storages)>::iterator it;
    {
        static ipc::rw_lock lock;
        IPC_UNUSED_ std::shared_lock<ipc::rw_lock> guard {lock};
        if ((it = storages.find(chunk_size)) == storages.end()) {
            using chunk_handle_t = std::decay_t<decltype(storages)>::value_type::second_type;
            guard.unlock();
            IPC_UNUSED_ std::lock_guard<ipc::rw_lock> guard {lock};
            it = storages.emplace(chunk_size, chunk_handle_t{}).first;
        }
    }
    return it->second.get_info(chunk_size);
}

std::pair<ipc::storage_id_t, void*> acquire_storage(std::size_t size, ipc::circ::cc_t conns) {
    std::size_t chunk_size = calc_chunk_size(size);
    auto info = chunk_storage_info(chunk_size);
    if (info == nullptr) return {};

    info->lock_.lock();
    info->pool_.prepare();
    // get a unique id
    auto id = info->pool_.acquire();
    info->lock_.unlock();

    auto chunk = info->at(chunk_size, id);
    if (chunk == nullptr) return {};
    chunk->conns().store(conns, std::memory_order_relaxed);
    return { id, chunk->data() };
}

void *find_storage(ipc::storage_id_t id, std::size_t size) {
    if (id < 0) {
        ipc::error("[find_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size);
        return nullptr;
    }
    std::size_t chunk_size = calc_chunk_size(size);
    auto info = chunk_storage_info(chunk_size);
    if (info == nullptr) return nullptr;
    return info->at(chunk_size, id)->data();
}

void release_storage(ipc::storage_id_t id, std::size_t size) {
    if (id < 0) {
        ipc::error("[release_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size);
        return;
    }
    std::size_t chunk_size = calc_chunk_size(size);
    auto info = chunk_storage_info(chunk_size);
    if (info == nullptr) return;
    info->lock_.lock();
    info->pool_.release(id);
    info->lock_.unlock();
}

template <ipc::relat Rp, ipc::relat Rc>
bool sub_rc(ipc::wr<Rp, Rc, ipc::trans::unicast>,
            std::atomic<ipc::circ::cc_t> &/*conns*/, ipc::circ::cc_t /*curr_conns*/, ipc::circ::cc_t /*conn_id*/) noexcept {
    return true;
}

template <ipc::relat Rp, ipc::relat Rc>
bool sub_rc(ipc::wr<Rp, Rc, ipc::trans::broadcast>,
            std::atomic<ipc::circ::cc_t> &conns, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) noexcept {
    auto last_conns = curr_conns & ~conn_id;
    for (unsigned k = 0;;) {
        auto chunk_conns = conns.load(std::memory_order_acquire);
        if (conns.compare_exchange_weak(chunk_conns, chunk_conns & last_conns, std::memory_order_release)) {
            return (chunk_conns & last_conns) == 0;
        }
        ipc::yield(k);
    }
}

template <typename Flag>
void recycle_storage(ipc::storage_id_t id, std::size_t size, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) {
    if (id < 0) {
        ipc::error("[recycle_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size);
        return;
    }
    std::size_t chunk_size = calc_chunk_size(size);
    auto info = chunk_storage_info(chunk_size);
    if (info == nullptr) return;

    auto chunk = info->at(chunk_size, id);
    if (chunk == nullptr) return;

    if (!sub_rc(Flag{}, chunk->conns(), curr_conns, conn_id)) {
        return;
    }
    info->lock_.lock();
    info->pool_.release(id);
    info->lock_.unlock();
}

template <typename MsgT>
bool clear_message(void* p) {
    auto msg = static_cast<MsgT*>(p);
    if (msg->storage_) {
        std::int32_t r_size = static_cast<std::int32_t>(ipc::data_length) + msg->remain_;
        if (r_size <= 0) {
            ipc::error("[clear_message] invalid msg size: %d\n", (int)r_size);
            return true;
        }
        release_storage(
            *reinterpret_cast<ipc::storage_id_t*>(&msg->data_),
            static_cast<std::size_t>(r_size));
    }
    return true;
}

struct conn_info_head {

    ipc::string name_;
    msg_id_t    cc_id_; // connection-info id
    ipc::detail::waiter cc_waiter_, wt_waiter_, rd_waiter_;
    ipc::shm::handle acc_h_;

    conn_info_head(char const * name)
        : name_     {name}
        , cc_id_    {(cc_acc() == nullptr) ? 0 : cc_acc()->fetch_add(1, std::memory_order_relaxed)}
        , cc_waiter_{("__CC_CONN__" + name_).c_str()}
        , wt_waiter_{("__WT_CONN__" + name_).c_str()}
        , rd_waiter_{("__RD_CONN__" + name_).c_str()}
        , acc_h_    {("__AC_CONN__" + name_).c_str(), sizeof(acc_t)} {
    }

    void quit_waiting() {
        cc_waiter_.quit_waiting();
        wt_waiter_.quit_waiting();
        rd_waiter_.quit_waiting();
    }

    auto acc() {
        return static_cast<acc_t*>(acc_h_.get());
    }

    auto& recv_cache() {
        thread_local ipc::unordered_map<msg_id_t, cache_t> tls;
        return tls;
    }
};

template <typename W, typename F>
bool wait_for(W& waiter, F&& pred, std::uint64_t tm) {
    if (tm == 0) return !pred();
    for (unsigned k = 0; pred();) {
        bool ret = true;
        ipc::sleep(k, [&k, &ret, &waiter, &pred, tm] {
            ret = waiter.wait_if(std::forward<F>(pred), tm);
            k   = 0;
        });
        if (!ret) return false; // timeout or fail
        if (k == 0) break;      // k has been reset
    }
    return true;
}

template <typename Policy,
          std::size_t DataSize  = ipc::data_length,
          std::size_t AlignSize = (ipc::detail::min)(DataSize, alignof(std::max_align_t))>
struct queue_generator {

    using queue_t = ipc::queue<msg_t<DataSize, AlignSize>, Policy>;

    struct conn_info_t : conn_info_head {
        queue_t que_;

        conn_info_t(char const * name)
            : conn_info_head{name}
            , que_{("__QU_CONN__" +
                    ipc::to_string(DataSize) + "__" +
                    ipc::to_string(AlignSize) + "__" + name).c_str()} {
        }

        void disconnect_receiver() {
            bool dis = que_.disconnect();
            this->quit_waiting();
            if (dis) {
                this->recv_cache().clear();
            }
        }
    };
};

template <typename Policy>
struct detail_impl {

    using policy_t    = Policy;
    using flag_t      = typename policy_t::flag_t;
    using queue_t     = typename queue_generator<policy_t>::queue_t;
    using conn_info_t = typename queue_generator<policy_t>::conn_info_t;

    constexpr static conn_info_t* info_of(ipc::handle_t h) noexcept {
        return static_cast<conn_info_t*>(h);
    }

    constexpr static queue_t* queue_of(ipc::handle_t h) noexcept {
        return (info_of(h) == nullptr) ? nullptr : &(info_of(h)->que_);
    }

    /* API implementations */

    static void disconnect(ipc::handle_t h) {
        auto que = queue_of(h);
        if (que == nullptr) {
            return;
        }
        que->shut_sending();
        assert(info_of(h) != nullptr);
        info_of(h)->disconnect_receiver();
    }

    static bool reconnect(ipc::handle_t * ph, bool start_to_recv) {
        assert(ph != nullptr);
        assert(*ph != nullptr);
        auto que = queue_of(*ph);
        if (que == nullptr) {
            return false;
        }
        if (start_to_recv) {
            que->shut_sending();
            if (que->connect()) { // wouldn't connect twice
                info_of(*ph)->cc_waiter_.broadcast();
                return true;
            }
            return false;
        }
        // start_to_recv == false
        if (que->connected()) {
            info_of(*ph)->disconnect_receiver();
        }
        return que->ready_sending();
    }

    static bool connect(ipc::handle_t * ph, char const * name, bool start_to_recv) {
        assert(ph != nullptr);
        if (*ph == nullptr) {
            *ph = ipc::mem::alloc<conn_info_t>(name);
        }
        return reconnect(ph, start_to_recv);
    }

    static void destroy(ipc::handle_t h) {
        disconnect(h);
        ipc::mem::free(info_of(h));
    }

    static std::size_t recv_count(ipc::handle_t h) noexcept {
        auto que = queue_of(h);
        if (que == nullptr) {
            return ipc::invalid_value;
        }
        return que->conn_count();
    }

    static bool wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) {
        auto que = queue_of(h);
        if (que == nullptr) {
            return false;
        }
        return wait_for(info_of(h)->cc_waiter_, [que, r_count] {
            return que->conn_count() < r_count;
        }, tm);
    }

    template <typename F>
    static bool send(F&& gen_push, ipc::handle_t h, void const * data, std::size_t size) {
        if (data == nullptr || size == 0) {
            ipc::error("fail: send(%p, %zd)\n", data, size);
            return false;
        }
        auto que = queue_of(h);
        if (que == nullptr) {
            ipc::error("fail: send, queue_of(h) == nullptr\n");
            return false;
        }
        if (que->elems() == nullptr) {
            ipc::error("fail: send, queue_of(h)->elems() == nullptr\n");
            return false;
        }
        if (!que->ready_sending()) {
            ipc::error("fail: send, que->ready_sending() == false\n");
            return false;
        }
        ipc::circ::cc_t conns = que->elems()->connections(std::memory_order_relaxed);
        if (conns == 0) {
            ipc::error("fail: send, there is no receiver on this connection.\n");
            return false;
        }
        // calc a new message id
        auto acc = info_of(h)->acc();
        if (acc == nullptr) {
            ipc::error("fail: send, info_of(h)->acc() == nullptr\n");
            return false;
        }
        auto msg_id   = acc->fetch_add(1, std::memory_order_relaxed);
        auto try_push = std::forward<F>(gen_push)(info_of(h), que, msg_id);
        if (size > ipc::large_msg_limit) {
            auto   dat = acquire_storage(size, conns);
            void * buf = dat.second;
            if (buf != nullptr) {
                std::memcpy(buf, data, size);
                return try_push(static_cast<std::int32_t>(size) -
                                static_cast<std::int32_t>(ipc::data_length), &(dat.first), 0);
            }
            // try using message fragment
            //ipc::log("fail: shm::handle for big message. msg_id: %zd, size: %zd\n", msg_id, size);
        }
        // push message fragment
        std::int32_t offset = 0;
        for (std::int32_t i = 0; i < static_cast<std::int32_t>(size / ipc::data_length); ++i, offset += ipc::data_length) {
            if (!try_push(static_cast<std::int32_t>(size) - offset - static_cast<std::int32_t>(ipc::data_length),
                          static_cast<ipc::byte_t const *>(data) + offset, ipc::data_length)) {
                return false;
            }
        }
        // if remain > 0, this is the last message fragment
        std::int32_t remain = static_cast<std::int32_t>(size) - offset;
        if (remain > 0) {
            if (!try_push(remain - static_cast<std::int32_t>(ipc::data_length),
                          static_cast<ipc::byte_t const *>(data) + offset,
                          static_cast<std::size_t>(remain))) {
                return false;
            }
        }
        return true;
    }

    static bool send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
        return send([tm](auto info, auto que, auto msg_id) {
            return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) {
                if (!wait_for(info->wt_waiter_, [&] {
                        return !que->push(
                            [](void*) { return true; },
                            info->cc_id_, msg_id, remain, data, size);
                    }, tm)) {
                    ipc::log("force_push: msg_id = %zd, remain = %d, size = %zd\n", msg_id, remain, size);
                    if (!que->force_push(
                            clear_message<typename queue_t::value_t>,
                            info->cc_id_, msg_id, remain, data, size)) {
                        return false;
                    }
                }
                info->rd_waiter_.broadcast();
                return true;
            };
        }, h, data, size);
    }

    static bool try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
        return send([tm](auto info, auto que, auto msg_id) {
            return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) {
                if (!wait_for(info->wt_waiter_, [&] {
                        return !que->push(
                            [](void*) { return true; },
                            info->cc_id_, msg_id, remain, data, size);
                    }, tm)) {
                    return false;
                }
                info->rd_waiter_.broadcast();
                return true;
            };
        }, h, data, size);
    }

    static ipc::buff_t recv(ipc::handle_t h, std::uint64_t tm) {
        auto que = queue_of(h);
        if (que == nullptr) {
            ipc::error("fail: recv, queue_of(h) == nullptr\n");
            return {};
        }
        if (!que->connected()) {
            // hasn't connected yet, just return.
            return {};
        }
        auto& rc = info_of(h)->recv_cache();
        for (;;) {
            // pop a new message
            typename queue_t::value_t msg;
            if (!wait_for(info_of(h)->rd_waiter_, [que, &msg] {
                    return !que->pop(msg);
                }, tm)) {
                // pop failed, just return.
                return {};
            }
            info_of(h)->wt_waiter_.broadcast();
            if ((info_of(h)->acc() != nullptr) && (msg.cc_id_ == info_of(h)->cc_id_)) {
                continue; // ignore message to self
            }
            // msg.remain_ may be negative, and abs(msg.remain_) < data_length
            std::int32_t r_size = static_cast<std::int32_t>(ipc::data_length) + msg.remain_;
            if (r_size <= 0) {
                ipc::error("fail: recv, r_size = %d\n", (int)r_size);
                return {};
            }
            std::size_t msg_size = static_cast<std::size_t>(r_size);
            // large message
            if (msg.storage_) {
                ipc::storage_id_t buf_id = *reinterpret_cast<ipc::storage_id_t*>(&msg.data_);
                void* buf = find_storage(buf_id, msg_size);
                if (buf != nullptr) {
                    struct recycle_t {
                        ipc::storage_id_t storage_id;
                        ipc::circ::cc_t   curr_conns;
                        ipc::circ::cc_t   conn_id;
                    } *r_info = ipc::mem::alloc<recycle_t>(recycle_t{
                        buf_id, que->elems()->connections(std::memory_order_relaxed), que->connected_id()
                    });
                    if (r_info == nullptr) {
                        ipc::log("fail: ipc::mem::alloc<recycle_t>.\n");
                        return ipc::buff_t{buf, msg_size}; // no recycle
                    } else {
                        return ipc::buff_t{buf, msg_size, [](void* p_info, std::size_t size) {
                            auto r_info = static_cast<recycle_t *>(p_info);
                            IPC_UNUSED_ auto finally = ipc::guard([r_info] {
                                ipc::mem::free(r_info);
                            });
                            recycle_storage<flag_t>(r_info->storage_id, size, r_info->curr_conns, r_info->conn_id);
                        }, r_info};
                    }
                } else {
                    ipc::log("fail: shm::handle for large message. msg_id: %zd, buf_id: %zd, size: %zd\n", msg.id_, buf_id, msg_size);
                    continue;
                }
            }
            // find cache with msg.id_
            auto cac_it = rc.find(msg.id_);
            if (cac_it == rc.end()) {
                if (msg_size <= ipc::data_length) {
                    return make_cache(msg.data_, msg_size);
                }
                // gc
                if (rc.size() > 1024) {
                    std::vector<msg_id_t> need_del;
                    for (auto const & pair : rc) {
                        auto cmp = std::minmax(msg.id_, pair.first);
                        if (cmp.second - cmp.first > 8192) {
                            need_del.push_back(pair.first);
                        }
                    }
                    for (auto id : need_del) rc.erase(id);
                }
                // cache the first message fragment
                rc.emplace(msg.id_, cache_t { ipc::data_length, make_cache(msg.data_, msg_size) });
            }
            // this message id has been cached before
            else {
                auto& cac = cac_it->second;
                // this is the last message fragment
                if (msg.remain_ <= 0) {
                    cac.append(&(msg.data_), msg_size);
                    // finish this message, erase it from cache
                    auto buff = std::move(cac.buff_);
                    rc.erase(cac_it);
                    return buff;
                }
                // more data remains after this fragment
                cac.append(&(msg.data_), ipc::data_length);
            }
        }
    }

    static ipc::buff_t try_recv(ipc::handle_t h) {
        return recv(h, 0);
    }

}; // detail_impl<Policy>

template <typename Flag>
using policy_t = ipc::policy::choose<ipc::circ::elem_array, Flag>;

} // internal-linkage

namespace ipc {

template <typename Flag>
ipc::handle_t chan_impl<Flag>::inited() {
    ipc::detail::waiter::init();
    return nullptr;
}

template <typename Flag>
bool chan_impl<Flag>::connect(ipc::handle_t * ph, char const * name, unsigned mode) {
    return detail_impl<policy_t<Flag>>::connect(ph, name, mode & receiver);
}

template <typename Flag>
bool chan_impl<Flag>::reconnect(ipc::handle_t * ph, unsigned mode) {
    return detail_impl<policy_t<Flag>>::reconnect(ph, mode & receiver);
}

template <typename Flag>
void chan_impl<Flag>::disconnect(ipc::handle_t h) {
    detail_impl<policy_t<Flag>>::disconnect(h);
}

template <typename Flag>
void chan_impl<Flag>::destroy(ipc::handle_t h) {
    detail_impl<policy_t<Flag>>::destroy(h);
}

template <typename Flag>
char const * chan_impl<Flag>::name(ipc::handle_t h) {
    auto info = detail_impl<policy_t<Flag>>::info_of(h);
    return (info == nullptr) ? nullptr : info->name_.c_str();
}

template <typename Flag>
std::size_t chan_impl<Flag>::recv_count(ipc::handle_t h) {
    return detail_impl<policy_t<Flag>>::recv_count(h);
}

template <typename Flag>
bool chan_impl<Flag>::wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) {
    return detail_impl<policy_t<Flag>>::wait_for_recv(h, r_count, tm);
}

template <typename Flag>
bool chan_impl<Flag>::send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
    return detail_impl<policy_t<Flag>>::send(h, data, size, tm);
}

template <typename Flag>
buff_t chan_impl<Flag>::recv(ipc::handle_t h, std::uint64_t tm) {
    return detail_impl<policy_t<Flag>>::recv(h, tm);
}

template <typename Flag>
bool chan_impl<Flag>::try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
    return detail_impl<policy_t<Flag>>::try_send(h, data, size, tm);
}

template <typename Flag>
buff_t chan_impl<Flag>::try_recv(ipc::handle_t h) {
    return detail_impl<policy_t<Flag>>::try_recv(h);
}

template struct chan_impl<ipc::wr<relat::single, relat::single, trans::unicast  >>;
// template struct chan_impl<ipc::wr<relat::single, relat::multi , trans::unicast  >>; // TBD
// template struct chan_impl<ipc::wr<relat::multi , relat::multi , trans::unicast  >>; // TBD
template struct chan_impl<ipc::wr<relat::single, relat::multi , trans::broadcast>>;
template struct chan_impl<ipc::wr<relat::multi , relat::multi , trans::broadcast>>;

} // namespace ipc
@@ -1,25 +0,0 @@
#pragma once

#include <type_traits>

#include "libipc/def.h"
#include "libipc/prod_cons.h"

#include "libipc/circ/elem_array.h"

namespace ipc {
namespace policy {

template <template <typename, std::size_t...> class Elems, typename Flag>
struct choose;

template <typename Flag>
struct choose<circ::elem_array, Flag> {
    using flag_t = Flag;

    template <std::size_t DataSize, std::size_t AlignSize>
    using elems_t = circ::elem_array<ipc::prod_cons_impl<flag_t>, DataSize, AlignSize>;
};

} // namespace policy
} // namespace ipc
@@ -1,17 +0,0 @@
#include "libipc/pool_alloc.h"

#include "libipc/memory/resource.h"

namespace ipc {
namespace mem {

void* pool_alloc::alloc(std::size_t size) {
    return async_pool_alloc::alloc(size);
}

void pool_alloc::free(void* p, std::size_t size) {
    async_pool_alloc::free(p, size);
}

} // namespace mem
} // namespace ipc
@@ -1,433 +0,0 @@
#pragma once

#include <atomic>
#include <utility>
#include <cstring>
#include <type_traits>
#include <cstdint>

#include "libipc/def.h"

#include "libipc/platform/detail.h"
#include "libipc/circ/elem_def.h"
#include "libipc/utility/log.h"
#include "libipc/utility/utility.h"

namespace ipc {

////////////////////////////////////////////////////////////////
/// producer-consumer implementation
////////////////////////////////////////////////////////////////

template <typename Flag>
struct prod_cons_impl;

template <>
struct prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {

    template <std::size_t DataSize, std::size_t AlignSize>
    struct elem_t {
        std::aligned_storage_t<DataSize, AlignSize> data_ {};
    };

    alignas(cache_line_size) std::atomic<circ::u2_t> rd_; // read index
    alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index

    constexpr circ::u2_t cursor() const noexcept {
        return 0;
    }

    template <typename W, typename F, typename E>
    bool push(W* /*wrapper*/, F&& f, E* elems) {
        auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed));
        if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) {
            return false; // full
        }
        std::forward<F>(f)(&(elems[cur_wt].data_));
        wt_.fetch_add(1, std::memory_order_release);
        return true;
    }

    /**
     * In single-single-unicast, 'force_push' means 'no reader' or 'the only reader is dead',
     * so we just disconnect all receiver connections and return false.
     */
    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&&, E*) {
        wrapper->elems()->disconnect_receiver(~static_cast<circ::cc_t>(0u));
        return false;
    }

    template <typename W, typename F, typename R, typename E>
    bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
        auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed));
        if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) {
            return false; // empty
        }
        std::forward<F>(f)(&(elems[cur_rd].data_));
        std::forward<R>(out)(true);
        rd_.fetch_add(1, std::memory_order_release);
        return true;
    }
};

template <>
struct prod_cons_impl<wr<relat::single, relat::multi , trans::unicast>>
     : prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {

    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&&, E*) {
        wrapper->elems()->disconnect_receiver(1);
        return false;
    }

    template <typename W, typename F, typename R,
              template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
    bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
        byte_t buff[DS];
        for (unsigned k = 0;;) {
            auto cur_rd = rd_.load(std::memory_order_relaxed);
            if (circ::index_of(cur_rd) ==
                circ::index_of(wt_.load(std::memory_order_acquire))) {
                return false; // empty
            }
            std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
            if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
                std::forward<F>(f)(buff);
                std::forward<R>(out)(true);
                return true;
            }
            ipc::yield(k);
        }
    }
};

template <>
struct prod_cons_impl<wr<relat::multi , relat::multi, trans::unicast>>
     : prod_cons_impl<wr<relat::single, relat::multi, trans::unicast>> {

    using flag_t = std::uint64_t;

    template <std::size_t DataSize, std::size_t AlignSize>
    struct elem_t {
        std::aligned_storage_t<DataSize, AlignSize> data_ {};
        std::atomic<flag_t> f_ct_ { 0 }; // commit flag
    };

    alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index

    template <typename W, typename F, typename E>
    bool push(W* /*wrapper*/, F&& f, E* elems) {
        circ::u2_t cur_ct, nxt_ct;
        for (unsigned k = 0;;) {
            cur_ct = ct_.load(std::memory_order_relaxed);
            if (circ::index_of(nxt_ct = cur_ct + 1) ==
                circ::index_of(rd_.load(std::memory_order_acquire))) {
                return false; // full
            }
            if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) {
                break;
            }
            ipc::yield(k);
        }
        auto* el = elems + circ::index_of(cur_ct);
        std::forward<F>(f)(&(el->data_));
        // set flag & try update wt
        el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
        while (1) {
            auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
            if (cur_ct != wt_.load(std::memory_order_relaxed)) {
                return true;
            }
            if ((~cac_ct) != cur_ct) {
                return true;
            }
            if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) {
                return true;
            }
            wt_.store(nxt_ct, std::memory_order_release);
            cur_ct = nxt_ct;
            nxt_ct = cur_ct + 1;
            el = elems + circ::index_of(cur_ct);
        }
        return true;
    }

    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&&, E*) {
        wrapper->elems()->disconnect_receiver(1);
        return false;
    }

    template <typename W, typename F, typename R,
              template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
    bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
        byte_t buff[DS];
        for (unsigned k = 0;;) {
            auto cur_rd = rd_.load(std::memory_order_relaxed);
            auto cur_wt = wt_.load(std::memory_order_acquire);
            auto id_rd  = circ::index_of(cur_rd);
            auto id_wt  = circ::index_of(cur_wt);
            if (id_rd == id_wt) {
                auto* el = elems + id_wt;
                auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
                if ((~cac_ct) != cur_wt) {
                    return false; // empty
                }
                if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) {
                    wt_.store(cur_wt + 1, std::memory_order_release);
                }
                k = 0;
            }
            else {
                std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
                if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
                    std::forward<F>(f)(buff);
                    std::forward<R>(out)(true);
                    return true;
                }
                ipc::yield(k);
            }
        }
    }
};

template <>
struct prod_cons_impl<wr<relat::single, relat::multi, trans::broadcast>> {

    using rc_t = std::uint64_t;

    enum : rc_t {
        ep_mask = 0x00000000ffffffffull,
        ep_incr = 0x0000000100000000ull
    };

    template <std::size_t DataSize, std::size_t AlignSize>
    struct elem_t {
        std::aligned_storage_t<DataSize, AlignSize> data_ {};
        std::atomic<rc_t> rc_ { 0 }; // read-counter
    };

    alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index
    alignas(cache_line_size) rc_t epoch_ { 0 };           // only one writer

    circ::u2_t cursor() const noexcept {
        return wt_.load(std::memory_order_acquire);
    }

    template <typename W, typename F, typename E>
    bool push(W* wrapper, F&& f, E* elems) {
        E* el;
        for (unsigned k = 0;;) {
            circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
            if (cc == 0) return false; // no reader
            el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
            // check all consumers have finished reading this element
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            circ::cc_t rem_cc = cur_rc & ep_mask;
            if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) {
                return false; // has not finished yet
            }
            // consider rem_cc to be 0 here
            if (el->rc_.compare_exchange_weak(
                        cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
                break;
            }
            ipc::yield(k);
        }
        std::forward<F>(f)(&(el->data_));
        wt_.fetch_add(1, std::memory_order_release);
        return true;
    }

    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&& f, E* elems) {
        E* el;
        epoch_ += ep_incr;
        for (unsigned k = 0;;) {
            circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
            if (cc == 0) return false; // no reader
            el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
            // check all consumers have finished reading this element
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            circ::cc_t rem_cc = cur_rc & ep_mask;
            if (cc & rem_cc) {
                ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
                cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
                if (cc == 0) return false; // no reader
            }
            // just compare & exchange
            if (el->rc_.compare_exchange_weak(
                        cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
                break;
            }
            ipc::yield(k);
        }
        std::forward<F>(f)(&(el->data_));
        wt_.fetch_add(1, std::memory_order_release);
        return true;
    }

    template <typename W, typename F, typename R, typename E>
    bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) {
        if (cur == cursor()) return false; // acquire
        auto* el = elems + circ::index_of(cur++);
        std::forward<F>(f)(&(el->data_));
        for (unsigned k = 0;;) {
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            if ((cur_rc & ep_mask) == 0) {
                std::forward<R>(out)(true);
                return true;
            }
            auto nxt_rc = cur_rc & ~static_cast<rc_t>(wrapper->connected_id());
            if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
                std::forward<R>(out)((nxt_rc & ep_mask) == 0);
                return true;
            }
            ipc::yield(k);
        }
    }
};

template <>
struct prod_cons_impl<wr<relat::multi, relat::multi, trans::broadcast>> {

    using rc_t   = std::uint64_t;
    using flag_t = std::uint64_t;

    enum : rc_t {
        rc_mask = 0x00000000ffffffffull,
        ep_mask = 0x00ffffffffffffffull,
        ep_incr = 0x0100000000000000ull,
        ic_mask = 0xff000000ffffffffull,
        ic_incr = 0x0000000100000000ull
    };

    template <std::size_t DataSize, std::size_t AlignSize>
    struct elem_t {
        std::aligned_storage_t<DataSize, AlignSize> data_ {};
        std::atomic<rc_t  > rc_   { 0 }; // read-counter
        std::atomic<flag_t> f_ct_ { 0 }; // commit flag
    };

    alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index
    alignas(cache_line_size) std::atomic<rc_t> epoch_ { 0 };

    circ::u2_t cursor() const noexcept {
        return ct_.load(std::memory_order_acquire);
    }

    constexpr static rc_t inc_rc(rc_t rc) noexcept {
        return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask);
    }

    constexpr static rc_t inc_mask(rc_t rc) noexcept {
        return inc_rc(rc) & ~rc_mask;
    }

    template <typename W, typename F, typename E>
    bool push(W* wrapper, F&& f, E* elems) {
        E* el;
        circ::u2_t cur_ct;
        rc_t epoch = epoch_.load(std::memory_order_acquire);
        for (unsigned k = 0;;) {
            circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
            if (cc == 0) return false; // no reader
            el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
            // check all consumers have finished reading this element
            auto cur_rc = el->rc_.load(std::memory_order_relaxed);
            circ::cc_t rem_cc = cur_rc & rc_mask;
            if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) {
                return false; // has not finished yet
            }
            else if (!rem_cc) {
                auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
                if ((cur_fl != cur_ct) && cur_fl) {
                    return false; // full
                }
            }
            // consider rem_cc to be 0 here
            if (el->rc_.compare_exchange_weak(
                        cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed) &&
                epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) {
                break;
            }
            ipc::yield(k);
        }
        // only one thread/process would touch here at one time
        ct_.store(cur_ct + 1, std::memory_order_release);
        std::forward<F>(f)(&(el->data_));
        // set flag & try update wt
        el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
        return true;
    }

    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&& f, E* elems) {
        E* el;
        circ::u2_t cur_ct;
        rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
        for (unsigned k = 0;;) {
            circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
            if (cc == 0) return false; // no reader
            el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
            // check all consumers have finished reading this element
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            circ::cc_t rem_cc = cur_rc & rc_mask;
            if (cc & rem_cc) {
                ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
                cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
                if (cc == 0) return false; // no reader
            }
            // just compare & exchange
            if (el->rc_.compare_exchange_weak(
                        cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed)) {
                if (epoch == epoch_.load(std::memory_order_acquire)) {
                    break;
                }
                else if (push(wrapper, std::forward<F>(f), elems)) {
                    return true;
                }
                epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
            }
            ipc::yield(k);
        }
        // only one thread/process would touch here at one time
        ct_.store(cur_ct + 1, std::memory_order_release);
        std::forward<F>(f)(&(el->data_));
        // set flag & try update wt
        el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
        return true;
    }

    template <typename W, typename F, typename R, typename E, std::size_t N>
    bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) {
        auto* el = elems + circ::index_of(cur);
        auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
        if (cur_fl != ~static_cast<flag_t>(cur)) {
            return false; // empty
        }
        ++cur;
        std::forward<F>(f)(&(el->data_));
        for (unsigned k = 0;;) {
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            if ((cur_rc & rc_mask) == 0) {
                std::forward<R>(out)(true);
                el->f_ct_.store(cur + N - 1, std::memory_order_release);
                return true;
            }
            auto nxt_rc = inc_rc(cur_rc) & ~static_cast<rc_t>(wrapper->connected_id());
            bool last_one = false;
            if ((last_one = (nxt_rc & rc_mask) == 0)) {
                el->f_ct_.store(cur + N - 1, std::memory_order_release);
            }
            if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
                std::forward<R>(out)(last_one);
                return true;
            }
            ipc::yield(k);
        }
    }
};

} // namespace ipc
@@ -1,216 +0,0 @@
#pragma once

#include <type_traits>
#include <new>
#include <utility>      // [[since C++14]]: std::exchange
#include <algorithm>
#include <atomic>
#include <tuple>
#include <thread>
#include <chrono>
#include <string>
#include <cassert>      // assert

#include "libipc/def.h"
#include "libipc/shm.h"
#include "libipc/rw_lock.h"

#include "libipc/utility/log.h"
#include "libipc/platform/detail.h"
#include "libipc/circ/elem_def.h"

namespace ipc {
namespace detail {

class queue_conn {
protected:
    circ::cc_t connected_ = 0;
    shm::handle elems_h_;

    template <typename Elems>
    Elems* open(char const * name) {
        if (name == nullptr || name[0] == '\0') {
            ipc::error("fail open waiter: name is empty!\n");
            return nullptr;
        }
        if (!elems_h_.acquire(name, sizeof(Elems))) {
            return nullptr;
        }
        auto elems = static_cast<Elems*>(elems_h_.get());
        if (elems == nullptr) {
            ipc::error("fail acquire elems: %s\n", name);
            return nullptr;
        }
        elems->init();
        return elems;
    }

    void close() {
        elems_h_.release();
    }

public:
    queue_conn() = default;
    queue_conn(const queue_conn&) = delete;
    queue_conn& operator=(const queue_conn&) = delete;

    bool connected() const noexcept {
        return connected_ != 0;
    }

    circ::cc_t connected_id() const noexcept {
        return connected_;
    }

    template <typename Elems>
    auto connect(Elems* elems) noexcept
        /*needs 'optional' here*/
        -> std::tuple<bool, bool, decltype(std::declval<Elems>().cursor())> {
        if (elems == nullptr) return {};
        // if it's already connected, just return
        if (connected()) return {connected(), false, 0};
        connected_ = elems->connect_receiver();
        return {connected(), true, elems->cursor()};
    }

    template <typename Elems>
    bool disconnect(Elems* elems) noexcept {
        if (elems == nullptr) return false;
        // if it's already disconnected, just return false
        if (!connected()) return false;
        elems->disconnect_receiver(std::exchange(connected_, 0));
        return true;
    }
};

template <typename Elems>
class queue_base : public queue_conn {
    using base_t = queue_conn;

public:
    using elems_t  = Elems;
    using policy_t = typename elems_t::policy_t;

protected:
    elems_t * elems_ = nullptr;
    decltype(std::declval<elems_t>().cursor()) cursor_ = 0;
    bool sender_flag_ = false;

public:
    using base_t::base_t;

    queue_base() = default;

    explicit queue_base(char const * name)
        : queue_base{} {
        elems_ = open<elems_t>(name);
    }

    explicit queue_base(elems_t * elems) noexcept
        : queue_base{} {
        assert(elems != nullptr);
        elems_ = elems;
    }

    /* not virtual */ ~queue_base() {
        base_t::close();
    }

    elems_t *       elems()       noexcept { return elems_; }
    elems_t const * elems() const noexcept { return elems_; }

    bool ready_sending() noexcept {
        if (elems_ == nullptr) return false;
        return sender_flag_ || (sender_flag_ = elems_->connect_sender());
    }

    void shut_sending() noexcept {
        if (elems_ == nullptr) return;
        if (!sender_flag_) return;
        elems_->disconnect_sender();
    }

    bool connect() noexcept {
        auto tp = base_t::connect(elems_);
        if (std::get<0>(tp) && std::get<1>(tp)) {
            cursor_ = std::get<2>(tp);
            return true;
        }
        return std::get<0>(tp);
    }

    bool disconnect() noexcept {
        return base_t::disconnect(elems_);
    }

    std::size_t conn_count() const noexcept {
        return (elems_ == nullptr) ? static_cast<std::size_t>(invalid_value) : elems_->conn_count();
    }

    bool valid() const noexcept {
        return elems_ != nullptr;
    }

    bool empty() const noexcept {
        return !valid() || (cursor_ == elems_->cursor());
    }

    template <typename T, typename F, typename... P>
    bool push(F&& prep, P&&... params) {
        if (elems_ == nullptr) return false;
        return elems_->push(this, [&](void* p) {
            if (prep(p)) ::new (p) T(std::forward<P>(params)...);
        });
    }

    template <typename T, typename F, typename... P>
    bool force_push(F&& prep, P&&... params) {
        if (elems_ == nullptr) return false;
        return elems_->force_push(this, [&](void* p) {
            if (prep(p)) ::new (p) T(std::forward<P>(params)...);
        });
    }

    template <typename T, typename F>
    bool pop(T& item, F&& out) {
        if (elems_ == nullptr) {
            return false;
        }
        return elems_->pop(this, &(this->cursor_), [&item](void* p) {
            ::new (&item) T(std::move(*static_cast<T*>(p)));
        }, std::forward<F>(out));
    }
};

} // namespace detail

template <typename T, typename Policy>
class queue final : public detail::queue_base<typename Policy::template elems_t<sizeof(T), alignof(T)>> {
    using base_t = detail::queue_base<typename Policy::template elems_t<sizeof(T), alignof(T)>>;

public:
    using value_t = T;

    using base_t::base_t;

    template <typename... P>
    bool push(P&&... params) {
        return base_t::template push<T>(std::forward<P>(params)...);
    }

    template <typename... P>
    bool force_push(P&&... params) {
        return base_t::template force_push<T>(std::forward<P>(params)...);
    }

    bool pop(T& item) {
        return base_t::pop(item, [](bool) {});
    }

    template <typename F>
    bool pop(T& item, F&& out) {
        return base_t::pop(item, std::forward<F>(out));
    }
};

} // namespace ipc
@@ -1,103 +0,0 @@
|
||||
|
||||
#include <string>
|
||||
#include <utility>
|
||||
|
||||
#include "libipc/shm.h"
|
||||
|
||||
#include "libipc/utility/pimpl.h"
|
||||
#include "libipc/memory/resource.h"
|
||||
|
||||
namespace ipc {
|
||||
namespace shm {
|
||||
|
||||
class handle::handle_ : public pimpl<handle_> {
|
||||
public:
|
||||
shm::id_t id_ = nullptr;
|
||||
void* m_ = nullptr;
|
||||
|
||||
ipc::string n_;
|
||||
std::size_t s_ = 0;
|
||||
};
|
||||
|
||||
handle::handle()
|
||||
: p_(p_->make()) {
|
||||
}
|
||||
|
||||
handle::handle(char const * name, std::size_t size, unsigned mode)
|
||||
: handle() {
|
||||
acquire(name, size, mode);
|
||||
}
|
||||
|
||||
handle::handle(handle&& rhs)
|
||||
: handle() {
|
||||
swap(rhs);
|
||||
}
|
||||
|
||||
handle::~handle() {
|
||||
release();
|
||||
p_->clear();
|
||||
}
|
||||
|
||||
void handle::swap(handle& rhs) {
|
||||
std::swap(p_, rhs.p_);
|
||||
}
|
||||
|
||||
handle& handle::operator=(handle rhs) {
|
||||
swap(rhs);
|
||||
return *this;
|
||||
}
|
||||
|
||||
bool handle::valid() const noexcept {
|
||||
return impl(p_)->m_ != nullptr;
|
||||
}
|
||||
|
||||
std::size_t handle::size() const noexcept {
|
||||
return impl(p_)->s_;
|
||||
}
|
||||
|
||||
char const * handle::name() const noexcept {
|
||||
return impl(p_)->n_.c_str();
|
||||
}
|
||||
|
||||
std::int32_t handle::ref() const noexcept {
|
||||
return shm::get_ref(impl(p_)->id_);
|
||||
}
|
||||
|
||||
void handle::sub_ref() noexcept {
|
||||
shm::sub_ref(impl(p_)->id_);
|
||||
}
|
||||
|
||||
bool handle::acquire(char const * name, std::size_t size, unsigned mode) {
|
||||
release();
|
||||
impl(p_)->id_ = shm::acquire((impl(p_)->n_ = name).c_str(), size, mode);
|
||||
impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_));
|
||||
return valid();
|
||||
}
|
||||
|
||||
std::int32_t handle::release() {
|
||||
if (impl(p_)->id_ == nullptr) return -1;
|
||||
return shm::release(detach());
|
||||
}
|
||||
|
||||
void* handle::get() const {
|
||||
return impl(p_)->m_;
|
||||
}
|
||||
|
||||
void handle::attach(id_t id) {
|
||||
if (id == nullptr) return;
|
||||
release();
|
||||
impl(p_)->id_ = id;
|
||||
impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_));
|
||||
}
|
||||
|
||||
id_t handle::detach() {
|
||||
auto old = impl(p_)->id_;
|
||||
impl(p_)->id_ = nullptr;
|
||||
impl(p_)->m_ = nullptr;
|
||||
impl(p_)->s_ = 0;
|
||||
impl(p_)->n_.clear();
|
||||
return old;
|
||||
}
|
||||
|
||||
} // namespace shm
|
||||
} // namespace ipc
|
||||
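A brief usage sketch of the handle above (illustrative, not part of the diff): acquire a named segment, write into it, and let RAII release it. The mode argument is assumed to have a default in libipc/shm.h, since this .cpp only shows the definition.

// Illustrative sketch (assumptions noted in the lead-in above).
#include <cstring>
#include "libipc/shm.h"

void shm_demo() {
    ipc::shm::handle h { "demo-shm", 4096 };
    if (h.valid()) {
        std::memcpy(h.get(), "hello", 6); // raw access to the mapped memory
    }
} // ~handle() calls release(), which detaches and frees the underlying id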
@@ -1,83 +0,0 @@
#pragma once

#include <utility>
#include <string>
#include <mutex>
#include <atomic>

#include "libipc/def.h"
#include "libipc/mutex.h"
#include "libipc/condition.h"
#include "libipc/platform/detail.h"

namespace ipc {
namespace detail {

class waiter {
    ipc::sync::condition cond_;
    ipc::sync::mutex     lock_;
    std::atomic<bool>    quit_ {false};

public:
    static void init();

    waiter() = default;
    waiter(char const *name) {
        open(name);
    }

    ~waiter() {
        close();
    }

    bool valid() const noexcept {
        return cond_.valid() && lock_.valid();
    }

    bool open(char const *name) noexcept {
        quit_.store(false, std::memory_order_relaxed);
        if (!cond_.open((std::string{"_waiter_cond_"} + name).c_str())) {
            return false;
        }
        if (!lock_.open((std::string{"_waiter_lock_"} + name).c_str())) {
            cond_.close();
            return false;
        }
        return valid();
    }

    void close() noexcept {
        cond_.close();
        lock_.close();
    }

    template <typename F>
    bool wait_if(F &&pred, std::uint64_t tm = ipc::invalid_value) noexcept {
        IPC_UNUSED_ std::lock_guard<ipc::sync::mutex> guard {lock_};
        while ([this, &pred] {
            return !quit_.load(std::memory_order_relaxed)
                && std::forward<F>(pred)();
        }()) {
            if (!cond_.wait(lock_, tm)) return false;
        }
        return true;
    }

    bool notify() noexcept {
        std::lock_guard<ipc::sync::mutex>{lock_}; // barrier
        return cond_.notify(lock_);
    }

    bool broadcast() noexcept {
        std::lock_guard<ipc::sync::mutex>{lock_}; // barrier
        return cond_.broadcast(lock_);
    }

    bool quit_waiting() {
        quit_.store(true, std::memory_order_release);
        return broadcast();
    }
};

} // namespace detail
} // namespace ipc
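A sketch of the wait/notify protocol this class implements (illustrative only; data_ready() is a hypothetical predicate over state shared between the two processes):

// Illustrative sketch (assumed helper, see lead-in above).
ipc::detail::waiter w { "demo-waiter" };

// Waiting side: sleeps while the predicate holds; every wakeup re-checks
// both the predicate and quit_, so quit_waiting() also releases it.
w.wait_if([&] { return !data_ready(); });

// Notifying side, after making data_ready() true:
w.broadcast();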
@@ -1,3 +0,0 @@
https://github.com/mutouyun/cpp-ipc

A high-performance inter-process communication library using shared memory on Linux/Windows.
File diff suppressed because it is too large
@@ -1,316 +0,0 @@
// jpgd.h - C++ class for JPEG decompression.
// Public domain, Rich Geldreich <richgel99@gmail.com>
#ifndef JPEG_DECODER_H
#define JPEG_DECODER_H

#include <stdlib.h>
#include <stdio.h>
#include <setjmp.h>

namespace jpgd
{
    typedef unsigned char  uint8;
    typedef signed short   int16;
    typedef unsigned short uint16;
    typedef unsigned int   uint;
    typedef signed int     int32;

    // Loads a JPEG image from a memory buffer or a file.
    // req_comps can be 1 (grayscale), 3 (RGB), or 4 (RGBA).
    // On return, width/height will be set to the image's dimensions, and actual_comps will be set to either 1 (grayscale) or 3 (RGB).
    // Notes: For more control over where and how the source data is read, see the decompress_jpeg_image_from_stream() function below, or call the jpeg_decoder class directly.
    // Requesting an 8 or 32bpp image is currently a little faster than 24bpp because the jpeg_decoder class itself currently always unpacks to either 8 or 32bpp.
    // BEGIN EPIC MOD
    //unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps);
    unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps, int format);
    // END EPIC MOD
    unsigned char *decompress_jpeg_image_from_file(const char *pSrc_filename, int *width, int *height, int *actual_comps, int req_comps);

    // Success/failure error codes.
    enum jpgd_status
    {
        JPGD_SUCCESS = 0, JPGD_FAILED = -1, JPGD_DONE = 1,
        JPGD_BAD_DHT_COUNTS = -256, JPGD_BAD_DHT_INDEX, JPGD_BAD_DHT_MARKER, JPGD_BAD_DQT_MARKER, JPGD_BAD_DQT_TABLE,
        JPGD_BAD_PRECISION, JPGD_BAD_HEIGHT, JPGD_BAD_WIDTH, JPGD_TOO_MANY_COMPONENTS,
        JPGD_BAD_SOF_LENGTH, JPGD_BAD_VARIABLE_MARKER, JPGD_BAD_DRI_LENGTH, JPGD_BAD_SOS_LENGTH,
        JPGD_BAD_SOS_COMP_ID, JPGD_W_EXTRA_BYTES_BEFORE_MARKER, JPGD_NO_ARITHMITIC_SUPPORT, JPGD_UNEXPECTED_MARKER,
        JPGD_NOT_JPEG, JPGD_UNSUPPORTED_MARKER, JPGD_BAD_DQT_LENGTH, JPGD_TOO_MANY_BLOCKS,
        JPGD_UNDEFINED_QUANT_TABLE, JPGD_UNDEFINED_HUFF_TABLE, JPGD_NOT_SINGLE_SCAN, JPGD_UNSUPPORTED_COLORSPACE,
        JPGD_UNSUPPORTED_SAMP_FACTORS, JPGD_DECODE_ERROR, JPGD_BAD_RESTART_MARKER, JPGD_ASSERTION_ERROR,
        JPGD_BAD_SOS_SPECTRAL, JPGD_BAD_SOS_SUCCESSIVE, JPGD_STREAM_READ, JPGD_NOTENOUGHMEM
    };

    // Input stream interface.
    // Derive from this class to read input data from sources other than files or memory. Set m_eof_flag to true when no more data is available.
    // The decoder is rather greedy: it will keep on calling this method until its internal input buffer is full, or until the EOF flag is set.
    // If the input stream contains data after the JPEG stream's EOI (end of image) marker it will probably be pulled into the internal buffer.
    // Call the get_total_bytes_read() method to determine the actual size of the JPEG stream after successful decoding.
    class jpeg_decoder_stream
    {
    public:
        jpeg_decoder_stream() { }
        virtual ~jpeg_decoder_stream() { }

        // The read() method is called when the internal input buffer is empty.
        // Parameters:
        // pBuf - input buffer
        // max_bytes_to_read - maximum bytes that can be written to pBuf
        // pEOF_flag - set this to true if at end of stream (no more bytes remaining)
        // Returns -1 on error, otherwise return the number of bytes actually written to the buffer (which may be 0).
        // Notes: This method will be called in a loop until you set *pEOF_flag to true or the internal buffer is full.
        virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag) = 0;
    };

    // stdio FILE stream class.
    class jpeg_decoder_file_stream : public jpeg_decoder_stream
    {
        jpeg_decoder_file_stream(const jpeg_decoder_file_stream &);
        jpeg_decoder_file_stream &operator =(const jpeg_decoder_file_stream &);

        FILE *m_pFile;
        bool m_eof_flag, m_error_flag;

    public:
        jpeg_decoder_file_stream();
        virtual ~jpeg_decoder_file_stream();

        bool open(const char *Pfilename);
        void close();

        virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag);
    };

    // Memory stream class.
    class jpeg_decoder_mem_stream : public jpeg_decoder_stream
    {
        const uint8 *m_pSrc_data;
        uint m_ofs, m_size;

    public:
        jpeg_decoder_mem_stream() : m_pSrc_data(NULL), m_ofs(0), m_size(0) { }
        jpeg_decoder_mem_stream(const uint8 *pSrc_data, uint size) : m_pSrc_data(pSrc_data), m_ofs(0), m_size(size) { }

        virtual ~jpeg_decoder_mem_stream() { }

        bool open(const uint8 *pSrc_data, uint size);
        void close() { m_pSrc_data = NULL; m_ofs = 0; m_size = 0; }

        virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag);
    };

    // Loads JPEG file from a jpeg_decoder_stream.
    unsigned char *decompress_jpeg_image_from_stream(jpeg_decoder_stream *pStream, int *width, int *height, int *actual_comps, int req_comps);

    enum
    {
        JPGD_IN_BUF_SIZE = 8192, JPGD_MAX_BLOCKS_PER_MCU = 10, JPGD_MAX_HUFF_TABLES = 8, JPGD_MAX_QUANT_TABLES = 4,
        JPGD_MAX_COMPONENTS = 4, JPGD_MAX_COMPS_IN_SCAN = 4, JPGD_MAX_BLOCKS_PER_ROW = 8192, JPGD_MAX_HEIGHT = 16384, JPGD_MAX_WIDTH = 16384
    };

    typedef int16 jpgd_quant_t;
    typedef int16 jpgd_block_t;

    class jpeg_decoder
    {
    public:
        // Call get_error_code() after constructing to determine if the stream is valid or not. You may call the get_width(), get_height(), etc.
        // methods after the constructor is called. You may then either destruct the object, or begin decoding the image by calling begin_decoding(), then decode() on each scanline.
        jpeg_decoder(jpeg_decoder_stream *pStream);

        ~jpeg_decoder();

        // Call this method after constructing the object to begin decompression.
        // If JPGD_SUCCESS is returned you may then call decode() on each scanline.
        int begin_decoding();

        // Returns the next scan line.
        // For grayscale images, pScan_line will point to a buffer containing 8-bit pixels (get_bytes_per_pixel() will return 1).
        // Otherwise, it will always point to a buffer containing 32-bit RGBA pixels (A will always be 255, and get_bytes_per_pixel() will return 4).
        // Returns JPGD_SUCCESS if a scan line has been returned.
        // Returns JPGD_DONE if all scan lines have been returned.
        // Returns JPGD_FAILED if an error occurred. Call get_error_code() for more info.
        int decode(const void** pScan_line, uint* pScan_line_len);

        inline jpgd_status get_error_code() const { return m_error_code; }

        inline int get_width() const { return m_image_x_size; }
        inline int get_height() const { return m_image_y_size; }

        inline int get_num_components() const { return m_comps_in_frame; }

        inline int get_bytes_per_pixel() const { return m_dest_bytes_per_pixel; }
        inline int get_bytes_per_scan_line() const { return m_image_x_size * get_bytes_per_pixel(); }

        // Returns the total number of bytes actually consumed by the decoder (which should equal the actual size of the JPEG file).
        inline int get_total_bytes_read() const { return m_total_bytes_read; }

    private:
        jpeg_decoder(const jpeg_decoder &);
        jpeg_decoder &operator =(const jpeg_decoder &);

        typedef void (*pDecode_block_func)(jpeg_decoder *, int, int, int);

        struct huff_tables
        {
            bool ac_table;
            uint  look_up[256];
            uint  look_up2[256];
            uint8 code_size[256];
            uint  tree[512];
        };

        struct coeff_buf
        {
            uint8 *pData;
            int block_num_x, block_num_y;
            int block_len_x, block_len_y;
            int block_size;
        };

        struct mem_block
        {
            mem_block *m_pNext;
            size_t m_used_count;
            size_t m_size;
            char m_data[1];
        };

        jmp_buf m_jmp_state;
        mem_block *m_pMem_blocks;
        int m_image_x_size;
        int m_image_y_size;
        jpeg_decoder_stream *m_pStream;
        int m_progressive_flag;
        uint8 m_huff_ac[JPGD_MAX_HUFF_TABLES];
        uint8* m_huff_num[JPGD_MAX_HUFF_TABLES];      // pointer to number of Huffman codes per bit size
        uint8* m_huff_val[JPGD_MAX_HUFF_TABLES];      // pointer to Huffman codes per bit size
        jpgd_quant_t* m_quant[JPGD_MAX_QUANT_TABLES]; // pointer to quantization tables
        int m_scan_type;                              // Gray, Yh1v1, Yh1v2, Yh2v1, Yh2v2 (CMYK111, CMYK4114 no longer supported)
        int m_comps_in_frame;                         // # of components in frame
        int m_comp_h_samp[JPGD_MAX_COMPONENTS];       // component's horizontal sampling factor
        int m_comp_v_samp[JPGD_MAX_COMPONENTS];       // component's vertical sampling factor
        int m_comp_quant[JPGD_MAX_COMPONENTS];        // component's quantization table selector
        int m_comp_ident[JPGD_MAX_COMPONENTS];        // component's ID
        int m_comp_h_blocks[JPGD_MAX_COMPONENTS];
        int m_comp_v_blocks[JPGD_MAX_COMPONENTS];
        int m_comps_in_scan;                          // # of components in scan
        int m_comp_list[JPGD_MAX_COMPS_IN_SCAN];      // components in this scan
        int m_comp_dc_tab[JPGD_MAX_COMPONENTS];       // component's DC Huffman coding table selector
        int m_comp_ac_tab[JPGD_MAX_COMPONENTS];       // component's AC Huffman coding table selector
        int m_spectral_start;                         // spectral selection start
        int m_spectral_end;                           // spectral selection end
        int m_successive_low;                         // successive approximation low
        int m_successive_high;                        // successive approximation high
        int m_max_mcu_x_size;                         // MCU's max. X size in pixels
        int m_max_mcu_y_size;                         // MCU's max. Y size in pixels
        int m_blocks_per_mcu;
        int m_max_blocks_per_row;
        int m_mcus_per_row, m_mcus_per_col;
        int m_mcu_org[JPGD_MAX_BLOCKS_PER_MCU];
        int m_total_lines_left;                       // total # lines left in image
        int m_mcu_lines_left;                         // total # lines left in this MCU
        int m_real_dest_bytes_per_scan_line;
        int m_dest_bytes_per_scan_line;               // rounded up
        int m_dest_bytes_per_pixel;                   // 4 (RGB) or 1 (Y)
        huff_tables* m_pHuff_tabs[JPGD_MAX_HUFF_TABLES];
        coeff_buf* m_dc_coeffs[JPGD_MAX_COMPONENTS];
        coeff_buf* m_ac_coeffs[JPGD_MAX_COMPONENTS];
        int m_eob_run;
        int m_block_y_mcu[JPGD_MAX_COMPONENTS];
        uint8* m_pIn_buf_ofs;
        int m_in_buf_left;
        int m_tem_flag;
        bool m_eof_flag;
        uint8 m_in_buf_pad_start[128];
        uint8 m_in_buf[JPGD_IN_BUF_SIZE + 128];
        uint8 m_in_buf_pad_end[128];
        int m_bits_left;
        uint m_bit_buf;
        int m_restart_interval;
        int m_restarts_left;
        int m_next_restart_num;
        int m_max_mcus_per_row;
        int m_max_blocks_per_mcu;
        int m_expanded_blocks_per_mcu;
        int m_expanded_blocks_per_row;
        int m_expanded_blocks_per_component;
        bool m_freq_domain_chroma_upsample;
        int m_max_mcus_per_col;
        uint m_last_dc_val[JPGD_MAX_COMPONENTS];
        jpgd_block_t* m_pMCU_coefficients;
        int m_mcu_block_max_zag[JPGD_MAX_BLOCKS_PER_MCU];
        uint8* m_pSample_buf;
        int m_crr[256];
        int m_cbb[256];
        int m_crg[256];
        int m_cbg[256];
        uint8* m_pScan_line_0;
        uint8* m_pScan_line_1;
        jpgd_status m_error_code;
        bool m_ready_flag;
        int m_total_bytes_read;

        void free_all_blocks();
        // BEGIN EPIC MOD
        UE_NORETURN void stop_decoding(jpgd_status status);
        // END EPIC MOD
        void *alloc(size_t n, bool zero = false);
        void word_clear(void *p, uint16 c, uint n);
        void prep_in_buffer();
        void read_dht_marker();
        void read_dqt_marker();
        void read_sof_marker();
        void skip_variable_marker();
        void read_dri_marker();
        void read_sos_marker();
        int next_marker();
        int process_markers();
        void locate_soi_marker();
        void locate_sof_marker();
        int locate_sos_marker();
        void init(jpeg_decoder_stream * pStream);
        void create_look_ups();
        void fix_in_buffer();
        void transform_mcu(int mcu_row);
        void transform_mcu_expand(int mcu_row);
        coeff_buf* coeff_buf_open(int block_num_x, int block_num_y, int block_len_x, int block_len_y);
        inline jpgd_block_t *coeff_buf_getp(coeff_buf *cb, int block_x, int block_y);
        void load_next_row();
        void decode_next_row();
        void make_huff_table(int index, huff_tables *pH);
        void check_quant_tables();
        void check_huff_tables();
        void calc_mcu_block_order();
        int init_scan();
        void init_frame();
        void process_restart();
        void decode_scan(pDecode_block_func decode_block_func);
        void init_progressive();
        void init_sequential();
        void decode_start();
        void decode_init(jpeg_decoder_stream * pStream);
        void H2V2Convert();
        void H2V1Convert();
        void H1V2Convert();
        void H1V1Convert();
        void gray_convert();
        void expanded_convert();
        void find_eoi();
        inline uint get_char();
        inline uint get_char(bool *pPadding_flag);
        inline void stuff_char(uint8 q);
        inline uint8 get_octet();
        inline uint get_bits(int num_bits);
        inline uint get_bits_no_markers(int numbits);
        inline int huff_decode(huff_tables *pH);
        inline int huff_decode(huff_tables *pH, int& extrabits);
        static inline uint8 clamp(int i);
        static void decode_block_dc_first(jpeg_decoder *pD, int component_id, int block_x, int block_y);
        static void decode_block_dc_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y);
        static void decode_block_ac_first(jpeg_decoder *pD, int component_id, int block_x, int block_y);
        static void decode_block_ac_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y);
    };

} // namespace jpgd

#endif // JPEG_DECODER_H
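A usage sketch of the one-call API declared above (illustrative, not part of the diff). The returned buffer is assumed to be heap-allocated by the decoder and released with free(), as in the stock jpgd distribution; verify against the jpgd.cpp in this repository before relying on that.

// Illustrative sketch (assumptions noted in the lead-in above).
#include <cstdlib>
#include "jpgd.h"

void load_demo() {
    int width = 0, height = 0, actual_comps = 0;
    unsigned char *pixels = jpgd::decompress_jpeg_image_from_file(
        "photo.jpg", &width, &height, &actual_comps, 4 /* request RGBA */);
    if (pixels) {
        // width * height 32bpp pixels; actual_comps reports the source format
        free(pixels);
    }
}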
File diff suppressed because it is too large
@@ -1,172 +0,0 @@

// jpge.h - C++ class for JPEG compression.
// Public domain, Rich Geldreich <richgel99@gmail.com>
// Alex Evans: Added RGBA support, linear memory allocator.
#ifndef JPEG_ENCODER_H
#define JPEG_ENCODER_H

#include <stdint.h>

namespace jpge
{
    typedef unsigned char  uint8;
    typedef signed short   int16;
    typedef signed int     int32;
    typedef unsigned short uint16;
    typedef unsigned int   uint32;
    typedef unsigned int   uint;

    // JPEG chroma subsampling factors. Y_ONLY (grayscale images) and H2V2 (color images) are the most common.
    enum subsampling_t { Y_ONLY = 0, H1V1 = 1, H2V1 = 2, H2V2 = 3 };

    // JPEG compression parameters structure.
    struct params
    {
        inline params() : m_quality(85), m_subsampling(H2V2), m_no_chroma_discrim_flag(false), m_two_pass_flag(false) { }

        inline bool check_valid() const
        {
            if ((m_quality < 1) || (m_quality > 100)) return false;
            if ((uint)m_subsampling > (uint)H2V2) return false;
            return true;
        }

        // Quality: 1-100, higher is better. Typical values are around 50-95.
        int m_quality;

        // m_subsampling:
        // 0 = Y (grayscale) only
        // 1 = YCbCr, no subsampling (H1V1, YCbCr 1x1x1, 3 blocks per MCU)
        // 2 = YCbCr, H2V1 subsampling (YCbCr 2x1x1, 4 blocks per MCU)
        // 3 = YCbCr, H2V2 subsampling (YCbCr 4x1x1, 6 blocks per MCU -- very common)
        subsampling_t m_subsampling;

        // Disables CbCr discrimination - only intended for testing.
        // If true, the Y quantization table is also used for the CbCr channels.
        bool m_no_chroma_discrim_flag;

        bool m_two_pass_flag;
    };

    // Writes JPEG image to a file.
    // num_channels must be 1 (Y) or 3 (RGB), image pitch must be width*num_channels.
    bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());

    // Writes JPEG image to memory buffer.
    // On entry, buf_size is the size of the output buffer pointed at by pBuf, which should be at least ~1024 bytes.
    // If return value is true, buf_size will be set to the size of the compressed data.
    bool compress_image_to_jpeg_file_in_memory(void *pBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());

    // Output stream abstract class - used by the jpeg_encoder class to write to the output stream.
    // put_buf() is generally called with len==JPGE_OUT_BUF_SIZE bytes, but for headers it'll be called with smaller amounts.
    class output_stream
    {
    public:
        virtual ~output_stream() { }
        virtual bool put_buf(const void* Pbuf, int64_t len) = 0;
        template<class T> inline bool put_obj(const T& obj) { return put_buf(&obj, sizeof(T)); }
    };

    // Lower level jpeg_encoder class - useful if more control is needed than the above helper functions.
    class jpeg_encoder
    {
    public:
        jpeg_encoder();
        ~jpeg_encoder();

        // Initializes the compressor.
        // pStream: The stream object to use for writing compressed data.
        // params - Compression parameters structure, defined above.
        // width, height - Image dimensions.
        // channels - May be 1 or 3. 1 indicates grayscale, 3 indicates RGB source data.
        // Returns false on out of memory or if a stream write fails.
        bool init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params = params());

        const params &get_params() const { return m_params; }

        // Deinitializes the compressor, freeing any allocated memory. May be called at any time.
        void deinit();

        uint get_total_passes() const { return m_params.m_two_pass_flag ? 2 : 1; }
        inline uint get_cur_pass() { return m_pass_num; }

        // Call this method with each source scanline.
        // width * src_channels bytes per scanline is expected (RGB or Y format).
        // You must call with NULL after all scanlines are processed to finish compression.
        // Returns false on out of memory or if a stream write fails.
        bool process_scanline(const void* pScanline);

    private:
        jpeg_encoder(const jpeg_encoder &);
        jpeg_encoder &operator =(const jpeg_encoder &);

        typedef int32 sample_array_t;

        output_stream *m_pStream;
        params m_params;
        uint8 m_num_components;
        uint8 m_comp_h_samp[3], m_comp_v_samp[3];
        int m_image_x, m_image_y, m_image_bpp, m_image_bpl;
        int m_image_x_mcu, m_image_y_mcu;
        int m_image_bpl_xlt, m_image_bpl_mcu;
        int m_mcus_per_row;
        int m_mcu_x, m_mcu_y;
        uint8 *m_mcu_lines[16];
        uint8 m_mcu_y_ofs;
        sample_array_t m_sample_array[64];
        int16 m_coefficient_array[64];
        int32 m_quantization_tables[2][64];
        uint m_huff_codes[4][256];
        uint8 m_huff_code_sizes[4][256];
        uint8 m_huff_bits[4][17];
        uint8 m_huff_val[4][256];
        uint32 m_huff_count[4][256];
        int m_last_dc_val[3];
        enum { JPGE_OUT_BUF_SIZE = 2048 };
        uint8 m_out_buf[JPGE_OUT_BUF_SIZE];
        uint8 *m_pOut_buf;
        uint m_out_buf_left;
        uint32 m_bit_buffer;
        uint m_bits_in;
        uint8 m_pass_num;
        bool m_all_stream_writes_succeeded;

        void optimize_huffman_table(int table_num, int table_len);
        void emit_byte(uint8 i);
        void emit_word(uint i);
        void emit_marker(int marker);
        void emit_jfif_app0();
        void emit_dqt();
        void emit_sof();
        void emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag);
        void emit_dhts();
        void emit_sos();
        void emit_markers();
        void compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val);
        void compute_quant_table(int32 *dst, int16 *src);
        void adjust_quant_table(int32 *dst, int32 *src);
        void first_pass_init();
        bool second_pass_init();
        bool jpg_open(int p_x_res, int p_y_res, int src_channels);
        void load_block_8_8_grey(int x);
        void load_block_8_8(int x, int y, int c);
        void load_block_16_8(int x, int c);
        void load_block_16_8_8(int x, int c);
        void load_quantized_coefficients(int component_num);
        void flush_output_buffer();
        void put_bits(uint bits, uint len);
        void code_coefficients_pass_one(int component_num);
        void code_coefficients_pass_two(int component_num);
        void code_block(int component_num);
        void process_mcu_row();
        bool terminate_pass_one();
        bool terminate_pass_two();
        bool process_end_of_image();
        void load_mcu(const void* src);
        void clear();
        void init();
    };

} // namespace jpge

#endif // JPEG_ENCODER_H
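A usage sketch of the file-writing helper declared above (illustrative, not part of the diff; the caller is assumed to supply an RGB buffer of width*height*3 bytes):

// Illustrative sketch (assumptions noted in the lead-in above).
#include "jpge.h"

bool save_rgb_as_jpeg(const char *path, int w, int h, const jpge::uint8 *rgb) {
    jpge::params p;
    p.m_quality     = 90;          // 1-100; check_valid() enforces the range
    p.m_subsampling = jpge::H2V2;  // the common choice for color images
    // Pitch must be w * num_channels, per the comment on the declaration.
    return jpge::compress_image_to_jpeg_file(path, w, h, 3 /* RGB */, rgb, p);
}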
@@ -1,3 +0,0 @@
jpge.h - C++ class for JPEG compression.
Public domain, Rich Geldreich <richgel99@gmail.com>
Alex Evans: Added RGBA support, linear memory allocator.
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -1,433 +0,0 @@
#pragma once

#include <atomic>
#include <utility>
#include <cstring>
#include <type_traits>
#include <cstdint>

#include "libipc/def.h"

#include "libipc/platform/detail.h"
#include "libipc/circ/elem_def.h"
#include "libipc/utility/log.h"
#include "libipc/utility/utility.h"

namespace ipc {

////////////////////////////////////////////////////////////////
/// producer-consumer implementation
////////////////////////////////////////////////////////////////

template <typename Flag>
struct prod_cons_impl;

template <>
struct prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {

    template <std::size_t DataSize, std::size_t AlignSize>
    struct elem_t {
        std::aligned_storage_t<DataSize, AlignSize> data_ {};
    };

    alignas(cache_line_size) std::atomic<circ::u2_t> rd_; // read index
    alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index

    constexpr circ::u2_t cursor() const noexcept {
        return 0;
    }

    template <typename W, typename F, typename E>
    bool push(W* /*wrapper*/, F&& f, E* elems) {
        auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed));
        if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) {
            return false; // full
        }
        std::forward<F>(f)(&(elems[cur_wt].data_));
        wt_.fetch_add(1, std::memory_order_release);
        return true;
    }

    /**
     * In single-single-unicast, 'force_push' means 'no reader' or 'the only reader is dead'.
     * So we can simply disconnect all connections of the receiver and return false.
     */
    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&&, E*) {
        wrapper->elems()->disconnect_receiver(~static_cast<circ::cc_t>(0u));
        return false;
    }

    template <typename W, typename F, typename R, typename E>
    bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
        auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed));
        if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) {
            return false; // empty
        }
        std::forward<F>(f)(&(elems[cur_rd].data_));
        std::forward<R>(out)(true);
        rd_.fetch_add(1, std::memory_order_release);
        return true;
    }
};
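Taken together, push/pop above implement the classic single-producer/single-consumer ring discipline: free-running unsigned indices, index_of() masking them into the ring, acquire/release pairing on the opposite index, and one slot sacrificed to distinguish full from empty. A self-contained sketch of the same discipline (illustrative names, not part of libipc):

// Minimal stand-alone SPSC ring mirroring the logic above (assumed names).
#include <atomic>
#include <cstddef>

template <typename T, std::size_t N>
class spsc_ring {
    static_assert((N & (N - 1)) == 0, "N must be a power of two");
    static std::size_t index_of(std::size_t i) noexcept { return i & (N - 1); }

    T buf_[N];
    alignas(64) std::atomic<std::size_t> rd_ {0}; // read index, grows forever
    alignas(64) std::atomic<std::size_t> wt_ {0}; // write index, grows forever

public:
    bool push(T const &v) {
        auto w = wt_.load(std::memory_order_relaxed);
        if (index_of(w) == index_of(rd_.load(std::memory_order_acquire) - 1))
            return false;                          // full: one slot kept empty
        buf_[index_of(w)] = v;
        wt_.store(w + 1, std::memory_order_release); // publish the element
        return true;
    }

    bool pop(T &v) {
        auto r = rd_.load(std::memory_order_relaxed);
        if (index_of(r) == index_of(wt_.load(std::memory_order_acquire)))
            return false;                          // empty
        v = buf_[index_of(r)];
        rd_.store(r + 1, std::memory_order_release); // free the slot
        return true;
    }
};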

template <>
struct prod_cons_impl<wr<relat::single, relat::multi , trans::unicast>>
     : prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {

    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&&, E*) {
        wrapper->elems()->disconnect_receiver(1);
        return false;
    }

    template <typename W, typename F, typename R,
              template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
    bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
        byte_t buff[DS];
        for (unsigned k = 0;;) {
            auto cur_rd = rd_.load(std::memory_order_relaxed);
            if (circ::index_of(cur_rd) ==
                circ::index_of(wt_.load(std::memory_order_acquire))) {
                return false; // empty
            }
            std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
            if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
                std::forward<F>(f)(buff);
                std::forward<R>(out)(true);
                return true;
            }
            ipc::yield(k);
        }
    }
};

template <>
struct prod_cons_impl<wr<relat::multi , relat::multi, trans::unicast>>
     : prod_cons_impl<wr<relat::single, relat::multi, trans::unicast>> {

    using flag_t = std::uint64_t;

    template <std::size_t DataSize, std::size_t AlignSize>
    struct elem_t {
        std::aligned_storage_t<DataSize, AlignSize> data_ {};
        std::atomic<flag_t> f_ct_ { 0 }; // commit flag
    };

    alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index

    template <typename W, typename F, typename E>
    bool push(W* /*wrapper*/, F&& f, E* elems) {
        circ::u2_t cur_ct, nxt_ct;
        for (unsigned k = 0;;) {
            cur_ct = ct_.load(std::memory_order_relaxed);
            if (circ::index_of(nxt_ct = cur_ct + 1) ==
                circ::index_of(rd_.load(std::memory_order_acquire))) {
                return false; // full
            }
            if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) {
                break;
            }
            ipc::yield(k);
        }
        auto* el = elems + circ::index_of(cur_ct);
        std::forward<F>(f)(&(el->data_));
        // set flag & try update wt
        el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
        while (1) {
            auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
            if (cur_ct != wt_.load(std::memory_order_relaxed)) {
                return true;
            }
            if ((~cac_ct) != cur_ct) {
                return true;
            }
            if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) {
                return true;
            }
            wt_.store(nxt_ct, std::memory_order_release);
            cur_ct = nxt_ct;
            nxt_ct = cur_ct + 1;
            el = elems + circ::index_of(cur_ct);
        }
        return true;
    }

    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&&, E*) {
        wrapper->elems()->disconnect_receiver(1);
        return false;
    }

    template <typename W, typename F, typename R,
              template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
    bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
        byte_t buff[DS];
        for (unsigned k = 0;;) {
            auto cur_rd = rd_.load(std::memory_order_relaxed);
            auto cur_wt = wt_.load(std::memory_order_acquire);
            auto id_rd  = circ::index_of(cur_rd);
            auto id_wt  = circ::index_of(cur_wt);
            if (id_rd == id_wt) {
                auto* el = elems + id_wt;
                auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
                if ((~cac_ct) != cur_wt) {
                    return false; // empty
                }
                if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) {
                    wt_.store(cur_wt + 1, std::memory_order_release);
                }
                k = 0;
            }
            else {
                std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
                if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
                    std::forward<F>(f)(buff);
                    std::forward<R>(out)(true);
                    return true;
                }
                ipc::yield(k);
            }
        }
    }
};

template <>
struct prod_cons_impl<wr<relat::single, relat::multi, trans::broadcast>> {

    using rc_t = std::uint64_t;

    enum : rc_t {
        ep_mask = 0x00000000ffffffffull,
        ep_incr = 0x0000000100000000ull
    };

    template <std::size_t DataSize, std::size_t AlignSize>
    struct elem_t {
        std::aligned_storage_t<DataSize, AlignSize> data_ {};
        std::atomic<rc_t> rc_ { 0 }; // read-counter
    };

    alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index
    alignas(cache_line_size) rc_t epoch_ { 0 };           // only one writer

    circ::u2_t cursor() const noexcept {
        return wt_.load(std::memory_order_acquire);
    }

    template <typename W, typename F, typename E>
    bool push(W* wrapper, F&& f, E* elems) {
        E* el;
        for (unsigned k = 0;;) {
            circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
            if (cc == 0) return false; // no reader
            el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
            // check all consumers have finished reading this element
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            circ::cc_t rem_cc = cur_rc & ep_mask;
            if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) {
                return false; // has not finished yet
            }
            // consider rem_cc to be 0 here
            if (el->rc_.compare_exchange_weak(
                        cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
                break;
            }
            ipc::yield(k);
        }
        std::forward<F>(f)(&(el->data_));
        wt_.fetch_add(1, std::memory_order_release);
        return true;
    }

    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&& f, E* elems) {
        E* el;
        epoch_ += ep_incr;
        for (unsigned k = 0;;) {
            circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
            if (cc == 0) return false; // no reader
            el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
            // check all consumers have finished reading this element
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            circ::cc_t rem_cc = cur_rc & ep_mask;
            if (cc & rem_cc) {
                ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
                cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
                if (cc == 0) return false; // no reader
            }
            // just compare & exchange
            if (el->rc_.compare_exchange_weak(
                        cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
                break;
            }
            ipc::yield(k);
        }
        std::forward<F>(f)(&(el->data_));
        wt_.fetch_add(1, std::memory_order_release);
        return true;
    }

    template <typename W, typename F, typename R, typename E>
    bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) {
        if (cur == cursor()) return false; // acquire
        auto* el = elems + circ::index_of(cur++);
        std::forward<F>(f)(&(el->data_));
        for (unsigned k = 0;;) {
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            if ((cur_rc & ep_mask) == 0) {
                std::forward<R>(out)(true);
                return true;
            }
            auto nxt_rc = cur_rc & ~static_cast<rc_t>(wrapper->connected_id());
            if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
                std::forward<R>(out)((nxt_rc & ep_mask) == 0);
                return true;
            }
            ipc::yield(k);
        }
    }
};
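Before the multi-multi broadcast variant below, it is worth spelling out the bit layout the broadcast implementations rely on. In the single-multi case above, rc_ packs a pending-reader bitmask into its low 32 bits and the writer's epoch into the high bits; each reader clears its own bit when it consumes the slot, and a force_push bumps the epoch to invalidate counters left behind by dead readers. A small illustrative program (assumed layout, mirroring ep_mask/ep_incr above):

// Illustrative only; mirrors the masks above, not library code.
#include <cstdint>
#include <cassert>

using rc_t = std::uint64_t;
constexpr rc_t ep_mask = 0x00000000ffffffffull;
constexpr rc_t ep_incr = 0x0000000100000000ull;

int main() {
    rc_t epoch   = 0;
    rc_t readers = 0b0101;                 // readers #0 and #2 are connected
    rc_t rc = epoch | readers;             // writer publishes the slot

    rc &= ~static_cast<rc_t>(0b0001);      // reader #0 consumes: clears its bit
    assert((rc & ep_mask) == 0b0100);      // reader #2 still pending

    epoch += ep_incr;                      // a force_push bumps the epoch,
    assert((rc & ~ep_mask) != epoch);      // invalidating the stale counter
}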

template <>
struct prod_cons_impl<wr<relat::multi, relat::multi, trans::broadcast>> {

    using rc_t   = std::uint64_t;
    using flag_t = std::uint64_t;

    enum : rc_t {
        rc_mask = 0x00000000ffffffffull,
        ep_mask = 0x00ffffffffffffffull,
        ep_incr = 0x0100000000000000ull,
        ic_mask = 0xff000000ffffffffull,
        ic_incr = 0x0000000100000000ull
    };

    template <std::size_t DataSize, std::size_t AlignSize>
    struct elem_t {
        std::aligned_storage_t<DataSize, AlignSize> data_ {};
        std::atomic<rc_t  > rc_   { 0 }; // read-counter
        std::atomic<flag_t> f_ct_ { 0 }; // commit flag
    };

    alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index
    alignas(cache_line_size) std::atomic<rc_t> epoch_ { 0 };

    circ::u2_t cursor() const noexcept {
        return ct_.load(std::memory_order_acquire);
    }

    constexpr static rc_t inc_rc(rc_t rc) noexcept {
        return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask);
    }

    constexpr static rc_t inc_mask(rc_t rc) noexcept {
        return inc_rc(rc) & ~rc_mask;
    }

    template <typename W, typename F, typename E>
    bool push(W* wrapper, F&& f, E* elems) {
        E* el;
        circ::u2_t cur_ct;
        rc_t epoch = epoch_.load(std::memory_order_acquire);
        for (unsigned k = 0;;) {
            circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
            if (cc == 0) return false; // no reader
            el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
            // check all consumers have finished reading this element
            auto cur_rc = el->rc_.load(std::memory_order_relaxed);
            circ::cc_t rem_cc = cur_rc & rc_mask;
            if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) {
                return false; // has not finished yet
            }
            else if (!rem_cc) {
                auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
                if ((cur_fl != cur_ct) && cur_fl) {
                    return false; // full
                }
            }
            // consider rem_cc to be 0 here
            if (el->rc_.compare_exchange_weak(
                        cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed) &&
                epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) {
                break;
            }
            ipc::yield(k);
        }
        // only one thread/process would touch here at one time
        ct_.store(cur_ct + 1, std::memory_order_release);
        std::forward<F>(f)(&(el->data_));
        // set flag & try update wt
        el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
        return true;
    }

    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&& f, E* elems) {
        E* el;
        circ::u2_t cur_ct;
        rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
        for (unsigned k = 0;;) {
            circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
            if (cc == 0) return false; // no reader
            el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
            // check all consumers have finished reading this element
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            circ::cc_t rem_cc = cur_rc & rc_mask;
            if (cc & rem_cc) {
                ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
                cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
                if (cc == 0) return false; // no reader
            }
            // just compare & exchange
            if (el->rc_.compare_exchange_weak(
                        cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed)) {
                if (epoch == epoch_.load(std::memory_order_acquire)) {
                    break;
                }
                else if (push(wrapper, std::forward<F>(f), elems)) {
                    return true;
                }
                epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
            }
            ipc::yield(k);
        }
        // only one thread/process would touch here at one time
        ct_.store(cur_ct + 1, std::memory_order_release);
        std::forward<F>(f)(&(el->data_));
        // set flag & try update wt
        el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
        return true;
    }

    template <typename W, typename F, typename R, typename E, std::size_t N>
    bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) {
        auto* el = elems + circ::index_of(cur);
        auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
        if (cur_fl != ~static_cast<flag_t>(cur)) {
            return false; // empty
        }
        ++cur;
        std::forward<F>(f)(&(el->data_));
        for (unsigned k = 0;;) {
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            if ((cur_rc & rc_mask) == 0) {
                std::forward<R>(out)(true);
                el->f_ct_.store(cur + N - 1, std::memory_order_release);
                return true;
            }
            auto nxt_rc = inc_rc(cur_rc) & ~static_cast<rc_t>(wrapper->connected_id());
            bool last_one = false;
            if ((last_one = (nxt_rc & rc_mask) == 0)) {
                el->f_ct_.store(cur + N - 1, std::memory_order_release);
            }
            if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
                std::forward<R>(out)(last_one);
                return true;
            }
            ipc::yield(k);
        }
    }
};

} // namespace ipc
@@ -1,58 +0,0 @@
The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU \citep{extendedngpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as a basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows with the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions \citep{hochreiter2001gradient}. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section~\ref{sec:attention}.

Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations \citep{cheng2016long, decomposableAttnModel, paulus2017deep, lin2017structured}.

End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks \citep{sukhbaatar2015}.

To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution.
In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as \citep{neural_gpu, NalBytenet2017} and \citep{JonasFaceNet2017}.


%\citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), English-to-French (EnFr) and English-to-Romanian language pairs.

%For example, in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at low computation cost, making it an essential ingredient in competitive recurrent models for machine translation.

%A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, we show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture.

%After the seminal models introduced in \citep{sutskever14, bahdanau2014neural, cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation (MT) and language modeling with recurrent encoder-decoder and recurrent language models. Recent effort \citep{shazeer2017outrageously} has successfully combined the power of conditional computation with sequence models to train very large models for MT, pushing SOTA at lower computational cost.

%Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_{t-1}$. This dependence on the previous hidden state precludes processing all timesteps at once, instead requiring long sequences of sequential operations. In practice, this results in greatly reduced computational efficiency, as on modern computing hardware, a single operation on a large batch is much faster than a large number of operations on small batches. The problem gets worse at longer sequence lengths. Although sequential computation is not a severe bottleneck at inference time, as autoregressively generating each output requires all previous outputs, the inability to compute scores at all output positions at once hinders us from rapidly training our models over large datasets. Although impressive work such as \citep{Kuchaiev2017Factorization} is able to significantly accelerate the training of LSTMs with factorization tricks, we are still bound by the linear dependence on sequence length.

%If the model could compute hidden states at each time step using only the inputs and outputs, it would be liberated from the dependence on results from previous time steps during training. This line of thought is the foundation of recent efforts such as the Markovian neural GPU \citep{neural_gpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as a building block to compute hidden representations simultaneously for all timesteps, resulting in $O(1)$ sequential time complexity. \citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), English-to-French (EnFr) and English-to-Romanian language pairs.

%A crucial component for accurate sequence prediction is modeling cross-positional communication. For example, in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at a low computation cost, also $O(1)$ sequential time complexity, making it an essential ingredient in recurrent encoder-decoder architectures for MT. A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, we show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture.



%Note: Facebook model is no better than RNNs in this regard, since it requires a number of layers proportional to the distance you want to communicate. Bytenet is more promising, since it requires a logarithmic number of layers (does bytenet have SOTA results)?

%Note: An attention layer can connect a very large number of positions at a low computation cost in O(1) sequential operations. This is why encoder-decoder attention has been so successful in seq-to-seq models so far. It is only natural, then, to also use attention to connect the timesteps of the same sequence.

%Note: I wouldn't say that long sequences are not a problem during inference. It would be great if we could infer with no long sequences. We could just say later on that, while our training graph is constant-depth, our model still requires sequential operations in the decoder part during inference due to the autoregressive nature of the model.

%\begin{table}[h!]
%\caption{Attention models are quite efficient for cross-positional communications when sequence length is smaller than channel depth. $n$ represents the sequence length and $d$ represents the channel depth.}
%\label{tab:op_complexities}
%\begin{center}
%\vspace{-5pt}
%\scalebox{0.75}{

%\begin{tabular}{l|c|c|c}
%\hline \hline
%Layer Type & Receptive & Complexity & Sequential \\
% & Field & & Operations \\
%\hline
%Pointwise Feed-Forward & $1$ & $O(n \cdot d^2)$ & $O(1)$ \\
%\hline
%Recurrent & $n$ & $O(n \cdot d^2)$ & $O(n)$ \\
%\hline
%Convolutional & $r$ & $O(r \cdot n \cdot d^2)$ & $O(1)$ \\
%\hline
%Convolutional (separable) & $r$ & $O(r \cdot n \cdot d + n \cdot d^2)$ & $O(1)$ \\
%\hline
%Attention & $r$ & $O(r \cdot n \cdot d)$ & $O(1)$ \\
%\hline \hline
%\end{tabular}
%}
%\end{center}
%\end{table}
@@ -1,18 +0,0 @@
Recurrent neural networks, long short-term memory \citep{hochreiter1997} and gated recurrent \citep{gruEval14} neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation \citep{sutskever14, bahdanau2014neural, cho2014learning}. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures \citep{wu2016google,luong2015effective,jozefowicz2016exploring}.

Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_{t-1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples.
%\marginpar{not sure if the memory constraints are understandable here}
Recent work has achieved significant improvements in computational efficiency through factorization tricks \citep{Kuchaiev2017Factorization} and conditional computation \citep{shazeer2017outrageously}, while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains.

%\marginpar{@all: there is work on analyzing what attention really does in seq2seq models, couldn't find it right away}

Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences \citep{bahdanau2014neural, structuredAttentionNetworks}. In all but a few cases \citep{decomposableAttnModel}, however, such attention mechanisms are used in conjunction with a recurrent network.

%\marginpar{not sure if "cross-positional communication" is understandable without explanation}
%\marginpar{insert exact training times and stats for the model that reaches sota earliest, maybe even a single GPU model?}

In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.
%\marginpar{you removed the constant number of repetitions part. I wrote it because I wanted to make it clear that the model does not only perform attention once, while it's also not recurrent. I thought that might be important to get across early.}

% Just a standard paragraph with citations, rewrite.
%After the seminal papers of \citep{sutskever14}, \citep{bahdanau2014neural}, and \citep{cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation and language modeling with recurrent sequence models. Recent effort \citep{shazeer2017outrageously} has combined the power of conditional computation with sequence models to train very large models for machine translation, pushing SOTA at lower computational cost. Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_{t-1}$. This dependence on the previous hidden state encumbers recurrent models to process multiple inputs at once, and their time complexity is a linear function of the length of the input and output, both during training and inference. [What I want to say here is that although this is fine during decoding, at training time, we are given both input and output and this linear nature does not allow the RNN to process all inputs and outputs simultaneously and haven't been used on datasets that are of the scale of the web. What's the largest dataset we have? Talk about Nvidia and possibly others' efforts to speed up things, and possibly other efforts that alleviate this, but are still limited by its computational nature]. Rest of the intro: What if you could construct the state based on the actual inputs and outputs, then you could construct them all at once. This has been the foundation of many promising recent efforts, bytenet, facenet (Also talk about quasi rnn here). Now we talk about attention!! Along with cell architectures such as long short-term memory (LSTM) \citep{hochreiter1997}, and gated recurrent units (GRUs) \citep{cho2014learning}, attention has emerged as an essential ingredient in successful sequence models, in particular for machine translation. In recent years, many, if not all, state-of-the-art (SOTA) results in machine translation have been achieved with attention-based sequence models \citep{wu2016google,luong2015effective,jozefowicz2016exploring}. Talk about the neon work on how it played with attention to do self attention! Then talk about what we do.
@@ -1,155 +0,0 @@
|
||||
|
||||
\begin{figure}
|
||||
\centering
|
||||
\includegraphics[scale=0.6]{Figures/ModalNet-21}
|
||||
\caption{The Transformer - model architecture.}
|
||||
\label{fig:model-arch}
|
||||
\end{figure}
|
||||
|
||||
% Although the primary workhorse of our model is attention,
%Our model maintains the encoder-decoder structure that is common to many so-called sequence-to-sequence models \citep{bahdanau2014neural,sutskever14}. As in all such architectures, the encoder computes a representation of the input sequence, and the decoder consumes these representations along with the output tokens to autoregressively produce the output sequence. Where, traditionally, the encoder and decoder contain stacks of recurrent or convolutional layers, our encoder and decoder stacks are composed of attention layers and position-wise feed-forward layers (Figure~\ref{fig:model-arch}). The following sections describe the gross architecture and these particular components in detail.
Most competitive neural sequence transduction models have an encoder-decoder structure \citep{cho2014learning,bahdanau2014neural,sutskever14}. Here, the encoder maps an input sequence of symbol representations $(x_1, ..., x_n)$ to a sequence of continuous representations $\mathbf{z} = (z_1, ..., z_n)$. Given $\mathbf{z}$, the decoder then generates an output sequence $(y_1,...,y_m)$ of symbols one element at a time. At each step the model is auto-regressive \citep{graves2013generating}, consuming the previously generated symbols as additional input when generating the next.
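As an illustration of this auto-regressive factorization, here is a minimal Python sketch of a greedy decoding loop; \texttt{encode} and \texttt{decode\_step} are hypothetical stand-ins for the encoder and a single decoder forward pass, not functions defined in this paper.

\begin{verbatim}
# Minimal sketch; `encode` and `decode_step` are hypothetical stand-ins.
def greedy_decode(encode, decode_step, src_ids, bos_id, eos_id, max_len=100):
    z = encode(src_ids)              # continuous representations (z_1,...,z_n)
    ys = [bos_id]
    for _ in range(max_len):
        probs = decode_step(z, ys)   # distribution over the next symbol
        next_id = max(range(len(probs)), key=probs.__getitem__)
        ys.append(next_id)           # consume the generated symbol as input
        if next_id == eos_id:
            break
    return ys[1:]
\end{verbatim}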
The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure~\ref{fig:model-arch}, respectively.

\subsection{Encoder and Decoder Stacks}
\paragraph{Encoder:}The encoder is composed of a stack of $N=6$ identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection \citep{he2016deep} around each of the two sub-layers, followed by layer normalization \citep{layernorm2016}. That is, the output of each sub-layer is $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $\dmodel=512$.
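For concreteness, a minimal NumPy sketch of this sub-layer wrapper (\texttt{gamma} and \texttt{beta} are the usual layer-normalization gain and bias; shapes are assumed for illustration):

\begin{verbatim}
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-6):
    # normalize over the feature dimension, then scale and shift
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def residual_sublayer(x, sublayer, gamma, beta):
    # output of each sub-layer: LayerNorm(x + Sublayer(x))
    return layer_norm(x + sublayer(x), gamma, beta)
\end{verbatim}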
\paragraph{Decoder:}The decoder is also composed of a stack of $N=6$ identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$.
% In our model (Figure~\ref{fig:model-arch}), the encoder and decoder are composed of stacks of alternating self-attention layers (for cross-positional communication) and position-wise feed-forward layers (for in-place computation). In addition, the decoder stack contains encoder-decoder attention layers. Since attention is agnostic to the distances between words, our model requires a "positional encoding" to be added to the encoder and decoder input. The following sections describe all of these components in detail.

\subsection{Attention} \label{sec:attention}
An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.
\subsubsection{Scaled Dot-Product Attention} \label{sec:scaled-dot-prod}

% \begin{figure}
% \centering
% \includegraphics[scale=0.6]{Figures/ModalNet-19}
% \caption{Scaled Dot-Product Attention.}
% \label{fig:multi-head-att}
% \end{figure}
We call our particular attention "Scaled Dot-Product Attention" (Figure~\ref{fig:multi-head-att}). The input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. We compute the dot products of the query with all keys, divide each by $\sqrt{d_k}$, and apply a softmax function to obtain the weights on the values.
In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$. The keys and values are also packed together into matrices $K$ and $V$. We compute the matrix of outputs as:
\begin{equation}
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V
\end{equation}
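A minimal NumPy sketch of this computation (an illustration, not the reference implementation; the optional \texttt{mask} argument anticipates the decoder masking described later):

\begin{verbatim}
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # stabilized softmax
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    # Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    if mask is not None:
        # a large negative value approximates -inf for masked logits
        scores = np.where(mask, scores, -1e9)
    return softmax(scores) @ V           # weighted sum of the values
\end{verbatim}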
The two most commonly used attention functions are additive attention \citep{bahdanau2014neural} and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of $\frac{1}{\sqrt{d_k}}$. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.
%We scale the dot products by $1/\sqrt{d_k}$ to limit the magnitude of the dot products, which works well in practice. Otherwise, we found applying the softmax to often result in weights very close to 0 or 1, and hence minuscule gradients.

% Already described in the subsequent section
%When used as part of decoder self-attention, an optional mask function is applied just before the softmax to prevent positions from attending to subsequent positions. This mask simply sets the logits corresponding to all illegal connections (those outside of the lower triangle) to $-\infty$.

%\paragraph{Comparison to Additive Attention: } We choose dot product attention over additive attention \citep{bahdanau2014neural} since it can be computed using highly optimized matrix multiplication code. This optimization is particularly important to us, as we employ many attention layers in our model.
While for small values of $d_k$ the two mechanisms perform similarly, additive attention outperforms dot-product attention without scaling for larger values of $d_k$ \citep{DBLP:journals/corr/BritzGLL17}. We suspect that for large values of $d_k$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients\footnote{To illustrate why the dot products get large, assume that the components of $q$ and $k$ are independent random variables with mean $0$ and variance $1$. Then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_ik_i$, has mean $0$ and variance $d_k$.}. To counteract this effect, we scale the dot products by $\frac{1}{\sqrt{d_k}}$.
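The footnote's variance argument can be checked empirically; this short sketch (sample count chosen arbitrarily) prints empirical variances close to $d_k$:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
for d_k in (4, 64, 1024):
    q = rng.standard_normal((10000, d_k))
    k = rng.standard_normal((10000, d_k))
    dots = (q * k).sum(axis=1)       # 10000 sample dot products
    print(d_k, dots.var())           # empirical variance is roughly d_k
\end{verbatim}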
%We suspect this to be caused by the dot products growing too large in magnitude to result in useful gradients after applying the softmax function. To counteract this, we scale the dot product by $1/\sqrt{d_k}$.
\subsubsection{Multi-Head Attention} \label{sec:multihead}
\begin{figure}
\begin{minipage}[t]{0.5\textwidth}
\centering
Scaled Dot-Product Attention \\
\vspace{0.5cm}
\includegraphics[scale=0.6]{Figures/ModalNet-19}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\centering
Multi-Head Attention \\
\vspace{0.1cm}
\includegraphics[scale=0.6]{Figures/ModalNet-20}
\end{minipage}
\caption{(left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.}
\label{fig:multi-head-att}
\end{figure}
Instead of performing a single attention function with $\dmodel$-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values $h$ times with different, learned linear projections to $d_k$, $d_k$ and $d_v$ dimensions, respectively.
On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding $d_v$-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure~\ref{fig:multi-head-att}.

Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.
\begin{align*}
\mathrm{MultiHead}(Q, K, V) &= \mathrm{Concat}(\mathrm{head_1}, ..., \mathrm{head_h})W^O\\
\text{where}~\mathrm{head_i} &= \mathrm{Attention}(QW^Q_i, KW^K_i, VW^V_i)
\end{align*}
where the projections are parameter matrices $W^Q_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^K_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^V_i \in \mathbb{R}^{\dmodel \times d_v}$ and $W^O \in \mathbb{R}^{hd_v \times \dmodel}$.
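Continuing the NumPy sketch from above (shapes assumed for illustration; \texttt{scaled\_dot\_product\_attention} refers to the earlier sketch):

\begin{verbatim}
import numpy as np

def multi_head_attention(Q, K, V, W_q, W_k, W_v, W_o):
    # W_q, W_k, W_v: per-head projections, lists of h arrays of shape
    # (d_model, d_k), (d_model, d_k), (d_model, d_v); W_o: (h*d_v, d_model).
    heads = [scaled_dot_product_attention(Q @ Wq, K @ Wk, V @ Wv)
             for Wq, Wk, Wv in zip(W_q, W_k, W_v)]
    # concatenate the h outputs, then project once more
    return np.concatenate(heads, axis=-1) @ W_o
\end{verbatim}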
%find it better (and no more expensive) to have multiple parallel attention layers (each over the full set of positions) with proportionally lower-dimensional keys, values and queries. We call this "Multi-Head Attention" (Figure~\ref{fig:multi-head-att}). The keys, values, and queries for each of these parallel attention layers are computed by learned linear transformations of the inputs to the multi-head attention. We use different linear transformations across different parallel attention layers. The output of the parallel attention layers are concatenated, and then passed through a final learned linear transformation.
In this work we employ $h=8$ parallel attention layers, or heads. For each of these we use $d_k=d_v=\dmodel/h=64$.
Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.
\subsubsection{Applications of Attention in our Model}

The Transformer uses multi-head attention in three different ways:
\begin{itemize}
\item In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as \citep{wu2016google, bahdanau2014neural,JonasFaceNet2017}.
\item The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.
\item Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to $-\infty$) all values in the input of the softmax which correspond to illegal connections; a minimal sketch of this mask follows the list. See Figure~\ref{fig:multi-head-att}.
\end{itemize}
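A minimal sketch of this mask, to be passed to the \texttt{scaled\_dot\_product\_attention} sketch given earlier:

\begin{verbatim}
import numpy as np

def causal_mask(n):
    # True where attention is allowed: position i may attend to j <= i
    return np.tril(np.ones((n, n), dtype=bool))

# Usage with the earlier attention sketch: masked logits receive a large
# negative value, so the softmax gives illegal connections ~zero weight.
# out = scaled_dot_product_attention(Q, K, V, mask=causal_mask(len(Q)))
\end{verbatim}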
\subsection{Position-wise Feed-Forward Networks}\label{sec:ffn}
In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.
\begin{equation}
\mathrm{FFN}(x)=\max(0, xW_1 + b_1) W_2 + b_2
\end{equation}
While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is $\dmodel=512$, and the inner-layer has dimensionality $d_{ff}=2048$.
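A minimal NumPy sketch of this network (shapes assumed for illustration):

\begin{verbatim}
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    # x: (seq_len, d_model); W1: (d_model, d_ff); W2: (d_ff, d_model).
    # The same two transformations are applied at every position.
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2
\end{verbatim}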
%In the appendix, we describe how the position-wise feed-forward network can also be seen as a form of attention.
%from Jakob: The number of operations required for the model to relate signals from two arbitrary input or output positions grows in the distance between positions in input or output, linearly for ConvS2S and logarithmically for ByteNet, making it harder to learn dependencies between these positions \citep{hochreiter2001gradient}. In the transformer this is reduced to a constant number of operations, albeit at the cost of effective resolution caused by averaging attention-weighted positions, an effect we aim to counteract with multi-headed attention.
%Figure~\ref{fig:simple-att} presents a simple attention function, $A$, with a single head, that forms the basis of our multi-head attention. $A$ takes a query key vector $\kq$, matrices of memory keys $\km$ and memory values $\vm$, and produces a query value vector $\vq$ as
%\begin{equation*} \label{eq:attention}
% A(\kq, \km, \vm) = {\vm}^T \mathrm{softmax}(\km \kq).
%\end{equation*}
%We linearly transform $\kq,\,\km$, and $\vm$ with learned matrices ${\Wkq \text{,} \, \Wkm}$, and ${\Wvm}$ before calling the attention function, and transform the output query with $\Wvq$ before handing it to the feed-forward layer. Each attention layer has its own set of transformation matrices, which are shared across all query positions. $A$ is applied in parallel for each query position, and is implemented very efficiently as a batch of matrix multiplies. The self-attention and encoder-decoder attention layers use $A$, but with different arguments. For example, in encoder self-attention, queries in encoder layer $i$ attend to memories in encoder layer $i-1$. To ensure that decoder self-attention layers do not look at future words, we add $-\infty$ to the softmax logits in positions $l+1$ to query length for query position $l$.
%In simple attention, the query value is a weighted combination of the memory values where the attention weights sum to one. Although this function performs well in practice, the constraint on attention weights can restrict the amount of information that flows from memories to queries because the query cannot focus on multiple memory positions at once, which might be desirable when translating long sequences. \marginpar{@usz, could you think of an example of this ?} We remedy this by maintaining multiple attention heads at each query position that attend to all memory positions in parallel, with a different set of parameters per attention head $h$.
\subsection{Embeddings and Softmax}
Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension $\dmodel$. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to \citep{press2016using}. In the embedding layers, we multiply those weights by $\sqrt{\dmodel}$.
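A minimal NumPy sketch of this weight sharing (the vocabulary size is an arbitrary placeholder, and random weights stand in for trained ones):

\begin{verbatim}
import numpy as np

d_model, vocab_size = 512, 32000    # vocab_size is an arbitrary placeholder
E = np.random.default_rng(0).standard_normal((vocab_size, d_model))

def embed(token_ids):
    # embedding lookup, multiplied by sqrt(d_model)
    return E[token_ids] * np.sqrt(d_model)

def output_logits(decoder_output):
    # pre-softmax linear transformation shares the same weight matrix E
    return decoder_output @ E.T
\end{verbatim}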
\subsection{Positional Encoding}
Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $\dmodel$ as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed \citep{JonasFaceNet2017}.
In this work, we use sine and cosine functions of different frequencies:
\begin{align*}
PE_{(pos,2i)} &= \sin(pos / 10000^{2i/\dmodel}) \\
PE_{(pos,2i+1)} &= \cos(pos / 10000^{2i/\dmodel})
\end{align*}
where $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\pi$ to $10000 \cdot 2\pi$. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$.
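A minimal NumPy sketch of these encodings (assumes an even $\dmodel$, chosen for illustration):

\begin{verbatim}
import numpy as np

def positional_encoding(max_len, d_model):
    pos = np.arange(max_len)[:, None]         # positions 0..max_len-1
    i = np.arange(d_model // 2)[None, :]      # index of each dimension pair
    angles = pos / np.power(10000.0, 2.0 * i / d_model)
    pe = np.empty((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)              # even dimensions
    pe[:, 1::2] = np.cos(angles)              # odd dimensions
    return pe
\end{verbatim}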
We also experimented with using learned positional embeddings \citep{JonasFaceNet2017} instead, and found that the two versions produced nearly identical results (see Table~\ref{tab:variations} row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.
@@ -1,45 +0,0 @@

\pagebreak
\section*{Two Feed-Forward Layers = Attention over Parameters}\label{sec:parameter_attention}
In addition to attention layers, our model contains position-wise feed-forward networks (Section \ref{sec:ffn}), which consist of two linear transformations with a ReLU activation in between. In fact, these networks too can be seen as a form of attention. Compare the formula for such a network with the formula for a simple dot-product attention layer (biases and scaling factors omitted):
\begin{align*}
\mathrm{FFN}(x, W_1, W_2) &= \mathrm{ReLU}(xW_1)W_2 \\
A(q, K, V) &= \mathrm{softmax}(qK^T)V
\end{align*}
Based on the similarity of these formulae, the two-layer feed-forward network can be seen as a kind of attention, where the keys and values are the rows of the trainable parameter matrices $W_1$ and $W_2$, and where we use ReLU instead of Softmax in the compatibility function.
%the compatibility function is $compat(q, k_i) = ReLU(q \cdot k_i)$ instead of $\mathrm{softmax}(qK^T)_i$.
Given this similarity, we experimented with replacing the position-wise feed-forward networks with attention layers similar to the ones we use everywhere else in our model. The multi-head-attention-over-parameters sublayer is identical to the multi-head attention described in Section~\ref{sec:multihead}, except that the "keys" and "values" inputs to each attention head are trainable model parameters, as opposed to being linear projections of a previous layer. These parameters are scaled up by a factor of $\sqrt{\dmodel}$ in order to be more similar to activations.
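A minimal NumPy sketch of such a sublayer (shapes assumed; \texttt{scaled\_dot\_product\_attention} refers to the earlier Scaled Dot-Product Attention sketch):

\begin{verbatim}
import numpy as np

def attention_over_parameters(x, W_q, K_p, V_p, W_o, d_model=512):
    # K_p, V_p: trainable "key"/"value" parameters, lists of h arrays of
    # shape (n_p, d_pk) and (n_p, d_pv), scaled by sqrt(d_model) so their
    # magnitudes are more similar to activations.
    heads = [scaled_dot_product_attention(x @ Wq,
                                          np.sqrt(d_model) * Kp,
                                          np.sqrt(d_model) * Vp)
             for Wq, Kp, Vp in zip(W_q, K_p, V_p)]
    return np.concatenate(heads, axis=-1) @ W_o
\end{verbatim}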
In our first experiment, we replaced each position-wise feed-forward network with a multi-head-attention-over-parameters sublayer with $h_p=8$ heads, key-dimensionality $d_{pk}=64$, and value-dimensionality $d_{pv}=64$, using $n_p=1536$ key-value pairs for each attention head. The sublayer has a total of $2097152$ parameters, including the parameters in the query projection and the output projection. This matches the number of parameters in the position-wise feed-forward network that we replaced. While the theoretical amount of computation is also the same, in practice, the attention version caused the step times to be about 30\% longer.
In our second experiment, we used $h_p=16$ heads and $n_p=512$ key-value pairs for each attention head, again matching the total number of parameters in the base model.
Results for the first experiment were slightly worse than for the base model, and results for the second experiment were slightly better; see Table~\ref{tab:parameter_attention}.
\begin{table}[h]
\caption{Replacing the position-wise feed-forward networks with multihead-attention-over-parameters produces similar results to the base model. All metrics are on the English-to-German translation development set, newstest2013.}
\label{tab:parameter_attention}
\begin{center}
\vspace{-2mm}
%\scalebox{1.0}{
\begin{tabular}{c|cccccc|cccc}
\hline\rule{0pt}{2.0ex}
 & \multirow{2}{*}{$\dmodel$} & \multirow{2}{*}{$\dff$} &
\multirow{2}{*}{$h_p$} & \multirow{2}{*}{$d_{pk}$} & \multirow{2}{*}{$d_{pv}$} &
\multirow{2}{*}{$n_p$} &
PPL & BLEU & params & training\\
 & & & & & & & (dev) & (dev) & $\times10^6$ & time \\
\hline\rule{0pt}{2.0ex}
base & 512 & 2048 & & & & & 4.92 & 25.8 & 65 & 12 hours\\
\hline\rule{0pt}{2.0ex}
AOP$_1$ & 512 & & 8 & 64 & 64 & 1536 & 4.92 & 25.5 & 65 & 16 hours\\
AOP$_2$ & 512 & & 16 & 64 & 64 & 512 & \textbf{4.86} & \textbf{25.9} & 65 & 16 hours \\
\hline
\end{tabular}
%}
\end{center}
\end{table}