Compare commits

...

125 Commits

Author SHA1 Message Date
6543
1112fef93d Changelog v1.14.2 (#15794)
* changelog tool generate

* format & add

Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: zeripath <art27@cantab.net>
2021-05-09 11:26:49 +02:00
6543
af11549fb2 Ensure that ctx.Written is checked after issues(...) calls (#15797) (#15798)
Fix issue noted in #15783

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: zeripath <art27@cantab.net>
2021-05-09 09:48:52 +01:00
zeripath
76d6184cd0 Display conflict-free merge messages for pull requests (#15773) (#15796)
Backport #15773

Repositories using an external issue tracker tend to use numeric issue references in
commits. To prevent conflicts during issue reference parsing or inside
commit hooks, this change respects that configuration and uses the !
character to refer to pull requests in merge commit messages.

For repositories using squash merges, this was already handled.

Signed-off-by: JustusBunsi <61625851+justusbunsi@users.noreply.github.com>
Co-authored-by: zeripath <art27@cantab.net>

Co-authored-by: Steven <61625851+justusbunsi@users.noreply.github.com>
2021-05-09 10:32:48 +08:00
6543
d644709b22 Exponential Backoff for ByteFIFO (#15724) (#15793)
This PR is another in the vein of queue improvements. It suggests an
exponential backoff for bytefifo queues to reduce the load from queue
polling. This will mostly be useful for redis queues.
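As a side note on the technique, a minimal sketch of such a backoff polling loop might look like the following (illustrative only; `pop` and `handle` are hypothetical stand-ins, not the ByteFIFO API):

```go
package main

import (
	"fmt"
	"time"
)

// pop and handle are hypothetical stand-ins for a queue, not Gitea's ByteFIFO API.
func pop() (string, bool) { return "", false }
func handle(item string)  { fmt.Println("got", item) }

func main() {
	wait := 100 * time.Millisecond
	const maxWait = 16 * time.Second
	for {
		if item, ok := pop(); ok {
			handle(item)
			wait = 100 * time.Millisecond // reset the backoff once work arrives
			continue
		}
		time.Sleep(wait) // queue was empty: sleep, then poll again
		if wait < maxWait {
			wait *= 2 // back off exponentially to reduce polling load
		}
	}
}
```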

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: Lauris BH <lauris@nix.lv>

Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lauris BH <lauris@nix.lv>
2021-05-08 14:27:00 -04:00
6543
30584a6df8 [API] make change repo settings work on empty repos (#15778) (#15789)
* API: Fix #15602

* Add TEST
2021-05-08 15:14:42 +02:00
6543
78710946f2 Use pulls in commit graph unless pulls are disabled (#15734 & #15740 & #15774) (#15775)
* Commit Graph: Pull-Requests should not link to issues (#15734)

Use `/pulls` and simplify code.

* reverse #15734 partial and comment (#15740)

* reverse & comment

* Update templates/repo/graph/commits.tmpl

Co-authored-by: 6543 <6543@obermui.de>

Co-authored-by: zeripath <art27@cantab.net>

* Use pulls in commit graph unless pulls are disabled

Fix #15370

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: KN4CK3R <KN4CK3R@users.noreply.github.com>
Co-authored-by: zeripath <art27@cantab.net>
2021-05-07 15:12:24 -04:00
6543
22d700edfd Set GIT_DIR correctly if it is not set (#15751) (#15769)
* Set GIT_DIR correctly if it is not set

* Expand out templates

Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: zeripath <art27@cantab.net>
2021-05-07 20:01:25 +02:00
zeripath
6782a64a4a Defer closing the gitrepo until the end of the wrapped context functions (#15653) (#15746)
* Defer closing the gitrepo until the end of the wrapped context functions (#15653)

Backport #15653

There was a mistake in #15372 where deferral of gitrepo close occurs before it should.

This PR fixes this.
2021-05-07 18:28:02 +02:00
zeripath
1ec11ac87e Drop back to use IsAnInteractiveSession for SVC (#15749) (#15762)
Backport #15749

* Drop back to use IsAnInteractiveSession for SVC

There is an apparent permission change problem when using
IsWindowsService to determine if the SVC manager should be
used.

This PR simply drops back to using IsAnInteractiveSession as
this does not change behaviour.

Fix #15454

Signed-off-by: Andrew Thornton <art27@cantab.net>

* Yes staticcheck I know this is deprecated

Signed-off-by: Andrew Thornton <art27@cantab.net>

* Just leave me alone lint

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: 6543 <6543@obermui.de>
2021-05-07 17:44:35 +02:00
6543
2c2a30d6bb Fix bug where repositories appear unadopted (#15757) (#15767)
Fix bug where repositories with capital letters in their names appear unadopted.

Fix #15755

Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-05-07 17:07:39 +02:00
6543
717b313c34 not show ref-in-new-issue pop when issue was disabled (#15761) (#15765)
fix #15718

Signed-off-by: a1012112796 <1012112796@qq.com>
Co-authored-by: a1012112796 <1012112796@qq.com>
2021-05-07 16:13:20 +02:00
6543
0a32861b28 Issue list alignment tweaks (#15483) (#15766)
Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
2021-05-07 15:06:19 +02:00
zeripath
52ca7b9b65 Fix setting version table in dump (#15753) (#15759)
Backport #15753

* Fix setting version table in dump

As noted on Discord there is a problem with gitea dump where the version table
is not being dumped correctly.

This is due to a missing pointer in the TableInfo.

This PR fixes this.

Signed-off-by: Andrew Thornton <art27@cantab.net>

* Update models_test.go
2021-05-07 14:04:17 +02:00
zeripath
e078d08ecd Fix close button change on delete in simplemde area (#15737) (#15747)
Backport #15737

* Fix close button change on delete in simplemde area

Fix issue with close button changing when deleting in the simplemde textarea.

Signed-off-by: Andrew Thornton <art27@cantab.net>

* apply suggestion

Co-authored-by: 6543 <6543@obermui.de>

Co-authored-by: 6543 <6543@obermui.de>
2021-05-06 23:14:15 +01:00
a1012112796
a83fb3a83a fix some ui bug about draft release (#15137) (#15745)
* fix some ui bug about draft release

- should not show a draft release in the tag list because
  it won't create a real tag
- still show a draft release (without tag and commit message)
  instead of a 404 error
- remove tag load for attachment links because it's useless

Signed-off-by: a1012112796 <1012112796@qq.com>

* add test code

* fix test

That's because a new release was added to the release test database.

* fix dropdown link for draft release
2021-05-06 21:23:26 +02:00
Tomás Warynyca
f9b1fac4ea Fix webkit calendar icon color on arc-green (#15728) 2021-05-05 13:10:01 +08:00
6543
f1e8b8c0d7 Only log Error on getLastCommitStatus error to let pull list still be visible (#15715) 2021-05-04 14:03:31 +02:00
Kyle D
dbbb75712d Move tooltip down to allow selection of Remove File on error (#15672) (#15714) 2021-05-04 07:00:29 +01:00
zeripath
462c6fdee2 Fix setting redis db path (#15698) (#15708)
Backport #15698

There is a bug setting the redis db in the common nosql manager whereby the db path
always fails.

This PR fixes this.

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-05-03 22:30:30 +01:00
Kyle D
cead819cb5 Implement delete release attachments and update release attachments' name (#14130) (#15666)
* Implement delete release attachment

* Add attachments on release edit page

* Fix bug

* Finish del release attachments

* Fix frontend lint

* Fix tests

* Support edit release attachments

* Added tests

* Remove the unnecessary parameter isCreate from UpdateReleaseOrCreatReleaseFromTag

* Rename UpdateReleaseOrCreatReleaseFromTag to UpdateRelease

* Fix middle align

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-05-03 13:27:00 -04:00
zeripath
4fa2804238 Performance improvement for last commit cache and show-ref (#15455) (#15701)
Backport #15455

* Improve performance when there are multiple commits in the last commit cache

* read refs directly if we can

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-05-03 16:51:41 +02:00
zeripath
3ce46a7fbd Fix DB session cleanup (#15697) (#15700)
Backport #15697

The DB session clean up needs to check expiry not created_unix.

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-05-02 10:43:01 +01:00
6543
15886ce048 Fixed several activation bugs (#15473) (#15685)
* Removed unneeded form tag.

* Fixed typo.

* Fixed NPE.

* Use better error page.

* Splitted GET and POST.

Co-authored-by: KN4CK3R <KN4CK3R@users.noreply.github.com>
2021-04-30 20:14:36 -04:00
6543
a725d31496 Delete references if repository gets deleted (#15681) (#15684)
* Remove DeletedBranch and LFSLocks.

* Sort beans.

Co-authored-by: KN4CK3R <KN4CK3R@users.noreply.github.com>
Co-authored-by: zeripath <art27@cantab.net>
2021-05-01 00:09:58 +02:00
6543
8e27f6e814 Fix orphaned objects deletion bug (#15657) (#15683)
* Fix orphaned objects deletion bug

* extend test

Co-authored-by: 6543 <6543@obermui.de>

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
2021-04-30 22:27:26 +01:00
KN4CK3R
54263ff123 Delete protected branch if repository gets removed (#15658) (#15676)
* Added missing error parameters.

* Delete protected branch if repository gets removed.

* Added doctor fix.
2021-04-30 19:59:42 +01:00
6543
3bde297121 [API] pull notification subject status: add "merged" (#15344) (#15654)
Current subject status can be "", "open" and "closed". This adds "merged" to it.
2021-04-28 20:24:56 +01:00
zeripath
0dfde367c1 Remove spurious set name from eventsource.sharedworker.js (#15643) (#15652)
Backport #15643

Fix #15617

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-04-28 19:50:56 +02:00
zeripath
875501584b not update updated uinx for git gc (#15637) (#15641)
Backport #15637

fix #15634

Signed-off-by: a1012112796 <1012112796@qq.com>

Co-authored-by: a1012112796 <1012112796@qq.com>
2021-04-28 03:20:47 +03:00
zeripath
4190c134e6 Fix commit graph author link (#15627) (#15630)
Backport #15627

The author link on the commit graph is incorrect and isn't providing a link to the author.

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-04-26 20:25:51 +01:00
Lunny Xiao
cae46216e4 fix webhook timeout bug (#15613) (#15621)
* Also fix the potential problem in httplib
2021-04-26 14:42:12 +02:00
techknowlogick
761111f9ed Resolve panic on failed interface conversion in migration v156 (#15604) (#15610)
go panics otherwise with `panic: interface conversion: error is git.ErrNotExist, not *git.ErrNotExist`, thanks to Codeberg/Andi for reporting this.

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-04-25 11:58:42 -04:00
Nathan Smith
57f1476093 Bump unrolled/render to v1.1.0 (#15581) (#15608)
v1.1.0 has improved buffer pooling
2021-04-25 14:01:52 +08:00
Lunny Xiao
bdba89452d Fix missing storage init (#15589) (#15598) 2021-04-23 20:56:21 +08:00
zeripath
6e2dacfef6 If the default branch is not present do not report error on stats indexing (follow-up of #15546) (#15583) (#15594)
Backport #15546
Backport #15583

 #15546 doesn't completely fix this problem because the error returned is an ObjectNotExist
error not a BranchNotExist error.

Add test for ErrObjectNotExist too

Fix #15257

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-04-22 22:30:18 +02:00
Lunny Xiao
c0869c295a Fix lfs management find (#15537) (#15578)
* Fix lfs management find (#15537)

Fix #15236

* Do not do 40byte conversion within ParseTreeLine
* Missed a to40ByteSHA

Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Andrew Thornton <art27@cantab.net>

* Remove space

Co-authored-by: Andrew Thornton <art27@cantab.net>
2021-04-22 20:32:48 +02:00
zeripath
a719311f6d Add placeholder text to deploy key textarea (#15575) (#15576)
Backport #15575

Add placeholder text to deploy key textarea

Related #15574

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-04-21 23:59:50 +02:00
zeripath
248b67af6f Fix NPE on view commit with notes (#15561) (#15573)
Backport #15561

Fix #15558

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-04-21 15:11:43 -04:00
silverwind
990c6089db Project board improvements (#15429) (#15560)
* Project board improvements

- Fix link colors
- Extract CSS to own file
- Various minor tweaks to make it look better

Fixes: https://github.com/go-gitea/gitea/issues/15424
Fixes: https://github.com/go-gitea/gitea/issues/15506
Fixes: https://github.com/go-gitea/gitea/pull/15511

* fix squashed cards on small view area

* more css fixes, add second row from issue list

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2021-04-20 20:45:00 +01:00
KN4CK3R
5da024a019 Add ETag header (#15370) (#15552)
* Add ETag header.

* Comply with RFC 7232.

* Moved logic into httpcache.go

* Changed name.

* Lint

* Implemented If-None-Match list.

* Fixed missing header on *

* Removed weak etag support.

* Removed * support.

* Added unit test.

* Lint
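
For context, the general If-None-Match flow from RFC 7232 looks roughly like this (a generic sketch, not the httpcache.go implementation; the handler and ETag derivation are illustrative):

```go
package main

import (
	"crypto/sha1"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	body := []byte("hello")
	etag := fmt.Sprintf(`"%x"`, sha1.Sum(body)) // strong ETag derived from the content

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Etag", etag)
		// If-None-Match may carry a comma-separated list of tags.
		for _, v := range strings.Split(r.Header.Get("If-None-Match"), ",") {
			if strings.TrimSpace(v) == etag {
				w.WriteHeader(http.StatusNotModified)
				return
			}
		}
		w.Write(body)
	})
	http.ListenAndServe(":8080", nil)
}
```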

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2021-04-20 12:01:58 -04:00
Lunny Xiao
eff2499be7 Fix bug on commit graph (#15517) (#15530) 2021-04-17 14:46:30 +02:00
zeripath
4a3c6384ac Send size to /avatars if requested (#15459) (#15528)
Backport #15459

If an avatar is requested in a particular size ensure that /avatars also gets the size request

Fix #15453

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: 6543 <6543@obermui.de>
2021-04-17 12:30:58 +01:00
zeripath
2b1989e59f Prevent migration 156 failure if tag commit missing (#15519) (#15527)
Backport #15519

It is possible that tag commits could be deleted or missing from repos. This causes
migration 156 to fail and breaks upgrade.

This PR simply logs the failure.

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-04-17 12:13:15 +02:00
Mike L
340c4fc7c7 Repo branch page: label size, PR ref, new PR button alignment (#15363) (#15365) 2021-04-16 07:53:51 +02:00
6543
918d3d96ff Changelog v1.14.1 (#15498)
* RAW Changelog v1.14.1

* wordings

* Apply suggestions from code review

Co-authored-by: techknowlogick <matti@mdranta.net>

* Update CHANGELOG.md

Co-authored-by: 6543 <6543@obermui.de>

* Update CHANGELOG.md

Co-authored-by: 6543 <6543@obermui.de>

Co-authored-by: techknowlogick <matti@mdranta.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2021-04-15 22:19:09 -04:00
6543
92c91d7d8b Performance improvement for list pull requests (#15447) (#15500)
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-04-16 01:14:14 +03:00
zeripath
9dc76b2036 Fix bug clone wiki (#15499) (#15502)
Backport #15499

Fix #15494

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Lauris BH <lauris@nix.lv>
2021-04-15 21:40:10 +02:00
zeripath
802a4314ef dump: Add option to skip LFS/attachment files (#15407) (#15492)
Backport #15407

* Add option to skip dumping LFS/attachment files

* Fix fmt issues

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>

Co-authored-by: Johan Van de Wauw <johan@gisky.be>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Lauris BH <lauris@nix.lv>
2021-04-15 18:41:47 +03:00
zeripath
edd4ab49c8 Ensure review dismissal only dismisses the correct review (#15477) (#15489)
Backport #15477

Fix #15472

Signed-off-by: Andrew Thornton art27@cantab.net

Co-authored-by: 6543 <6543@obermui.de>
2021-04-15 18:24:59 +03:00
zeripath
55e6cde7c1 Use subdir for URL (#15446) (#15493)
Backport #15446

Fixes #15444

Co-authored-by: KN4CK3R <KN4CK3R@users.noreply.github.com>

Co-authored-by: KN4CK3R <KN4CK3R@users.noreply.github.com>
2021-04-15 18:24:30 +03:00
6543
729fa06468 migration: github: if rate limit is not enabled, ignore it (#15490) (#15495)
Co-authored-by: Lauris BH <lauris@nix.lv>
2021-04-15 18:24:01 +03:00
zeripath
b228a0aa44 Use index of the supported tags to choose user lang (#15452) (#15488)
Backport #15452

Fix #14793.

The previous implementation used the first return value of matcher.Match, which is the chosen language tag but may contain extensions such as de-DE-u-rg-chzzzz.

As mentioned in the documentation of language package, matcher.Match also returns the index of the supported tags, so I think it is better to use it rather than manipulate the returned language tag.
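
A minimal sketch of that approach with golang.org/x/text/language (the supported list here is illustrative, not Gitea's locale list):

```go
package main

import (
	"fmt"

	"golang.org/x/text/language"
)

func main() {
	supported := []language.Tag{
		language.English, // first entry doubles as the fallback
		language.German,
		language.Japanese,
	}
	matcher := language.NewMatcher(supported)

	// The first return value may carry extensions (e.g. de-DE-u-rg-chzzzz);
	// the index points back into the supported list, so use that instead.
	_, index, _ := matcher.Match(language.MustParse("de-DE"))
	fmt.Println(supported[index]) // de
}
```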

Co-authored-by: Naohisa Murakami <tiqwab.ch90@gmail.com>
2021-04-15 16:47:43 +02:00
Lunny Xiao
9e7e11224f Fix potential copy lfs records failure when fork a repository (#15441) (#15485) 2021-04-15 16:13:14 +02:00
zeripath
85880b2a0b Query the DB for the hash before inserting in to email_hash (#15457) (#15491)
Backport #15457

Some postgres users have logging which logs even failed transactions. So
just query the db before trying to insert.
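
A minimal sketch of that query-before-insert pattern with database/sql (table and column names are illustrative, not necessarily Gitea's schema):

```go
package main

import "database/sql"

// ensureEmailHash only inserts when the row is absent, so no failed
// INSERT transaction ends up in strict postgres logs.
func ensureEmailHash(db *sql.DB, hash, email string) error {
	var exists bool
	if err := db.QueryRow(
		`SELECT EXISTS(SELECT 1 FROM email_hash WHERE hash = $1)`, hash,
	).Scan(&exists); err != nil {
		return err
	}
	if exists {
		return nil // nothing to do
	}
	_, err := db.Exec(`INSERT INTO email_hash (hash, email) VALUES ($1, $2)`, hash, email)
	return err
}
```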

Fix #15451

Signed-off-by: Andrew Thornton art27@cantab.net
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-04-15 09:29:13 -04:00
zeripath
211bb911e3 Build go-git variants for windows (#15482) (#15487)
Backport #15482

It appears that there are significant performance problems with the pure git backend
on windows.

Therefore until we can sort this out - provide go-git backend builds.

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-04-15 13:21:27 +01:00
silverwind
0554d1dd01 Lock down build-images dependencies (#15480)
Partial extraction from #15479 for 1.14. Locks down build-images
dependencies and adds missing node_modules dependency.
2021-04-15 12:02:57 +01:00
zeripath
00e55dd223 Prevent superfluous response.WriteHeader (#15456) (#15476)
Backport #15456

This PR simply checks the status before writing the header.

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-04-15 11:02:42 +01:00
a1012112796
b28c3245cc fix wrong file link in code search page (#15466) (#15486)
Previously the generated link was
``testg/testrepo/src/commit/....``,
which is not right.

The right version is ``/testg/testrepo/.......``
(starting with ``/``)
or ``http://127.0.0.1:3000/xxxxx`` (a full link).

To get the same result as the explore page,
I chose the second style.

fix #15438

Signed-off-by: a1012112796 <1012112796@qq.com>

Co-authored-by: 6543 <6543@obermui.de>

Co-authored-by: 6543 <6543@obermui.de>
2021-04-15 12:04:25 +03:00
silverwind
ddfb729168 Clone panel fixes (#15436)
- Use <button> over <div> for a button
- Fix absent border-right on wiki
- Fix absent border-radius on wiki
2021-04-14 22:16:33 +01:00
John Olheiser
6ef62e3f8e quick fix (#15464) (#15481)
Signed-off-by: jolheiser <john.olheiser@gmail.com>
2021-04-14 20:42:30 +01:00
zeripath
2c4f1ed13e Fix ambiguous argument error on tags (#15432) (#15474)
Backport #15432

There is a weird gotcha with GetTagCommitID: because it uses git rev-list,
it can cause an ambiguous argument error.

This PR simply makes tags use the same code as branches.

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-04-14 14:53:01 -04:00
techknowlogick
fa3fe1e28a v172 migration adds created_unix field instead of expiry (#15458) (#15463)
The Session table must have an Expiry field, not a created_unix field - somehow
this migration adds the incorrectly named field, leading to the reports in #15445.

Fix #15445

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: zeripath <art27@cantab.net>
2021-04-14 08:03:42 +02:00
techknowlogick
62f5cf4386 Fix repository search (#15428) (#15442)
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>

Co-authored-by: KN4CK3R <KN4CK3R@users.noreply.github.com>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-04-13 12:30:28 +08:00
techknowlogick
779d1185e7 Prevent NPE on avatar direct rendering if federated avatars disabled (#15434) (#15439)
#13649 assumed that direct avatar urls would always be libravatar urls - this leads
to NPEs if the federated avatar service is disabled.

Fix #15421

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: zeripath <art27@cantab.net>
2021-04-12 22:50:07 -04:00
silverwind
f3d0c76afc Fix wiki clone urls (#15430) (#15431)
Fix wiki clone urls

Regressed by: 9a4050f1e8
Fixes: https://github.com/go-gitea/gitea/issues/15420
2021-04-12 23:59:56 +02:00
Tomás Warynyca
5a4729d5e2 fix dingtalk icon url (#15426)
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2021-04-12 11:10:49 -04:00
zeripath
88a7349375 Standardise icon on projects PR page (#15387) (#15408)
Backport #15387

Fix #15272

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-04-12 10:05:20 +02:00
6543
c3398906a1 use repo1_bare to test against (#15402) (#15404) 2021-04-11 19:48:35 +02:00
Mike L
330fa75945 Use semantic dropdown for code search query type (#15276) (#15364)
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: 6543 <6543@obermui.de>
2021-04-11 11:50:03 -04:00
6543
55e159ca5f Changelog v1.14.0 (#15360)
* clean & merge & update v1.14.0 changelog

* backport v1.13.x changelogs
2021-04-11 06:07:02 +02:00
Lunny Xiao
87074ec860 Fix delete nonexist oauth application 500 and prevent deadlock (#15384) (#15396)
* Fix delete nonexist oauth application 500

* Fix test

* Close the session

* Fix more missed sess.Close

* Remove unnecessary blank line

Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
2021-04-11 04:57:44 +02:00
zeripath
1fe5fe419e Always set the merge base used to merge the commit (#15352) (#15385)
Backport #15352

The issue is that the TestPatch will reset the PR MergeBase - and it is possible for TestPatch to update the MergeBase whilst a merge is ongoing. The ensuing merge will then complete but it doesn't re-set the MergeBase it used to merge the PR.

Fixes the intermittent error in git test.

Signed-off-by: Andrew Thornton art27@cantab.net
2021-04-10 14:08:30 +02:00
zeripath
67a12b8fac Turn RepoRef and RepoAssignment back into func(*Context) (#15372) (#15377)
Backport #15372

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-04-09 22:24:40 -04:00
silverwind
e861dcbbaf Dropzone styling improvements (#15291) (#15374)
* Dropzone styling improvements

- Move all dropzone styles to separate file
- Fix white background in arc-green
- Fix rendering of non-square images and previews

* increase thumbnail quality, set contain in js, replace blur effect with opacity

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2021-04-09 19:43:36 -04:00
zeripath
53c2136a9a Upgrade to bluemonday 1.0.7 (#15379) (#15380)
* Upgrade to bluemonday 1.0.7 (#15379)

Backport #15379

Fix #15349

Signed-off-by: Andrew Thornton <art27@cantab.net>

* resolve CI

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2021-04-09 19:41:30 -04:00
zeripath
24ebc7e517 Move FCGI req.URL.Path fix-up to the FCGI listener (#15292) (#15361)
Backport #15292

Simplify the web.go FCGI path by moving the req.URL.Path fix-up to listener

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-04-09 17:45:02 +01:00
6543
b072907987 Fix admin user list (#15358) (#15359)
* Fix `admin user list` (#15358)

* fix routers/api/v1/repo/issue.go
2021-04-09 12:39:40 +02:00
silverwind
942b0360ad Fix button border issue (#15351) 2021-04-09 05:38:06 +02:00
silverwind
1ec4913add Disable cssnano's colormin plugin (#15348)
It produces odd rgba values which also seem to cause issues in monaco's
color parser where the scroll shadow went red for some reason.

Regression by: https://github.com/go-gitea/gitea/pull/15333
2021-04-09 03:54:24 +02:00
zeripath
16e34025b4 Show diff on rename with diff changes (#15338) (#15339)
Backport #15338

More recent versions of git have increased support for detection of renames meaning
that a rename with diff changes is now supported.

Although ParsePatch supports this - our templates do not and the simplest solution
is simply to show the diff.

Fix #15335

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: 6543 <6543@obermui.de>
2021-04-08 15:36:17 -04:00
zeripath
456d63b6cf Prepend AppSubUrl to links for default avatar (#15341) (#15342)
Backport #15341

Fix #15334

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-04-08 17:51:10 +01:00
zeripath
798ac3f85a Fix handling of logout event (#15323) (#15337)
Backport #15323

It appears that there is a slight bug in the handling of the logout event's data -
the JavaScript should be testing the data field of the event's data field for the logout
instruction.

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-04-08 17:28:30 +02:00
silverwind
460093b952 Monaco improvements (#15333) (#15345)
- Create theme at runtime which follows the CSS variables of the site
- Disable a few opinionated Monaco defaults like minimap and word highlights
- Move styles to separate file
2021-04-08 13:24:23 +02:00
6543
38d184d518 Fix CanCreateRepo check (#15311) (#15321)
Signed-off-by: jolheiser <john.olheiser@gmail.com>
Co-authored-by: John Olheiser <john.olheiser@gmail.com>
2021-04-07 22:14:11 +02:00
6543
80b55263d8 Fix xorm log stack level (#15285) (#15316)
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-04-07 08:36:15 +01:00
6543
32232db55f Reduce memory usage in testgit (#15306) (#15310)
* reduce memory use in rawtest

* just use hashsum for diffs

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2021-04-07 11:07:39 +08:00
KN4CK3R
cf9b6c281f Close file on invalid range (Addition to #15166) (#15268) (#15308)
* Close file on invalid range.

* Close on seek error

Signed-off-by: Andrew Thornton <art27@cantab.net>

* Moved 'Seek' into server.

* io.ReadSeekCloser is only available in Go 1.16

Co-authored-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>

Co-authored-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2021-04-06 15:25:31 -04:00
6543
a8c6a4a70e Fix bug in Wrap (#15302) (#15309)
Whilst doing other work I have noticed that there is an issue with Wrap when passing an
http.Handler - the next should be the next handler in line not empty.
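
A generic sketch of the underlying idea - each wrapper must be handed the next handler in the chain rather than an empty one (the `chain` helper below is hypothetical, not Gitea's Wrap):

```go
package main

import (
	"fmt"
	"net/http"
)

// chain wires middlewares so each one receives the *next* handler in line.
func chain(final http.Handler, middlewares ...func(http.Handler) http.Handler) http.Handler {
	h := final
	for i := len(middlewares) - 1; i >= 0; i-- {
		h = middlewares[i](h) // pass the handler built so far as "next"
	}
	return h
}

func main() {
	final := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello")
	})
	logger := func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Println(r.Method, r.URL.Path)
			next.ServeHTTP(w, r) // fall through to the next handler, not to nothing
		})
	}
	http.ListenAndServe(":8080", chain(final, logger))
}
```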

Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: zeripath <art27@cantab.net>
2021-04-06 18:44:24 +02:00
6543
e6050e80f7 Update to bluemonday-1.0.6 (#15294) (#15297)
Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: zeripath <art27@cantab.net>
2021-04-06 01:36:58 +01:00
zeripath
3803b15d76 Drop the event source if we are unauthorized (#15275) (#15280)
Backport #15275

A previous commit that sent unauthorized if the user is unauthorized
simply leads to the repeated reopening of the eventsource.

This PR changes the event returned to tell the client to close the
eventsource and thus prevents the repeated reopening.

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-04-04 20:39:22 -04:00
zeripath
af73e1ee35 Add size to Save function (#15264) (#15270)
This PR proposes an alternative solution to #15255 - just add the size to the
save function. Yes it is less apparently clean but it may be more correct.

Close #15255
Fix #15253

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-04-04 12:04:55 -04:00
silverwind
4bc8dfc6a3 Branch page and misc css improvements (#15208) (#15274)
- Improve branches page, increase icon size, use octicons, use css vars
- Style placeholder color via css var
- Slightly increase contrast of input fields and active/hover states
- Add styling for select boxes in arc-green
2021-04-04 16:31:54 +03:00
techknowlogick
33c4e246fe update golang libraries (#15258) (#15259) 2021-04-03 10:42:18 +02:00
KN4CK3R
8ec7beb9f4 Fix graph pagination (#15225) (#15249)
* Fixed invalid HTML tag.

* Fixed pagination.

* Update templates/repo/graph/commits.tmpl

Co-authored-by: zeripath <art27@cantab.net>
2021-04-02 04:29:14 +01:00
a1012112796
c6eb9b30ae response 404 for diff/patch of a commit that not exist (#15221) (#15237)
* response 404 for diff/patch of a commit that not exist

fix #15217

Signed-off-by: a1012112796 <1012112796@qq.com>

* Update routers/repo/commit.go

Co-authored-by: silverwind <me@silverwind.io>

* use ctx.NotFound()

Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: silverwind <me@silverwind.io>

Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: 6543 <6543@obermui.de>
2021-04-01 19:57:05 -04:00
zeripath
f75a9b27b0 Speed up enry.IsVendor (#15213) (#15245)
Backport #15213

`enry.IsVendor` is kinda slow as it simply iterates across all regexps.
This PR adjusts the regexps to combine them to make this process a
little quicker.
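
A minimal sketch of the combining idea (the patterns below are illustrative, not enry's actual vendor list):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	patterns := []string{`^vendor/`, `^node_modules/`, `\.min\.js$`}

	// Instead of looping over many small regexps, join them into one
	// alternation so each path is scanned only once.
	combined := regexp.MustCompile("(" + strings.Join(patterns, ")|(") + ")")

	fmt.Println(combined.MatchString("node_modules/foo/index.js")) // true
	fmt.Println(combined.MatchString("pkg/models/user.go"))        // false
}
```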

Related #15143

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-04-02 01:16:00 +02:00
zeripath
2705696d4d Prevent NPE in CommentMustAsDiff if no hunk header (#15199) (#15200)
Backport #15199

I do not understand how this can happen or why.

There is an apparent possibility for a comment.Patch to be missing a hunk header
- this should not happen and I do not understand how. But it appears to happen on
1.13 at least in some cases.

This PR will simply add a new section if the cursection is empty
thus preventing the NPE.

Fix #15198

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: 6543 <6543@obermui.de>
2021-04-01 15:14:56 -04:00
mayswind
2b68f66e0e Fix timezone bug when clicking heatmap (#15141) (#15231) 2021-04-01 18:22:54 +08:00
silverwind
5c7d30cf52 Diff box fixes (#15214) (#15227)
- Fix misaligned "Show Outdated" buttons via flexbox
- Add hover effect to "Show Outdated" buttons
- Remove overreaching margin from selector .diff-file-box and handle
  cases individually.

Fixes: https://github.com/go-gitea/gitea/issues/15097

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2021-04-01 08:04:47 +03:00
zeripath
e520dff4da Improve /api/v1/repos/issues/search by just getting repo ids (#15179) (#15192)
Backport #15179

/api/v1/repos/issues/search is a highly inefficient search which is unfortunately
the basis for our dependency searching algorithm. In particular it currently loads
all of the repositories and their owners and their primary coding language, all of
which is immediately thrown away.

This PR makes one simple change - just get the IDs.
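
A minimal sketch of the ids-only query idea with database/sql (query and schema are illustrative placeholders, not Gitea's actual search query):

```go
package main

import "database/sql"

// repoIDs fetches just the repository IDs instead of hydrating full rows.
func repoIDs(db *sql.DB) ([]int64, error) {
	rows, err := db.Query(`SELECT id FROM repository WHERE is_private = false`)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var ids []int64
	for rows.Next() {
		var id int64
		if err := rows.Scan(&id); err != nil {
			return nil, err
		}
		ids = append(ids, id)
	}
	return ids, rows.Err()
}
```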

Related #14560
Related #12827

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-04-01 01:15:08 +02:00
zeripath
2bc759518e Fix regression from #14623 - use debug SVC handler only on interactive sessions (#15210) (#15211)
Backport #15210

Unfortunately #14623 changed from the deprecated IsInteractiveSession to
IsWindowsService without recognising that they are the complement of
each other.

This means that Windows SVC control is not working correctly. This PR
adds some Tracing statements but also fixes the bug.

Fix #15159

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-03-31 20:49:46 +01:00
a1012112796
92b2883058 add 'fonts' into 'KnownPublicEntries' (#15188) (#15218)
fix #15184

Signed-off-by: a1012112796 <1012112796@qq.com>

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-03-31 04:56:19 +02:00
silverwind
0ebfc1405c Fix webhook delivery and issue checklist for arc-green (#15195) (#15204)
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-03-30 19:15:12 +08:00
silverwind
fd5c67226e Fix margin between avatars on org pages (#15194) (#15197)
Fixes: https://github.com/go-gitea/gitea/issues/15191
2021-03-29 23:36:00 +02:00
a1012112796
61308825a6 should run RetrieveRepoMetas() for empty pr (#15187) (#15190)
Signed-off-by: a1012112796 <1012112796@qq.com>
2021-03-29 17:02:01 +01:00
Norwin
0cccad04f0 fix org navbar (#15174)
Co-authored-by: Jimmy Praet <jimmy.praet@telenet.be>
2021-03-27 15:57:02 +01:00
zeripath
a0e5c49ac3 Clusterfuzz found another way (#15160) (#15168)
Backport #15160

Clusterfuzz found another way so I found another way to stop it

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-03-26 22:48:38 -04:00
sotho
3558310c1f Fix wrong user returned in API (#15139) (#15151)
The API call: GET /repos/{owner}/{repo}/pulls/{index}/reviews/{id}/comments
always returns the reviewer, but should return the poster.

Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: zeripath <art27@cantab.net>
2021-03-26 04:20:52 +01:00
zeripath
e99534cfd2 Fix Migration 176 yet again (#15132)
Backport #15131

Whilst creating a test for v176 in the migrations_test PR
it has become clear that this was still wrong.

This is now fixed. Genuinely.

Also fix repo transfer

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-03-23 23:18:05 +00:00
6543
27acf6165e update changelog for rc2 release (#15130) 2021-03-23 15:52:43 -04:00
zeripath
f286a28568 Fix consistency check (#15120) (#15128)
In my last fix I missed adding the label_ prefix to the
consistency check count.

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-03-23 20:20:34 +01:00
6543
b5c4cb1bde Fix bug on avatar middleware (#15124) (#15126)
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-03-23 18:44:37 +00:00
6543
26b98417ad [Vendor] update gitea-sdk v0.14.0 (#15103) (#15107)
* upgraded code.gitea.io/sdk/gitea v0.13.2 => v0.14.0

* rm workaround
2021-03-23 10:10:32 +00:00
zeripath
8b0cf88c0c Changelog for v1.14-rc2 (#15115)
Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-03-22 22:00:51 +01:00
zeripath
23db3375df Fix another clusterfuzz identified issue (#15096) (#15113)
Backport #15096

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: 6543 <6543@obermui.de>
2021-03-22 15:16:08 -04:00
zeripath
14011d77c9 Fix the v176 migration (#15110) (#15111)
Backport #15110

There is a serious issue with the v176 migration where a label_id selection is
mistakenly missing.

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-03-22 14:47:58 -04:00
silverwind
5519e26c2f Fix lock modal content rendering outside modal (#15095) (#15100)
* Fix lock modal content rendering outside modal

The .content was not a child of .modal so it was rendering outside. This is
a recent regression but I'm not certain when it was introduced.

* remove extraneous closing div

Co-authored-by: zeripath <art27@cantab.net>
2021-03-22 02:00:42 +01:00
zeripath
6feb435867 Place wrapper around comment as diff to catch panics (#15085) (#15094)
Backport #15085

There are a few recurrent issues with comment as diff reporting panics that are resistant to fixing due to the fact that the panic occurs in the template render and is swallowed by the template renderer.

This PR just adds some logging to force the panic to be properly logged and re-propagates it back up to the template renderer so we can actually detect what the issue is.

Signed-off-by: Andrew Thornton art27@cantab.net
2021-03-21 23:41:40 +01:00
silverwind
61444ed8ca Fix markdown rendering in milestone content (#15056) (#15091)
- Add missing markdown class for rendered markdown.
- Increase font size of milestone name in list.

Fixes: https://github.com/go-gitea/gitea/issues/15046
2021-03-21 18:57:06 +01:00
6543
d770cc9886 Remove possible resource leak (#15067) (#15082)
* move "copy uploaded lfs files 2 repo" to own function for "defer file.Close()"

* rm type overload

Co-authored-by: zeripath <art27@cantab.net>
2021-03-21 17:07:37 +01:00
a1012112796
fbaa01998a fix double 'push tag' action feed (#15078) (#15083)
Signed-off-by: a1012112796 <1012112796@qq.com>
2021-03-21 14:51:31 +00:00
Lauris BH
ac2ae66ae7 Handle unauthorized user events gracefully (#15071) (#15074) 2021-03-21 10:21:28 +00:00
Lauris BH
ed60fe0986 Update release date (#15065)
* Update release date

* Remove unneeded entry
2021-03-20 21:29:01 +08:00
zeripath
29e0d62790 Update to goldmark 1.3.3 (#15059) (#15060)
Backport #15059

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-03-20 12:24:09 +01:00
6543
7b464fa67b Fix bug when upload on web (#15042) (#15054)
* Fix bug when upload on web

* move into own function

Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: zeripath <art27@cantab.net>

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
2021-03-20 09:37:57 +08:00
343 changed files with 8787 additions and 4996 deletions

View File

@@ -110,3 +110,7 @@ issues:
- text: "exitAfterDefer:"
linters:
- gocritic
- path: modules/graceful/manager_windows.go
linters:
- staticcheck
text: "svc.IsAnInteractiveSession is deprecated: Use IsWindowsService instead."

View File

@@ -4,14 +4,96 @@ This changelog goes through all the changes that have been made in each release
without substantial changes to our git log; to see the highlights of what has
been added to each release, please refer to the [blog](https://blog.gitea.io).
## [1.14.2](https://github.com/go-gitea/gitea/releases/tag/v1.14.2) - 2021-05-08
* API
* Make change repo settings work on empty repos (#15778) (#15789)
* Add pull "merged" notification subject status to API (#15344) (#15654)
* BUGFIXES
* Ensure that ctx.Written is checked after issues(...) calls (#15797) (#15798)
* Use pulls in commit graph unless pulls are disabled (#15734 & #15740 & #15774) (#15775)
* Set GIT_DIR correctly if it is not set (#15751) (#15769)
* Fix bug where repositories appear unadopted (#15757) (#15767)
* Not show `ref-in-new-issue` pop when issue was disabled (#15761) (#15765)
* Drop back to use IsAnInteractiveSession for SVC (#15749) (#15762)
* Fix setting version table in dump (#15753) (#15759)
* Fix close button change on delete in simplemde area (#15737) (#15747)
* Defer closing the gitrepo until the end of the wrapped context functions (#15653) (#15746)
* Fix some ui bug about draft release (#15137) (#15745)
* Only log Error on getLastCommitStatus error to let pull list still be visible (#15716) (#15715)
* Move tooltip down to allow selection of Remove File on error (#15672) (#15714)
* Fix setting redis db path (#15698) (#15708)
* Fix DB session cleanup (#15697) (#15700)
* Fixed several activation bugs (#15473) (#15685)
* Delete references if repository gets deleted (#15681) (#15684)
* Fix orphaned objects deletion bug (#15657) (#15683)
* Delete protected branch if repository gets removed (#15658) (#15676)
* Remove spurious set name from eventsource.sharedworker.js (#15643) (#15652)
* Not update updated unix for `git gc` (#15637) (#15641)
* Fix commit graph author link (#15627) (#15630)
* Fix webhook timeout bug (#15613) (#15621)
* Resolve panic on failed interface conversion in migration v156 (#15604) (#15610)
* Fix missing storage init (#15589) (#15598)
* If the default branch is not present do not report error on stats indexing (#15546 & #15583) (#15594)
* Fix lfs management find (#15537) (#15578)
* Fix NPE on view commit with notes (#15561) (#15573)
* Fix bug on commit graph (#15517) (#15530)
* Send size to /avatars if requested (#15459) (#15528)
* Prevent migration 156 failure if tag commit missing (#15519) (#15527)
* ENHANCEMENTS
* Display conflict-free merge messages for pull requests (#15773) (#15796)
* Exponential Backoff for ByteFIFO (#15724) (#15793)
* Issue list alignment tweaks (#15483) (#15766)
* Implement delete release attachments and update release attachments' name (#14130) (#15666)
* Add placeholder text to deploy key textarea (#15575) (#15576)
* Project board improvements (#15429) (#15560)
* Repo branch page: label size, PR ref, new PR button alignment (#15363) (#15365)
* MISC
* Fix webkit calendar icon color on arc-green (#15713) (#15728)
* Performance improvement for last commit cache and show-ref (#15455) (#15701)
* Bump unrolled/render to v1.1.0 (#15581) (#15608)
* Add ETag header (#15370) (#15552)
## [1.14.1](https://github.com/go-gitea/gitea/releases/tag/v1.14.1) - 2021-04-15
* BUGFIXES
* Fix bug clone wiki (#15499) (#15502)
* Github Migration ignore rate limit, if not enabled (#15490) (#15495)
* Use subdir for URL (#15446) (#15493)
* Query the DB for the hash before inserting in to email_hash (#15457) (#15491)
* Ensure review dismissal only dismisses the correct review (#15477) (#15489)
* Use index of the supported tags to choose user lang (#15452) (#15488)
* Fix wrong file link in code search page (#15466) (#15486)
* Quick template fix for built-in SSH server in admin config (#15464) (#15481)
* Prevent superfluous response.WriteHeader (#15456) (#15476)
* Fix ambiguous argument error on tags (#15432) (#15474)
* Add created_unix instead of expiry to migration (#15458) (#15463)
* Fix repository search (#15428) (#15442)
* Prevent NPE on avatar direct rendering if federated avatars disabled (#15434) (#15439)
* Fix wiki clone urls (#15430) (#15431)
* Fix dingtalk icon url at webhook (#15417) (#15426)
* Standardise icon on projects PR page (#15387) (#15408)
* ENHANCEMENTS
* Add option to skip LFS/attachment files for `dump` (#15407) (#15492)
* Clone panel fixes (#15436)
* Use semantic dropdown for code search query type (#15276) (#15364)
* BUILD
* Build go-git variants for windows (#15482) (#15487)
* Lock down build-images dependencies (Partial #15479) (#15480)
* MISC
* Performance improvement for list pull requests (#15447) (#15500)
* Fix potential copy lfs records failure when fork a repository (#15441) (#15485)
## [1.14.0](https://github.com/go-gitea/gitea/releases/tag/v1.14.0) - 2021-04-11
* SECURITY
* Respect approved email domain list for externally validated user registration (#15014)
* Add reverse proxy configuration support for remote IP address detection (#14959)
* Ensure validation occurs on clone addresses too (#14994)
* Fix several render issues highlighted during fuzzing (#14986)
* BREAKING
* Fix double 'push tag' action feed (#15078) (#15083)
* Remove possible resource leak (#15067) (#15082)
* Handle unauthorized user events gracefully (#15071) (#15074)
* Restore Access.log following migration to Chi framework (Stops access logging of /api/internal routes) (#14475)
* Migrate from Macaron to Chi framework (#14293)
* Deprecate building for mips (#14174)
@@ -42,6 +124,7 @@ been added to each release, please refer to the [blog](https://blog.gitea.io).
* Dump github/gitlab/gitea repository data to a local directory and restore to gitea (#12244)
* Create Rootless Docker image (#10154)
* API
* Speedup issue search (#15179) (#15192)
* Get pull, return head branch sha, even if deleted (#14931)
* Export LFS & TimeTracking function status (#14753)
* Show Gitea version in swagger (#14654)
@@ -66,6 +149,20 @@ been added to each release, please refer to the [blog](https://blog.gitea.io).
* Add more filters to issues search (#13514)
* Add review request api (#11355)
* BUGFIXES
* Fix delete nonexist oauth application 500 and prevent deadlock (#15384) (#15396)
* Always set the merge base used to merge the commit (#15352) (#15385)
* Upgrade to bluemonday 1.0.7 (#15379) (#15380)
* Turn RepoRef and RepoAssignment back into func(*Context) (#15372) (#15377)
* Move FCGI req.URL.Path fix-up to the FCGI listener (#15292) (#15361)
* Show diff on rename with diff changes (#15338) (#15339)
* Fix handling of logout event (#15323) (#15337)
* Fix CanCreateRepo check (#15311) (#15321)
* Fix xorm log stack level (#15285) (#15316)
* Fix bug in Wrap (#15302) (#15309)
* Drop the event source if we are unauthorized (#15275) (#15280)
* Backport Fix graph pagination (#15225) (#15249)
* Prevent NPE in CommentMustAsDiff if no hunk header (#15199) (#15200)
* should run RetrieveRepoMetas() for empty pr (#15187) (#15190)
* Move setting to enable closing issue via commit in non default branch to repo settings (#14965)
* Show correct issues for team dashboard (#14952)
* Ensure that new pull request button works on forked forks owned by owner of the root and reduce ambiguity (#14932)
@@ -122,6 +219,9 @@ been added to each release, please refer to the [blog](https://blog.gitea.io).
* Use GO variable in go-check target (#13146) (#13147)
* ENHANCEMENTS
* UI style improvements
* Dropzone styling improvements (#15291) (#15374)
* Add size to Save function (#15264) (#15270)
* Monaco improvements (#15333) (#15345)
* Support .mailmap in code activity stats (#15009)
* Sort release attachments by name (#15008)
* Add ui.explore settings to control view of explore pages (#14094)
@@ -267,6 +367,52 @@ been added to each release, please refer to the [blog](https://blog.gitea.io).
* Reduce make verbosity (#13803)
* Add git command error directory on log (#13194)
## [1.13.7](https://github.com/go-gitea/gitea/releases/tag/v1.13.7) - 2021-04-07
* SECURITY
* Update to bluemonday-1.0.6 (#15294) (#15298)
* Clusterfuzz found another way (#15160) (#15169)
* API
* Fix wrong user returned in API (#15139) (#15150)
* BUGFIXES
* Add 'fonts' into 'KnownPublicEntries' (#15188) (#15317)
* Speed up `enry.IsVendor` (#15213) (#15246)
* Response 404 for diff/patch of a commit that not exist (#15221) (#15238)
* Prevent NPE in CommentMustAsDiff if no hunk header (#15199) (#15201)
* MISC
* Add size to Save function (#15264) (#15271)
## [1.13.6](https://github.com/go-gitea/gitea/releases/tag/v1.13.6) - 2021-03-23
* SECURITY
* Fix bug on avatar middleware (#15124) (#15125)
* Fix another clusterfuzz identified issue (#15096) (#15114)
* API
* Fix nil pointer exception in get pull reviews API (#15106)
* BUGFIXES
* Fix markdown rendering in milestone content (#15056) (#15092)
## [1.13.5](https://github.com/go-gitea/gitea/releases/tag/v1.13.5) - 2021-03-21
* SECURITY
* Update to goldmark 1.3.3 (#15059) (#15061)
* Another clusterfuzz spotted issue (#15032) (#15034)
* API
* Fix set milestone on PR creation (#14981) (#15001)
* Prevent panic when editing forked repos by API (#14960) (#14963)
* BUGFIXES
* Fix bug when upload on web (#15042) (#15055)
* Delete Labels & IssueLabels on Repo Delete too (#15039) (#15051)
* Fix postgres ID sequences broken by recreate-table (#15015) (#15029)
* Fix several render issues (#14986) (#15013)
* Make sure sibling images get a link too (#14979) (#14995)
* Fix Anchor jumping with escaped query components (#14969) (#14977)
* Fix release mail html template (#14976)
* Fix excluding more than two labels on issues list (#14962) (#14973)
* Don't mark each comment poster as OP (#14971) (#14972)
* Add "captcha" to list of reserved usernames (#14930)
* Re-enable import local paths after reversion from #13610 (#14925) (#14927)
## [1.13.4](https://github.com/go-gitea/gitea/releases/tag/v1.13.4) - 2021-03-07
* SECURITY

View File

@@ -577,6 +577,9 @@ release-windows: | $(DIST_DIRS)
$(GO) install src.techknowlogick.com/xgo@latest; \
fi
CGO_CFLAGS="$(CGO_CFLAGS)" xgo -go $(XGO_VERSION) -buildmode exe -dest $(DIST)/binaries -tags 'netgo osusergo $(TAGS)' -ldflags '-linkmode external -extldflags "-static" $(LDFLAGS)' -targets 'windows/*' -out gitea-$(VERSION) .
ifeq (,$(findstring gogit,$(TAGS)))
CGO_CFLAGS="$(CGO_CFLAGS)" xgo -go $(XGO_VERSION) -buildmode exe -dest $(DIST)/binaries -tags 'netgo osusergo gogit $(TAGS)' -ldflags '-linkmode external -extldflags "-static" $(LDFLAGS)' -targets 'windows/*' -out gitea-$(VERSION)-gogit .
endif
ifeq ($(CI),drone)
cp /build/* $(DIST)/binaries
endif
@@ -699,8 +702,8 @@ generate-gitignore:
GO111MODULE=on $(GO) run build/generate-gitignores.go
.PHONY: generate-images
generate-images: | node_modules
npm install --no-save --no-package-lock fabric@4 imagemin-zopfli@7
node build/generate-images.js $(TAGS)
.PHONY: generate-manpage

View File

@@ -21,6 +21,7 @@ import (
pwd "code.gitea.io/gitea/modules/password" pwd "code.gitea.io/gitea/modules/password"
repo_module "code.gitea.io/gitea/modules/repository" repo_module "code.gitea.io/gitea/modules/repository"
"code.gitea.io/gitea/modules/setting" "code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/storage"
"github.com/urfave/cli" "github.com/urfave/cli"
) )
@@ -489,6 +490,10 @@ func runDeleteUser(c *cli.Context) error {
return err
}
if err := storage.Init(); err != nil {
return err
}
var err error
var user *models.User
if c.IsSet("email") {

View File

@@ -129,6 +129,14 @@ It can be used for backup and capture Gitea server image to send to maintainer`,
Name: "skip-custom-dir", Name: "skip-custom-dir",
Usage: "Skip custom directory", Usage: "Skip custom directory",
}, },
cli.BoolFlag{
Name: "skip-lfs-data",
Usage: "Skip LFS data",
},
cli.BoolFlag{
Name: "skip-attachment-data",
Usage: "Skip attachment data",
},
cli.GenericFlag{
Name: "type",
Value: outputTypeEnum,
@@ -214,7 +222,9 @@ func runDump(ctx *cli.Context) error {
fatal("Failed to include repositories: %v", err) fatal("Failed to include repositories: %v", err)
} }
if err := storage.LFS.IterateObjects(func(objPath string, object storage.Object) error { if ctx.IsSet("skip-lfs-data") && ctx.Bool("skip-lfs-data") {
log.Info("Skip dumping LFS data")
} else if err := storage.LFS.IterateObjects(func(objPath string, object storage.Object) error {
info, err := object.Stat()
if err != nil {
return err
@@ -313,7 +323,9 @@ func runDump(ctx *cli.Context) error {
}
}
if ctx.IsSet("skip-attachment-data") && ctx.Bool("skip-attachment-data") {
log.Info("Skip dumping attachment data")
} else if err := storage.Attachments.IterateObjects(func(objPath string, object storage.Object) error {
info, err := object.Stat()
if err != nil {
return err

View File

@@ -9,9 +9,11 @@ import (
"net" "net"
"net/http" "net/http"
"net/http/fcgi" "net/http/fcgi"
"strings"
"code.gitea.io/gitea/modules/graceful" "code.gitea.io/gitea/modules/graceful"
"code.gitea.io/gitea/modules/log" "code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/setting"
)
func runHTTP(network, listenAddr, name string, m http.Handler) error {
@@ -48,7 +50,12 @@ func runFCGI(network, listenAddr, name string, m http.Handler) error {
fcgiServer := graceful.NewServer(network, listenAddr, name)
err := fcgiServer.ListenAndServe(func(listener net.Listener) error {
return fcgi.Serve(listener, http.HandlerFunc(func(resp http.ResponseWriter, req *http.Request) {
if setting.AppSubURL != "" {
req.URL.Path = strings.TrimPrefix(req.URL.Path, setting.AppSubURL)
}
m.ServeHTTP(resp, req)
}))
})
if err != nil {
log.Fatal("Failed to start FCGI main server: %v", err)

14
go.mod
View File

@@ -5,7 +5,7 @@ go 1.14
require (
cloud.google.com/go v0.78.0 // indirect
code.gitea.io/gitea-vet v0.2.1
code.gitea.io/sdk/gitea v0.14.0
gitea.com/go-chi/binding v0.0.0-20210301195521-1fe1c9a555e7
gitea.com/go-chi/cache v0.0.0-20210110083709-82c4c9ce2d5e
gitea.com/go-chi/captcha v0.0.0-20210110083842-e7696c336a1e
@@ -86,7 +86,7 @@ require (
github.com/mgechev/revive v1.0.3
github.com/mholt/acmez v0.1.3 // indirect
github.com/mholt/archiver/v3 v3.5.0
github.com/microcosm-cc/bluemonday v1.0.7
github.com/miekg/dns v1.1.40 // indirect
github.com/minio/md5-simd v1.1.2 // indirect
github.com/minio/minio-go/v7 v7.0.10
@@ -122,13 +122,13 @@ require (
github.com/unknwon/com v1.0.1
github.com/unknwon/i18n v0.0.0-20200823051745-09abd91c7f2c
github.com/unknwon/paginater v0.0.0-20200328080006-042474bd0eae
github.com/unrolled/render v1.1.0
github.com/urfave/cli v1.22.5
github.com/willf/bitset v1.1.11 // indirect
github.com/xanzy/go-gitlab v0.44.0
github.com/xanzy/ssh-agent v0.3.0 // indirect
github.com/yohcop/openid-go v1.0.0
github.com/yuin/goldmark v1.3.3
github.com/yuin/goldmark-highlighting v0.0.0-20200307114337-60d527fdb691
github.com/yuin/goldmark-meta v1.0.0
go.jolheiser.com/hcaptcha v0.0.4
@@ -136,9 +136,9 @@ require (
go.uber.org/multierr v1.6.0 // indirect
go.uber.org/zap v1.16.0 // indirect
golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4
golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44
golang.org/x/text v0.3.5
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba // indirect
golang.org/x/tools v0.1.0
@@ -153,5 +153,3 @@ require (
)
replace github.com/hashicorp/go-version => github.com/6543/go-version v1.2.4
replace github.com/microcosm-cc/bluemonday => github.com/lunny/bluemonday v1.0.5-0.20201227154428-ca34796141e8

25
go.sum
View File

@@ -38,8 +38,8 @@ cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RX
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
code.gitea.io/gitea-vet v0.2.1 h1:b30by7+3SkmiftK0RjuXqFvZg2q4p68uoPGuxhzBN0s=
code.gitea.io/gitea-vet v0.2.1/go.mod h1:zcNbT/aJEmivCAhfmkHOlT645KNOf9W2KnkLgFjGGfE=
code.gitea.io/sdk/gitea v0.14.0 h1:m4J352I3p9+bmJUfS+g0odeQzBY/5OXP91Gv6D4fnJ0=
code.gitea.io/sdk/gitea v0.14.0/go.mod h1:89WiyOX1KEcvjP66sRHdu0RafojGo60bT9UqW17VbWs=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
gitea.com/go-chi/binding v0.0.0-20210301195521-1fe1c9a555e7 h1:xCVJPY823C8RWpgMabTw2kOglDrg0iS3GcQU6wdwHkU=
gitea.com/go-chi/binding v0.0.0-20210301195521-1fe1c9a555e7/go.mod h1:AyfTrwtfYN54R/HmVvMYPnSTenH5bVoyh8x6tBluxEA=
@@ -196,8 +196,6 @@ github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chi-middleware/proxy v1.1.1 h1:4HaXUp8o2+bhHr1OhVy+VjN0+L7/07JDcn6v7YrTjrQ= github.com/chi-middleware/proxy v1.1.1 h1:4HaXUp8o2+bhHr1OhVy+VjN0+L7/07JDcn6v7YrTjrQ=
github.com/chi-middleware/proxy v1.1.1/go.mod h1:jQwMEJct2tz9VmtCELxvnXoMfa+SOdikvbVJVHv/M+0= github.com/chi-middleware/proxy v1.1.1/go.mod h1:jQwMEJct2tz9VmtCELxvnXoMfa+SOdikvbVJVHv/M+0=
github.com/chris-ramon/douceur v0.2.0 h1:IDMEdxlEUUBYBKE4z/mJnFyVXox+MjuEVDJNN27glkU=
github.com/chris-ramon/douceur v0.2.0/go.mod h1:wDW5xjJdeoMm1mRt4sD4c/LbF/mWdEpRXQKjTR8nIBE=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI= github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI= github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU= github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
@@ -776,8 +774,6 @@ github.com/libdns/libdns v0.2.0 h1:ewg3ByWrdUrxrje8ChPVMBNcotg7H9LQYg+u5De2RzI=
github.com/libdns/libdns v0.2.0/go.mod h1:yQCXzk1lEZmmCPa857bnk4TsOiqYasqpyOEeSObbb40= github.com/libdns/libdns v0.2.0/go.mod h1:yQCXzk1lEZmmCPa857bnk4TsOiqYasqpyOEeSObbb40=
github.com/lightstep/lightstep-tracer-common/golang/gogo v0.0.0-20190605223551-bc2310a04743/go.mod h1:qklhhLq1aX+mtWk9cPHPzaBjWImj5ULL6C7HFJtXQMM= github.com/lightstep/lightstep-tracer-common/golang/gogo v0.0.0-20190605223551-bc2310a04743/go.mod h1:qklhhLq1aX+mtWk9cPHPzaBjWImj5ULL6C7HFJtXQMM=
github.com/lightstep/lightstep-tracer-go v0.18.1/go.mod h1:jlF1pusYV4pidLvZ+XD0UBX0ZE6WURAspgAczcDHrL4= github.com/lightstep/lightstep-tracer-go v0.18.1/go.mod h1:jlF1pusYV4pidLvZ+XD0UBX0ZE6WURAspgAczcDHrL4=
github.com/lunny/bluemonday v1.0.5-0.20201227154428-ca34796141e8 h1:1omo92DLtxQu6VwVPSZAmduHaK5zssed6cvkHyl1XOg=
github.com/lunny/bluemonday v1.0.5-0.20201227154428-ca34796141e8/go.mod h1:8iwZnFn2CDDNZ0r6UXhF4xawGvzaqzCRa1n3/lO3W2w=
github.com/lunny/dingtalk_webhook v0.0.0-20171025031554-e3534c89ef96 h1:uNwtsDp7ci48vBTTxDuwcoTXz4lwtDTe7TjCQ0noaWY= github.com/lunny/dingtalk_webhook v0.0.0-20171025031554-e3534c89ef96 h1:uNwtsDp7ci48vBTTxDuwcoTXz4lwtDTe7TjCQ0noaWY=
github.com/lunny/dingtalk_webhook v0.0.0-20171025031554-e3534c89ef96/go.mod h1:mmIfjCSQlGYXmJ95jFN84AkQFnVABtKuJL8IrzwvUKQ= github.com/lunny/dingtalk_webhook v0.0.0-20171025031554-e3534c89ef96/go.mod h1:mmIfjCSQlGYXmJ95jFN84AkQFnVABtKuJL8IrzwvUKQ=
github.com/lunny/log v0.0.0-20160921050905-7887c61bf0de/go.mod h1:3q8WtuPQsoRbatJuy3nvq/hRSvuBJrHHr+ybPPiNvHQ= github.com/lunny/log v0.0.0-20160921050905-7887c61bf0de/go.mod h1:3q8WtuPQsoRbatJuy3nvq/hRSvuBJrHHr+ybPPiNvHQ=
@@ -834,6 +830,8 @@ github.com/mholt/acmez v0.1.3 h1:J7MmNIk4Qf9b8mAGqAh4XkNeowv3f1zW816yf4zt7Qk=
github.com/mholt/acmez v0.1.3/go.mod h1:8qnn8QA/Ewx8E3ZSsmscqsIjhhpxuy9vqdgbX2ceceM= github.com/mholt/acmez v0.1.3/go.mod h1:8qnn8QA/Ewx8E3ZSsmscqsIjhhpxuy9vqdgbX2ceceM=
github.com/mholt/archiver/v3 v3.5.0 h1:nE8gZIrw66cu4osS/U7UW7YDuGMHssxKutU8IfWxwWE= github.com/mholt/archiver/v3 v3.5.0 h1:nE8gZIrw66cu4osS/U7UW7YDuGMHssxKutU8IfWxwWE=
github.com/mholt/archiver/v3 v3.5.0/go.mod h1:qqTTPUK/HZPFgFQ/TJ3BzvTpF/dPtFVJXdQbCmeMxwc= github.com/mholt/archiver/v3 v3.5.0/go.mod h1:qqTTPUK/HZPFgFQ/TJ3BzvTpF/dPtFVJXdQbCmeMxwc=
github.com/microcosm-cc/bluemonday v1.0.7 h1:6yAQfk4XT+PI/dk1ZeBp1gr3Q2Hd1DR0O3aEyPUJVTE=
github.com/microcosm-cc/bluemonday v1.0.7/go.mod h1:HOT/6NaBlR0f9XlxD3zolN6Z3N8Lp4pvhp+jLS5ihnI=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg= github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/miekg/dns v1.1.30/go.mod h1:KNUDUusw/aVsxyTYZM1oqvCicbwhgbNgztCETuNZ7xM= github.com/miekg/dns v1.1.30/go.mod h1:KNUDUusw/aVsxyTYZM1oqvCicbwhgbNgztCETuNZ7xM=
github.com/miekg/dns v1.1.40 h1:pyyPFfGMnciYUk/mXpKkVmeMQjfXqt3FAJ2hy7tPiLA= github.com/miekg/dns v1.1.40 h1:pyyPFfGMnciYUk/mXpKkVmeMQjfXqt3FAJ2hy7tPiLA=
@@ -1117,6 +1115,8 @@ github.com/unknwon/paginater v0.0.0-20200328080006-042474bd0eae h1:ihaXiJkaca54I
github.com/unknwon/paginater v0.0.0-20200328080006-042474bd0eae/go.mod h1:1fdkY6xxl6ExVs2QFv7R0F5IRZHKA8RahhB9fMC9RvM= github.com/unknwon/paginater v0.0.0-20200328080006-042474bd0eae/go.mod h1:1fdkY6xxl6ExVs2QFv7R0F5IRZHKA8RahhB9fMC9RvM=
github.com/unrolled/render v1.0.3 h1:baO+NG1bZSF2WR4zwh+0bMWauWky7DVrTOfvE2w+aFo= github.com/unrolled/render v1.0.3 h1:baO+NG1bZSF2WR4zwh+0bMWauWky7DVrTOfvE2w+aFo=
github.com/unrolled/render v1.0.3/go.mod h1:gN9T0NhL4Bfbwu8ann7Ry/TGHYfosul+J0obPf6NBdM= github.com/unrolled/render v1.0.3/go.mod h1:gN9T0NhL4Bfbwu8ann7Ry/TGHYfosul+J0obPf6NBdM=
github.com/unrolled/render v1.1.0 h1:gvpR9hHxTt6DcGqRYuVVFcfd8rtK+nyEPUJN06KB57Q=
github.com/unrolled/render v1.1.0/go.mod h1:gN9T0NhL4Bfbwu8ann7Ry/TGHYfosul+J0obPf6NBdM=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA= github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0= github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/urfave/cli v1.22.5 h1:lNq9sAHXK2qfdI8W+GRItjCEkI+2oR4d+MEHy1CKXoU= github.com/urfave/cli v1.22.5 h1:lNq9sAHXK2qfdI8W+GRItjCEkI+2oR4d+MEHy1CKXoU=
@@ -1145,8 +1145,8 @@ github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9de
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.2 h1:YjHC5TgyMmHpicTgEqDN0Q96Xo8K6tLXPnmNOHXCgs0= github.com/yuin/goldmark v1.3.3 h1:37BdQwPx8VOSic8eDSWee6QL9mRpZRm9VJp/QugNrW0=
github.com/yuin/goldmark v1.3.2/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= github.com/yuin/goldmark v1.3.3/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark-highlighting v0.0.0-20200307114337-60d527fdb691 h1:VWSxtAiQNh3zgHJpdpkpVYjTPqRE3P6UZCOPa1nRDio= github.com/yuin/goldmark-highlighting v0.0.0-20200307114337-60d527fdb691 h1:VWSxtAiQNh3zgHJpdpkpVYjTPqRE3P6UZCOPa1nRDio=
github.com/yuin/goldmark-highlighting v0.0.0-20200307114337-60d527fdb691/go.mod h1:YLF3kDffRfUH/bTxOxHhV6lxwIB3Vfj91rEwNMS9MXo= github.com/yuin/goldmark-highlighting v0.0.0-20200307114337-60d527fdb691/go.mod h1:YLF3kDffRfUH/bTxOxHhV6lxwIB3Vfj91rEwNMS9MXo=
github.com/yuin/goldmark-meta v1.0.0 h1:ScsatUIT2gFS6azqzLGUjgOnELsBOxMXerM3ogdJhAM= github.com/yuin/goldmark-meta v1.0.0 h1:ScsatUIT2gFS6azqzLGUjgOnELsBOxMXerM3ogdJhAM=
@@ -1321,8 +1321,9 @@ golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwY
golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110 h1:qWPm9rbaAMKs8Bq/9LRpbMqxWRVUAQwMI9fVrssnTfw= golang.org/x/net v0.0.0-20210331212208-0fccb6fa2b5c/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4 h1:4nGaVu0QrbjT/AK2PRLuQfQuh6DJve+pELhqTdAj3x0=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20181106182150-f42d05182288/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20181106182150-f42d05182288/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1418,8 +1419,8 @@ golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210228012217-479acdf4ea46 h1:V066+OYJ66oTjnhm4Yrn7SXIwSCiDQJxpBxmvqb1N1c= golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44 h1:Bli41pIlzTzf3KEY06n+xnzK/BESIg2ze4Pgfh/aI8c=
golang.org/x/sys v0.0.0-20210228012217-479acdf4ea46/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw= golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1 h1:v+OssWQX+hTHEmOBgwxdZxK4zHq3yOs8F9J7mk0PY8E= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1 h1:v+OssWQX+hTHEmOBgwxdZxK4zHq3yOs8F9J7mk0PY8E=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=


@@ -239,6 +239,26 @@ func doAPICreatePullRequest(ctx APITestContext, owner, repo, baseBranch, headBra
} }
} }
func doAPIGetPullRequest(ctx APITestContext, owner, repo string, index int64) func(*testing.T) (api.PullRequest, error) {
return func(t *testing.T) (api.PullRequest, error) {
urlStr := fmt.Sprintf("/api/v1/repos/%s/%s/pulls/%d?token=%s",
owner, repo, index, ctx.Token)
req := NewRequest(t, http.MethodGet, urlStr)
expected := 200
if ctx.ExpectedCode != 0 {
expected = ctx.ExpectedCode
}
resp := ctx.Session.MakeRequest(t, req, expected)
json := jsoniter.ConfigCompatibleWithStandardLibrary
decoder := json.NewDecoder(resp.Body)
pr := api.PullRequest{}
err := decoder.Decode(&pr)
return pr, err
}
}
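
A short usage sketch of the new helper; the ctx fixture, owner, repo and index values below are illustrative, not taken from this diff:

pr, err := doAPIGetPullRequest(ctx, "user2", "repo1", 1)(t)
assert.NoError(t, err)
assert.Equal(t, int64(1), pr.Index)
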
func doAPIMergePullRequest(ctx APITestContext, owner, repo string, index int64) func(*testing.T) { func doAPIMergePullRequest(ctx APITestContext, owner, repo string, index int64) func(*testing.T) {
return func(t *testing.T) { return func(t *testing.T) {
urlStr := fmt.Sprintf("/api/v1/repos/%s/%s/pulls/%d/merge?token=%s", urlStr := fmt.Sprintf("/api/v1/repos/%s/%s/pulls/%d/merge?token=%s",


@@ -92,6 +92,10 @@ func testAPIDeleteOAuth2Application(t *testing.T) {
session.MakeRequest(t, req, http.StatusNoContent) session.MakeRequest(t, req, http.StatusNoContent)
models.AssertNotExistsBean(t, &models.OAuth2Application{UID: oldApp.UID, Name: oldApp.Name}) models.AssertNotExistsBean(t, &models.OAuth2Application{UID: oldApp.UID, Name: oldApp.Name})
// Delete again will return not found
req = NewRequest(t, "DELETE", urlStr)
session.MakeRequest(t, req, http.StatusNotFound)
} }
func testAPIGetOAuth2Application(t *testing.T) { func testAPIGetOAuth2Application(t *testing.T) {


@@ -130,11 +130,14 @@ func getNewRepoEditOption(opts *api.EditRepoOption) *api.EditRepoOption {
func TestAPIRepoEdit(t *testing.T) { func TestAPIRepoEdit(t *testing.T) {
onGiteaRun(t, func(t *testing.T, u *url.URL) { onGiteaRun(t, func(t *testing.T, u *url.URL) {
bFalse, bTrue := false, true
user2 := models.AssertExistsAndLoadBean(t, &models.User{ID: 2}).(*models.User) // owner of the repo1 & repo16 user2 := models.AssertExistsAndLoadBean(t, &models.User{ID: 2}).(*models.User) // owner of the repo1 & repo16
user3 := models.AssertExistsAndLoadBean(t, &models.User{ID: 3}).(*models.User) // owner of the repo3, is an org user3 := models.AssertExistsAndLoadBean(t, &models.User{ID: 3}).(*models.User) // owner of the repo3, is an org
user4 := models.AssertExistsAndLoadBean(t, &models.User{ID: 4}).(*models.User) // owner of neither repos user4 := models.AssertExistsAndLoadBean(t, &models.User{ID: 4}).(*models.User) // owner of neither repos
repo1 := models.AssertExistsAndLoadBean(t, &models.Repository{ID: 1}).(*models.Repository) // public repo repo1 := models.AssertExistsAndLoadBean(t, &models.Repository{ID: 1}).(*models.Repository) // public repo
repo3 := models.AssertExistsAndLoadBean(t, &models.Repository{ID: 3}).(*models.Repository) // public repo repo3 := models.AssertExistsAndLoadBean(t, &models.Repository{ID: 3}).(*models.Repository) // public repo
repo15 := models.AssertExistsAndLoadBean(t, &models.Repository{ID: 15}).(*models.Repository) // empty repo
repo16 := models.AssertExistsAndLoadBean(t, &models.Repository{ID: 16}).(*models.Repository) // private repo repo16 := models.AssertExistsAndLoadBean(t, &models.Repository{ID: 16}).(*models.Repository) // private repo
// Get user2's token // Get user2's token
@@ -286,9 +289,8 @@ func TestAPIRepoEdit(t *testing.T) {
// Test making a repo public that is private // Test making a repo public that is private
repo16 = models.AssertExistsAndLoadBean(t, &models.Repository{ID: 16}).(*models.Repository) repo16 = models.AssertExistsAndLoadBean(t, &models.Repository{ID: 16}).(*models.Repository)
assert.True(t, repo16.IsPrivate) assert.True(t, repo16.IsPrivate)
private := false
repoEditOption = &api.EditRepoOption{ repoEditOption = &api.EditRepoOption{
Private: &private, Private: &bFalse,
} }
url = fmt.Sprintf("/api/v1/repos/%s/%s?token=%s", user2.Name, repo16.Name, token2) url = fmt.Sprintf("/api/v1/repos/%s/%s?token=%s", user2.Name, repo16.Name, token2)
req = NewRequestWithJSON(t, "PATCH", url, &repoEditOption) req = NewRequestWithJSON(t, "PATCH", url, &repoEditOption)
@@ -296,11 +298,24 @@ func TestAPIRepoEdit(t *testing.T) {
repo16 = models.AssertExistsAndLoadBean(t, &models.Repository{ID: 16}).(*models.Repository) repo16 = models.AssertExistsAndLoadBean(t, &models.Repository{ID: 16}).(*models.Repository)
assert.False(t, repo16.IsPrivate) assert.False(t, repo16.IsPrivate)
// Make it private again // Make it private again
private = true repoEditOption.Private = &bTrue
repoEditOption.Private = &private
req = NewRequestWithJSON(t, "PATCH", url, &repoEditOption) req = NewRequestWithJSON(t, "PATCH", url, &repoEditOption)
_ = session.MakeRequest(t, req, http.StatusOK) _ = session.MakeRequest(t, req, http.StatusOK)
// Test to change empty repo
assert.False(t, repo15.IsArchived)
url = fmt.Sprintf("/api/v1/repos/%s/%s?token=%s", user2.Name, repo15.Name, token2)
req = NewRequestWithJSON(t, "PATCH", url, &api.EditRepoOption{
Archived: &bTrue,
})
_ = session.MakeRequest(t, req, http.StatusOK)
repo15 = models.AssertExistsAndLoadBean(t, &models.Repository{ID: 15}).(*models.Repository)
assert.True(t, repo15.IsArchived)
req = NewRequestWithJSON(t, "PATCH", url, &api.EditRepoOption{
Archived: &bFalse,
})
_ = session.MakeRequest(t, req, http.StatusOK)
// Test using org repo "user3/repo3" where user2 is a collaborator // Test using org repo "user3/repo3" where user2 is a collaborator
origRepoEditOption = getRepoEditOptionFromRepo(repo3) origRepoEditOption = getRepoEditOptionFromRepo(repo3)
repoEditOption = getNewRepoEditOption(origRepoEditOption) repoEditOption = getNewRepoEditOption(origRepoEditOption)


@@ -122,7 +122,7 @@ func TestGetAttachment(t *testing.T) {
t.Run(tc.name, func(t *testing.T) { t.Run(tc.name, func(t *testing.T) {
//Write empty file to be available for response //Write empty file to be available for response
if tc.createFile { if tc.createFile {
_, err := storage.Attachments.Save(models.AttachmentRelativePath(tc.uuid), strings.NewReader("hello world")) _, err := storage.Attachments.Save(models.AttachmentRelativePath(tc.uuid), strings.NewReader("hello world"), -1)
assert.NoError(t, err) assert.NoError(t, err)
} }
//Actual test //Actual test


@@ -5,6 +5,7 @@
package integrations package integrations
import ( import (
"encoding/hex"
"fmt" "fmt"
"io/ioutil" "io/ioutil"
"math/rand" "math/rand"
@@ -208,13 +209,13 @@ func rawTest(t *testing.T, ctx *APITestContext, little, big, littleLFS, bigLFS s
// Request raw paths // Request raw paths
req := NewRequest(t, "GET", path.Join("/", username, reponame, "/raw/branch/master/", little)) req := NewRequest(t, "GET", path.Join("/", username, reponame, "/raw/branch/master/", little))
resp := session.MakeRequest(t, req, http.StatusOK) resp := session.MakeRequestNilResponseRecorder(t, req, http.StatusOK)
assert.Equal(t, littleSize, resp.Body.Len()) assert.Equal(t, littleSize, resp.Length)
setting.CheckLFSVersion() setting.CheckLFSVersion()
if setting.LFS.StartServer { if setting.LFS.StartServer {
req = NewRequest(t, "GET", path.Join("/", username, reponame, "/raw/branch/master/", littleLFS)) req = NewRequest(t, "GET", path.Join("/", username, reponame, "/raw/branch/master/", littleLFS))
resp = session.MakeRequest(t, req, http.StatusOK) resp := session.MakeRequest(t, req, http.StatusOK)
assert.NotEqual(t, littleSize, resp.Body.Len()) assert.NotEqual(t, littleSize, resp.Body.Len())
assert.LessOrEqual(t, resp.Body.Len(), 1024) assert.LessOrEqual(t, resp.Body.Len(), 1024)
if resp.Body.Len() != littleSize && resp.Body.Len() <= 1024 { if resp.Body.Len() != littleSize && resp.Body.Len() <= 1024 {
@@ -224,12 +225,12 @@ func rawTest(t *testing.T, ctx *APITestContext, little, big, littleLFS, bigLFS s
if !testing.Short() { if !testing.Short() {
req = NewRequest(t, "GET", path.Join("/", username, reponame, "/raw/branch/master/", big)) req = NewRequest(t, "GET", path.Join("/", username, reponame, "/raw/branch/master/", big))
resp = session.MakeRequest(t, req, http.StatusOK) resp := session.MakeRequestNilResponseRecorder(t, req, http.StatusOK)
assert.Equal(t, bigSize, resp.Body.Len()) assert.Equal(t, bigSize, resp.Length)
if setting.LFS.StartServer { if setting.LFS.StartServer {
req = NewRequest(t, "GET", path.Join("/", username, reponame, "/raw/branch/master/", bigLFS)) req = NewRequest(t, "GET", path.Join("/", username, reponame, "/raw/branch/master/", bigLFS))
resp = session.MakeRequest(t, req, http.StatusOK) resp := session.MakeRequest(t, req, http.StatusOK)
assert.NotEqual(t, bigSize, resp.Body.Len()) assert.NotEqual(t, bigSize, resp.Body.Len())
if resp.Body.Len() != bigSize && resp.Body.Len() <= 1024 { if resp.Body.Len() != bigSize && resp.Body.Len() <= 1024 {
assert.Contains(t, resp.Body.String(), models.LFSMetaFileIdentifier) assert.Contains(t, resp.Body.String(), models.LFSMetaFileIdentifier)
@@ -450,27 +451,35 @@ func doMergeFork(ctx, baseCtx APITestContext, baseBranch, headBranch string) fun
t.Run("EnsureCanSeePull", doEnsureCanSeePull(baseCtx, pr)) t.Run("EnsureCanSeePull", doEnsureCanSeePull(baseCtx, pr))
// Then get the diff string // Then get the diff string
var diffStr string var diffHash string
var diffLength int
t.Run("GetDiff", func(t *testing.T) { t.Run("GetDiff", func(t *testing.T) {
req := NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d.diff", url.PathEscape(baseCtx.Username), url.PathEscape(baseCtx.Reponame), pr.Index)) req := NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d.diff", url.PathEscape(baseCtx.Username), url.PathEscape(baseCtx.Reponame), pr.Index))
resp := ctx.Session.MakeRequest(t, req, http.StatusOK) resp := ctx.Session.MakeRequestNilResponseHashSumRecorder(t, req, http.StatusOK)
diffStr = resp.Body.String() diffHash = string(resp.Hash.Sum(nil))
diffLength = resp.Length
}) })
// Now: Merge the PR & make sure that doesn't break the PR page or change its diff // Now: Merge the PR & make sure that doesn't break the PR page or change its diff
t.Run("MergePR", doAPIMergePullRequest(baseCtx, baseCtx.Username, baseCtx.Reponame, pr.Index)) t.Run("MergePR", doAPIMergePullRequest(baseCtx, baseCtx.Username, baseCtx.Reponame, pr.Index))
t.Run("EnsureCanSeePull", doEnsureCanSeePull(baseCtx, pr)) t.Run("EnsureCanSeePull", doEnsureCanSeePull(baseCtx, pr))
t.Run("EnsureDiffNoChange", doEnsureDiffNoChange(baseCtx, pr, diffStr)) t.Run("CheckPR", func(t *testing.T) {
oldMergeBase := pr.MergeBase
pr2, err := doAPIGetPullRequest(baseCtx, baseCtx.Username, baseCtx.Reponame, pr.Index)(t)
assert.NoError(t, err)
assert.Equal(t, oldMergeBase, pr2.MergeBase)
})
t.Run("EnsurDiffNoChange", doEnsureDiffNoChange(baseCtx, pr, diffHash, diffLength))
// Then: Delete the head branch & make sure that doesn't break the PR page or change its diff // Then: Delete the head branch & make sure that doesn't break the PR page or change its diff
t.Run("DeleteHeadBranch", doBranchDelete(baseCtx, baseCtx.Username, baseCtx.Reponame, headBranch)) t.Run("DeleteHeadBranch", doBranchDelete(baseCtx, baseCtx.Username, baseCtx.Reponame, headBranch))
t.Run("EnsureCanSeePull", doEnsureCanSeePull(baseCtx, pr)) t.Run("EnsureCanSeePull", doEnsureCanSeePull(baseCtx, pr))
t.Run("EnsureDiffNoChange", doEnsureDiffNoChange(baseCtx, pr, diffStr)) t.Run("EnsureDiffNoChange", doEnsureDiffNoChange(baseCtx, pr, diffHash, diffLength))
// Delete the head repository & make sure that doesn't break the PR page or change its diff // Delete the head repository & make sure that doesn't break the PR page or change its diff
t.Run("DeleteHeadRepository", doAPIDeleteRepository(ctx)) t.Run("DeleteHeadRepository", doAPIDeleteRepository(ctx))
t.Run("EnsureCanSeePull", doEnsureCanSeePull(baseCtx, pr)) t.Run("EnsureCanSeePull", doEnsureCanSeePull(baseCtx, pr))
t.Run("EnsureDiffNoChange", doEnsureDiffNoChange(baseCtx, pr, diffStr)) t.Run("EnsureDiffNoChange", doEnsureDiffNoChange(baseCtx, pr, diffHash, diffLength))
} }
} }
@@ -514,20 +523,15 @@ func doEnsureCanSeePull(ctx APITestContext, pr api.PullRequest) func(t *testing.
} }
} }
func doEnsureDiffNoChange(ctx APITestContext, pr api.PullRequest, diffStr string) func(t *testing.T) { func doEnsureDiffNoChange(ctx APITestContext, pr api.PullRequest, diffHash string, diffLength int) func(t *testing.T) {
return func(t *testing.T) { return func(t *testing.T) {
req := NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d.diff", url.PathEscape(ctx.Username), url.PathEscape(ctx.Reponame), pr.Index)) req := NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d.diff", url.PathEscape(ctx.Username), url.PathEscape(ctx.Reponame), pr.Index))
resp := ctx.Session.MakeRequest(t, req, http.StatusOK) resp := ctx.Session.MakeRequestNilResponseHashSumRecorder(t, req, http.StatusOK)
expectedMaxLen := len(diffStr) actual := string(resp.Hash.Sum(nil))
if expectedMaxLen > 800 { actualLength := resp.Length
expectedMaxLen = 800
} equal := diffHash == actual
actual := resp.Body.String() assert.True(t, equal, "Unexpected change in the diff string: expected hash: %s size: %d but was actually: %s size: %d", hex.EncodeToString([]byte(diffHash)), diffLength, hex.EncodeToString([]byte(actual)), actualLength)
actualMaxLen := len(actual)
if actualMaxLen > 800 {
actualMaxLen = 800
}
assert.Equal(t, diffStr, actual, "Unexpected change in the diff string: expected: %s but was actually: %s", diffStr[:expectedMaxLen], actual[:actualMaxLen])
} }
} }


@@ -9,6 +9,8 @@ import (
"context" "context"
"database/sql" "database/sql"
"fmt" "fmt"
"hash"
"hash/fnv"
"io" "io"
"net/http" "net/http"
"net/http/cookiejar" "net/http/cookiejar"
@@ -58,6 +60,26 @@ func NewNilResponseRecorder() *NilResponseRecorder {
} }
} }
type NilResponseHashSumRecorder struct {
httptest.ResponseRecorder
Hash hash.Hash
Length int
}
func (n *NilResponseHashSumRecorder) Write(b []byte) (int, error) {
_, _ = n.Hash.Write(b)
n.Length += len(b)
return len(b), nil
}
// NewNilResponseHashSumRecorder returns an initialized NilResponseHashSumRecorder.
func NewNilResponseHashSumRecorder() *NilResponseHashSumRecorder {
return &NilResponseHashSumRecorder{
Hash: fnv.New32(),
ResponseRecorder: *httptest.NewRecorder(),
}
}
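
As an aside, a minimal sketch of the incremental hashing the recorder relies on, using only the standard library (the chunk contents are made up):

h := fnv.New32()
_, _ = h.Write([]byte("first chunk"))
_, _ = h.Write([]byte("second chunk"))
digest := hex.EncodeToString(h.Sum(nil))
_ = digest // compare digests instead of keeping whole response bodies in memory
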
func TestMain(m *testing.M) { func TestMain(m *testing.M) {
defer log.Close() defer log.Close()
@@ -284,6 +306,23 @@ func (s *TestSession) MakeRequestNilResponseRecorder(t testing.TB, req *http.Req
return resp return resp
} }
func (s *TestSession) MakeRequestNilResponseHashSumRecorder(t testing.TB, req *http.Request, expectedStatus int) *NilResponseHashSumRecorder {
t.Helper()
baseURL, err := url.Parse(setting.AppURL)
assert.NoError(t, err)
for _, c := range s.jar.Cookies(baseURL) {
req.AddCookie(c)
}
resp := MakeRequestNilResponseHashSumRecorder(t, req, expectedStatus)
ch := http.Header{}
ch.Add("Cookie", strings.Join(resp.Header()["Set-Cookie"], ";"))
cr := http.Request{Header: ch}
s.jar.SetCookies(baseURL, cr.Cookies())
return resp
}
const userPassword = "password" const userPassword = "password"
var loginSessionCache = make(map[string]*TestSession, 10) var loginSessionCache = make(map[string]*TestSession, 10)
@@ -429,6 +468,19 @@ func MakeRequestNilResponseRecorder(t testing.TB, req *http.Request, expectedSta
return recorder return recorder
} }
func MakeRequestNilResponseHashSumRecorder(t testing.TB, req *http.Request, expectedStatus int) *NilResponseHashSumRecorder {
t.Helper()
recorder := NewNilResponseHashSumRecorder()
c.ServeHTTP(recorder, req)
if expectedStatus != NoExpectedStatus {
if !assert.EqualValues(t, expectedStatus, recorder.Code,
"Request: %s %s", req.Method, req.URL.String()) {
logUnexpectedResponse(t, &recorder.ResponseRecorder)
}
}
return recorder
}
// logUnexpectedResponse logs the contents of an unexpected response. // logUnexpectedResponse logs the contents of an unexpected response.
func logUnexpectedResponse(t testing.TB, recorder *httptest.ResponseRecorder) { func logUnexpectedResponse(t testing.TB, recorder *httptest.ResponseRecorder) {
t.Helper() t.Helper()


@@ -10,9 +10,11 @@ import (
"testing" "testing"
"time" "time"
"code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/setting" "code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/test" "code.gitea.io/gitea/modules/test"
"github.com/PuerkitoBio/goquery"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/unknwon/i18n" "github.com/unknwon/i18n"
) )
@@ -83,7 +85,7 @@ func TestCreateRelease(t *testing.T) {
session := loginUser(t, "user2") session := loginUser(t, "user2")
createNewRelease(t, session, "/user2/repo1", "v0.0.1", "v0.0.1", false, false) createNewRelease(t, session, "/user2/repo1", "v0.0.1", "v0.0.1", false, false)
checkLatestReleaseAndCount(t, session, "/user2/repo1", "v0.0.1", i18n.Tr("en", "repo.release.stable"), 2) checkLatestReleaseAndCount(t, session, "/user2/repo1", "v0.0.1", i18n.Tr("en", "repo.release.stable"), 3)
} }
func TestCreateReleasePreRelease(t *testing.T) { func TestCreateReleasePreRelease(t *testing.T) {
@@ -92,7 +94,7 @@ func TestCreateReleasePreRelease(t *testing.T) {
session := loginUser(t, "user2") session := loginUser(t, "user2")
createNewRelease(t, session, "/user2/repo1", "v0.0.1", "v0.0.1", true, false) createNewRelease(t, session, "/user2/repo1", "v0.0.1", "v0.0.1", true, false)
checkLatestReleaseAndCount(t, session, "/user2/repo1", "v0.0.1", i18n.Tr("en", "repo.release.prerelease"), 2) checkLatestReleaseAndCount(t, session, "/user2/repo1", "v0.0.1", i18n.Tr("en", "repo.release.prerelease"), 3)
} }
func TestCreateReleaseDraft(t *testing.T) { func TestCreateReleaseDraft(t *testing.T) {
@@ -101,7 +103,7 @@ func TestCreateReleaseDraft(t *testing.T) {
session := loginUser(t, "user2") session := loginUser(t, "user2")
createNewRelease(t, session, "/user2/repo1", "v0.0.1", "v0.0.1", false, true) createNewRelease(t, session, "/user2/repo1", "v0.0.1", "v0.0.1", false, true)
checkLatestReleaseAndCount(t, session, "/user2/repo1", "v0.0.1", i18n.Tr("en", "repo.release.draft"), 2) checkLatestReleaseAndCount(t, session, "/user2/repo1", "v0.0.1", i18n.Tr("en", "repo.release.draft"), 3)
} }
func TestCreateReleasePaging(t *testing.T) { func TestCreateReleasePaging(t *testing.T) {
@@ -127,3 +129,80 @@ func TestCreateReleasePaging(t *testing.T) {
session2 := loginUser(t, "user4") session2 := loginUser(t, "user4")
checkLatestReleaseAndCount(t, session2, "/user2/repo1", "v0.0.11", i18n.Tr("en", "repo.release.stable"), 10) checkLatestReleaseAndCount(t, session2, "/user2/repo1", "v0.0.11", i18n.Tr("en", "repo.release.stable"), 10)
} }
func TestViewReleaseListNoLogin(t *testing.T) {
defer prepareTestEnv(t)()
repo := models.AssertExistsAndLoadBean(t, &models.Repository{ID: 1}).(*models.Repository)
link := repo.Link() + "/releases"
req := NewRequest(t, "GET", link)
rsp := MakeRequest(t, req, http.StatusOK)
htmlDoc := NewHTMLParser(t, rsp.Body)
releases := htmlDoc.Find("#release-list li.ui.grid")
assert.Equal(t, 1, releases.Length())
links := make([]string, 0, 5)
releases.Each(func(i int, s *goquery.Selection) {
link, exist := s.Find(".release-list-title a").Attr("href")
if !exist {
return
}
links = append(links, link)
})
assert.EqualValues(t, []string{"/user2/repo1/releases/tag/v1.1"}, links)
}
func TestViewReleaseListLogin(t *testing.T) {
defer prepareTestEnv(t)()
repo := models.AssertExistsAndLoadBean(t, &models.Repository{ID: 1}).(*models.Repository)
link := repo.Link() + "/releases"
session := loginUser(t, "user1")
req := NewRequest(t, "GET", link)
rsp := session.MakeRequest(t, req, http.StatusOK)
htmlDoc := NewHTMLParser(t, rsp.Body)
releases := htmlDoc.Find("#release-list li.ui.grid")
assert.Equal(t, 2, releases.Length())
links := make([]string, 0, 5)
releases.Each(func(i int, s *goquery.Selection) {
link, exist := s.Find(".release-list-title a").Attr("href")
if !exist {
return
}
links = append(links, link)
})
assert.EqualValues(t, []string{"/user2/repo1/releases/tag/draft-release",
"/user2/repo1/releases/tag/v1.1"}, links)
}
func TestViewTagsList(t *testing.T) {
defer prepareTestEnv(t)()
repo := models.AssertExistsAndLoadBean(t, &models.Repository{ID: 1}).(*models.Repository)
link := repo.Link() + "/tags"
session := loginUser(t, "user1")
req := NewRequest(t, "GET", link)
rsp := session.MakeRequest(t, req, http.StatusOK)
htmlDoc := NewHTMLParser(t, rsp.Body)
tags := htmlDoc.Find(".tag-list tr")
assert.Equal(t, 2, tags.Length())
tagNames := make([]string, 0, 5)
tags.Each(func(i int, s *goquery.Selection) {
tagNames = append(tagNames, s.Find(".tag a.df.ac").Text())
})
assert.EqualValues(t, []string{"delete-tag", "v1.1"}, tagNames)
}
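
A minimal goquery sketch of the selector pattern these tests rely on; the HTML string is invented and much smaller than a real release page:

doc, _ := goquery.NewDocumentFromReader(strings.NewReader(
	`<ul id="release-list"><li class="ui grid"><h4 class="release-list-title"><a href="/user2/repo1/releases/tag/v1.1">v1.1</a></h4></li></ul>`))
doc.Find("#release-list li.ui.grid .release-list-title a").Each(func(i int, s *goquery.Selection) {
	href, _ := s.Attr("href")
	fmt.Println(href) // /user2/repo1/releases/tag/v1.1
})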


@@ -382,7 +382,7 @@ func activityQueryCondition(opts GetFeedsOptions) (builder.Cond, error) {
} }
if opts.Date != "" { if opts.Date != "" {
dateLow, err := time.Parse("2006-01-02", opts.Date) dateLow, err := time.ParseInLocation("2006-01-02", opts.Date, setting.DefaultUILocation)
if err != nil { if err != nil {
log.Warn("Unable to parse %s, filter not applied: %v", opts.Date, err) log.Warn("Unable to parse %s, filter not applied: %v", opts.Date, err)
} else { } else {
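
For clarity, a small sketch of the behavioural difference introduced here; time.Local stands in for setting.DefaultUILocation, which is what the code above actually passes:

utcMidnight, _ := time.Parse("2006-01-02", "2021-05-09")                         // interpreted as 00:00 UTC
localMidnight, _ := time.ParseInLocation("2006-01-02", "2021-05-09", time.Local) // 00:00 in the given location
_, _ = utcMidnight, localMidnight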


@@ -85,7 +85,7 @@ func (a *Attachment) LinkedRepository() (*Repository, UnitType, error) {
func NewAttachment(attach *Attachment, buf []byte, file io.Reader) (_ *Attachment, err error) { func NewAttachment(attach *Attachment, buf []byte, file io.Reader) (_ *Attachment, err error) {
attach.UUID = gouuid.New().String() attach.UUID = gouuid.New().String()
size, err := storage.Attachments.Save(attach.RelativePath(), io.MultiReader(bytes.NewReader(buf), file)) size, err := storage.Attachments.Save(attach.RelativePath(), io.MultiReader(bytes.NewReader(buf), file), -1)
if err != nil { if err != nil {
return nil, fmt.Errorf("Create: %v", err) return nil, fmt.Errorf("Create: %v", err)
} }
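
The new trailing argument is the object size handed to the storage backend; the reading assumed here, from this diff alone, is that -1 means the length is not known up front and the backend determines it while streaming:

size, err := storage.Attachments.Save("relative/path", someReader, -1) // placeholders; -1: size unknown (assumed semantics)
_ = size
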
@@ -125,8 +125,8 @@ func getAttachmentByUUID(e Engine, uuid string) (*Attachment, error) {
} }
// GetAttachmentsByUUIDs returns attachment by given UUID list. // GetAttachmentsByUUIDs returns attachment by given UUID list.
func GetAttachmentsByUUIDs(uuids []string) ([]*Attachment, error) { func GetAttachmentsByUUIDs(ctx DBContext, uuids []string) ([]*Attachment, error) {
return getAttachmentsByUUIDs(x, uuids) return getAttachmentsByUUIDs(ctx.e, uuids)
} }
func getAttachmentsByUUIDs(e Engine, uuids []string) ([]*Attachment, error) { func getAttachmentsByUUIDs(e Engine, uuids []string) ([]*Attachment, error) {
@@ -183,12 +183,12 @@ func getAttachmentByReleaseIDFileName(e Engine, releaseID int64, fileName string
// DeleteAttachment deletes the given attachment and optionally the associated file. // DeleteAttachment deletes the given attachment and optionally the associated file.
func DeleteAttachment(a *Attachment, remove bool) error { func DeleteAttachment(a *Attachment, remove bool) error {
_, err := DeleteAttachments([]*Attachment{a}, remove) _, err := DeleteAttachments(DefaultDBContext(), []*Attachment{a}, remove)
return err return err
} }
// DeleteAttachments deletes the given attachments and optionally the associated files. // DeleteAttachments deletes the given attachments and optionally the associated files.
func DeleteAttachments(attachments []*Attachment, remove bool) (int, error) { func DeleteAttachments(ctx DBContext, attachments []*Attachment, remove bool) (int, error) {
if len(attachments) == 0 { if len(attachments) == 0 {
return 0, nil return 0, nil
} }
@@ -198,7 +198,7 @@ func DeleteAttachments(attachments []*Attachment, remove bool) (int, error) {
ids = append(ids, a.ID) ids = append(ids, a.ID)
} }
cnt, err := x.In("id", ids).NoAutoCondition().Delete(attachments[0]) cnt, err := ctx.e.In("id", ids).NoAutoCondition().Delete(attachments[0])
if err != nil { if err != nil {
return 0, err return 0, err
} }
@@ -220,7 +220,7 @@ func DeleteAttachmentsByIssue(issueID int64, remove bool) (int, error) {
return 0, err return 0, err
} }
return DeleteAttachments(attachments, remove) return DeleteAttachments(DefaultDBContext(), attachments, remove)
} }
// DeleteAttachmentsByComment deletes all attachments associated with the given comment. // DeleteAttachmentsByComment deletes all attachments associated with the given comment.
@@ -230,7 +230,7 @@ func DeleteAttachmentsByComment(commentID int64, remove bool) (int, error) {
return 0, err return 0, err
} }
return DeleteAttachments(attachments, remove) return DeleteAttachments(DefaultDBContext(), attachments, remove)
} }
// UpdateAttachment updates the given attachment in database // UpdateAttachment updates the given attachment in database
@@ -238,6 +238,15 @@ func UpdateAttachment(atta *Attachment) error {
return updateAttachment(x, atta) return updateAttachment(x, atta)
} }
// UpdateAttachmentByUUID updates the attachment identified by the given uuid
func UpdateAttachmentByUUID(ctx DBContext, attach *Attachment, cols ...string) error {
if attach.UUID == "" {
return fmt.Errorf("Attachement uuid should not blank")
}
_, err := ctx.e.Where("uuid=?", attach.UUID).Cols(cols...).Update(attach)
return err
}
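
A hedged sketch of how the reworked signatures read at a call site outside any transaction; the UUID literal matches the one used by the updated test further down:

attachments, err := GetAttachmentsByUUIDs(DefaultDBContext(), []string{"a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11"})
if err == nil {
	_, err = DeleteAttachments(DefaultDBContext(), attachments, true)
}
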
func updateAttachment(e Engine, atta *Attachment) error { func updateAttachment(e Engine, atta *Attachment) error {
var sess *xorm.Session var sess *xorm.Session
if atta.ID != 0 && atta.UUID == "" { if atta.ID != 0 && atta.UUID == "" {


@@ -120,7 +120,7 @@ func TestUpdateAttachment(t *testing.T) {
func TestGetAttachmentsByUUIDs(t *testing.T) { func TestGetAttachmentsByUUIDs(t *testing.T) {
assert.NoError(t, PrepareTestDatabase()) assert.NoError(t, PrepareTestDatabase())
attachList, err := GetAttachmentsByUUIDs([]string{"a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11", "a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a17", "not-existing-uuid"}) attachList, err := GetAttachmentsByUUIDs(DefaultDBContext(), []string{"a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11", "a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a17", "not-existing-uuid"})
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, 2, len(attachList)) assert.Equal(t, 2, len(attachList))
assert.Equal(t, "a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11", attachList[0].UUID) assert.Equal(t, "a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11", attachList[0].UUID)


@@ -81,7 +81,7 @@ func LibravatarURL(email string) (*url.URL, error) {
} }
// HashedAvatarLink returns an avatar link for a provided email // HashedAvatarLink returns an avatar link for a provided email
func HashedAvatarLink(email string) string { func HashedAvatarLink(email string, size int) string {
lowerEmail := strings.ToLower(strings.TrimSpace(email)) lowerEmail := strings.ToLower(strings.TrimSpace(email))
sum := fmt.Sprintf("%x", md5.Sum([]byte(lowerEmail))) sum := fmt.Sprintf("%x", md5.Sum([]byte(lowerEmail)))
_, _ = cache.GetString("Avatar:"+sum, func() (string, error) { _, _ = cache.GetString("Avatar:"+sum, func() (string, error) {
@@ -96,6 +96,11 @@ func HashedAvatarLink(email string) string {
// we don't care about any DB problem just return the lowerEmail // we don't care about any DB problem just return the lowerEmail
return lowerEmail, nil return lowerEmail, nil
} }
has, err := sess.Where("email = ? AND hash = ?", emailHash.Email, emailHash.Hash).Get(new(EmailHash))
if has || err != nil {
// Seriously we don't care about any DB problems just return the lowerEmail - we expect the transaction to fail most of the time
return lowerEmail, nil
}
_, _ = sess.Insert(emailHash) _, _ = sess.Insert(emailHash)
if err := sess.Commit(); err != nil { if err := sess.Commit(); err != nil {
// Seriously we don't care about any DB problems just return the lowerEmail - we expect the transaction to fail most of the time // Seriously we don't care about any DB problems just return the lowerEmail - we expect the transaction to fail most of the time
@@ -103,6 +108,9 @@ func HashedAvatarLink(email string) string {
} }
return lowerEmail, nil return lowerEmail, nil
}) })
if size > 0 {
return setting.AppSubURL + "/avatar/" + url.PathEscape(sum) + "?size=" + strconv.Itoa(size)
}
return setting.AppSubURL + "/avatar/" + url.PathEscape(sum) return setting.AppSubURL + "/avatar/" + url.PathEscape(sum)
} }
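
Illustrative call with the new size parameter; the email and size are invented and the resulting hash is not shown literally:

link := HashedAvatarLink("someone@example.com", 24)
// -> AppSubURL + "/avatar/<md5 of the lower-cased email>?size=24"; with size <= 0 the ?size suffix is omitted, matching the old behaviour
_ = link
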
@@ -124,7 +132,7 @@ func SizedAvatarLink(email string, size int) string {
// This is the slow path that would need to call LibravatarURL() which // This is the slow path that would need to call LibravatarURL() which
// does DNS lookups. Avoid it by issuing a redirect so we don't block // does DNS lookups. Avoid it by issuing a redirect so we don't block
// the template render with network requests. // the template render with network requests.
return HashedAvatarLink(email) return HashedAvatarLink(email, size)
} else if !setting.DisableGravatar { } else if !setting.DisableGravatar {
// copy GravatarSourceURL, because we will modify its Path. // copy GravatarSourceURL, because we will modify its Path.
copyOfGravatarSourceURL := *setting.GravatarSourceURL copyOfGravatarSourceURL := *setting.GravatarSourceURL


@@ -296,11 +296,15 @@ func CountOrphanedObjects(subject, refobject, joinCond string) (int64, error) {
// DeleteOrphanedObjects delete subjects with have no existing refobject anymore // DeleteOrphanedObjects delete subjects with have no existing refobject anymore
func DeleteOrphanedObjects(subject, refobject, joinCond string) error { func DeleteOrphanedObjects(subject, refobject, joinCond string) error {
_, err := x.In("id", builder.Select("`"+subject+"`.id"). subQuery := builder.Select("`"+subject+"`.id").
From("`"+subject+"`"). From("`"+subject+"`").
Join("LEFT", "`"+refobject+"`", joinCond). Join("LEFT", "`"+refobject+"`", joinCond).
Where(builder.IsNull{"`" + refobject + "`.id"})). Where(builder.IsNull{"`" + refobject + "`.id"})
Delete("`" + subject + "`") sql, args, err := builder.Delete(builder.In("id", subQuery)).From("`" + subject + "`").ToSQL()
if err != nil {
return err
}
_, err = x.Exec(append([]interface{}{sql}, args...)...)
return err return err
} }
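
Roughly, for subject "pull_request" and refobject "issue" the builder now renders a single statement of this shape before x.Exec runs it (quoting simplified; the exact SQL depends on the dialect):

DELETE FROM `pull_request` WHERE id IN (
    SELECT `pull_request`.id
    FROM `pull_request`
    LEFT JOIN `issue` ON pull_request.issue_id=issue.id
    WHERE `issue`.id IS NULL
)
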
@@ -338,7 +342,7 @@ func FixCommentTypeLabelWithEmptyLabel() (int64, error) {
// CountCommentTypeLabelWithOutsideLabels count label comments with outside label // CountCommentTypeLabelWithOutsideLabels count label comments with outside label
func CountCommentTypeLabelWithOutsideLabels() (int64, error) { func CountCommentTypeLabelWithOutsideLabels() (int64, error) {
return x.Where("comment.type = ? AND (issue.repo_id != label.repo_id OR (label.repo_id = 0 AND repository.owner_id != label.org_id))", CommentTypeLabel). return x.Where("comment.type = ? AND ((label.org_id = 0 AND issue.repo_id != label.repo_id) OR (label.repo_id = 0 AND label.org_id != repository.owner_id))", CommentTypeLabel).
Table("comment"). Table("comment").
Join("inner", "label", "label.id = comment.label_id"). Join("inner", "label", "label.id = comment.label_id").
Join("inner", "issue", "issue.id = comment.issue_id "). Join("inner", "issue", "issue.id = comment.issue_id ").
@@ -354,8 +358,9 @@ func FixCommentTypeLabelWithOutsideLabels() (int64, error) {
FROM comment AS com FROM comment AS com
INNER JOIN label ON com.label_id = label.id INNER JOIN label ON com.label_id = label.id
INNER JOIN issue on issue.id = com.issue_id INNER JOIN issue on issue.id = com.issue_id
INNER JOIN repository ON issue.repo_id = repository.id
WHERE WHERE
com.type = ? AND (issue.repo_id != label.repo_id OR (label.repo_id = 0 AND label.org_id != repo.owner_id)) com.type = ? AND ((label.org_id = 0 AND issue.repo_id != label.repo_id) OR (label.repo_id = 0 AND label.org_id != repository.owner_id))
) AS il_too)`, CommentTypeLabel) ) AS il_too)`, CommentTypeLabel)
if err != nil { if err != nil {
return 0, err return 0, err
@@ -366,9 +371,9 @@ func FixCommentTypeLabelWithOutsideLabels() (int64, error) {
// CountIssueLabelWithOutsideLabels count label comments with outside label // CountIssueLabelWithOutsideLabels count label comments with outside label
func CountIssueLabelWithOutsideLabels() (int64, error) { func CountIssueLabelWithOutsideLabels() (int64, error) {
return x.Where(builder.Expr("issue.repo_id != label.repo_id OR (label.repo_id = 0 AND repository.owner_id != label.org_id)")). return x.Where(builder.Expr("(label.org_id = 0 AND issue.repo_id != label.repo_id) OR (label.repo_id = 0 AND label.org_id != repository.owner_id)")).
Table("issue_label"). Table("issue_label").
Join("inner", "label", "issue_label.id = label.id "). Join("inner", "label", "issue_label.label_id = label.id ").
Join("inner", "issue", "issue.id = issue_label.issue_id "). Join("inner", "issue", "issue.id = issue_label.issue_id ").
Join("inner", "repository", "issue.repo_id = repository.id"). Join("inner", "repository", "issue.repo_id = repository.id").
Count(new(IssueLabel)) Count(new(IssueLabel))
@@ -380,11 +385,11 @@ func FixIssueLabelWithOutsideLabels() (int64, error) {
SELECT il_too.id FROM ( SELECT il_too.id FROM (
SELECT il_too_too.id SELECT il_too_too.id
FROM issue_label AS il_too_too FROM issue_label AS il_too_too
INNER JOIN label ON il_too_too.id = label.id INNER JOIN label ON il_too_too.label_id = label.id
INNER JOIN issue on issue.id = il_too_too.issue_id INNER JOIN issue on issue.id = il_too_too.issue_id
INNER JOIN repository on repository.id = issue.repo_id INNER JOIN repository on repository.id = issue.repo_id
WHERE WHERE
issue.repo_id != label.repo_id OR (label.repo_id = 0 AND label.org_id != repository.owner_id) (label.org_id = 0 AND issue.repo_id != label.repo_id) OR (label.repo_id = 0 AND label.org_id != repository.owner_id)
) AS il_too )`) ) AS il_too )`)
if err != nil { if err != nil {


@@ -0,0 +1,32 @@
// Copyright 2021 Gitea. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package models
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestDeleteOrphanedObjects(t *testing.T) {
assert.NoError(t, PrepareTestDatabase())
countBefore, err := x.Count(&PullRequest{})
assert.NoError(t, err)
_, err = x.Insert(&PullRequest{IssueID: 1000}, &PullRequest{IssueID: 1001}, &PullRequest{IssueID: 1003})
assert.NoError(t, err)
orphaned, err := CountOrphanedObjects("pull_request", "issue", "pull_request.issue_id=issue.id")
assert.NoError(t, err)
assert.EqualValues(t, 3, orphaned)
err = DeleteOrphanedObjects("pull_request", "issue", "pull_request.issue_id=issue.id")
assert.NoError(t, err)
countAfter, err := x.Count(&PullRequest{})
assert.NoError(t, err)
assert.EqualValues(t, countBefore, countAfter)
}


@@ -43,3 +43,15 @@
is_tag: true is_tag: true
created_unix: 946684800 created_unix: 946684800
-
id: 4
repo_id: 1
publisher_id: 2
tag_name: "draft-release"
lower_tag_name: "draft-release"
target: "master"
title: "draft-release"
is_draft: true
is_prerelease: false
is_tag: false
created_unix: 1619524806


@@ -53,6 +53,9 @@ func (issues IssueList) loadRepositories(e Engine) ([]*Repository, error) {
for _, issue := range issues { for _, issue := range issues {
issue.Repo = repoMaps[issue.RepoID] issue.Repo = repoMaps[issue.RepoID]
if issue.PullRequest != nil {
issue.PullRequest.BaseRepo = issue.Repo
}
} }
return valuesRepository(repoMaps), nil return valuesRepository(repoMaps), nil
} }
@@ -516,6 +519,11 @@ func (issues IssueList) LoadDiscussComments() error {
return issues.loadComments(x, builder.Eq{"comment.type": CommentTypeComment}) return issues.loadComments(x, builder.Eq{"comment.type": CommentTypeComment})
} }
// LoadPullRequests loads pull requests
func (issues IssueList) LoadPullRequests() error {
return issues.loadPullRequests(x)
}
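
A minimal usage sketch; only the method itself is added here, and the surrounding variables are placeholders:

issues := IssueList{issueA, issueB} // issueA, issueB stand for already-loaded *Issue values
if err := issues.LoadPullRequests(); err != nil {
	// handle the error; on success each issue.PullRequest is populated
}
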
// GetApprovalCounts returns a map of issue ID to slice of approval counts // GetApprovalCounts returns a map of issue ID to slice of approval counts
// FIXME: only returns official counts due to double counting of non-official approvals // FIXME: only returns official counts due to double counting of non-official approvals
func (issues IssueList) GetApprovalCounts() (map[int64][]*ReviewCount, error) { func (issues IssueList) GetApprovalCounts() (map[int64][]*ReviewCount, error) {


@@ -26,6 +26,8 @@ func NewXORMLogger(showSQL bool) xormlog.Logger {
} }
} }
const stackLevel = 8
// Log a message with defined skip and at logging level // Log a message with defined skip and at logging level
func (l *XORMLogBridge) Log(skip int, level log.Level, format string, v ...interface{}) error { func (l *XORMLogBridge) Log(skip int, level log.Level, format string, v ...interface{}) error {
return l.logger.Log(skip+1, level, format, v...) return l.logger.Log(skip+1, level, format, v...)
@@ -33,42 +35,42 @@ func (l *XORMLogBridge) Log(skip int, level log.Level, format string, v ...inter
// Debug show debug log // Debug show debug log
func (l *XORMLogBridge) Debug(v ...interface{}) { func (l *XORMLogBridge) Debug(v ...interface{}) {
_ = l.Log(2, log.DEBUG, fmt.Sprint(v...)) _ = l.Log(stackLevel, log.DEBUG, fmt.Sprint(v...))
} }
// Debugf show debug log // Debugf show debug log
func (l *XORMLogBridge) Debugf(format string, v ...interface{}) { func (l *XORMLogBridge) Debugf(format string, v ...interface{}) {
_ = l.Log(2, log.DEBUG, format, v...) _ = l.Log(stackLevel, log.DEBUG, format, v...)
} }
// Error show error log // Error show error log
func (l *XORMLogBridge) Error(v ...interface{}) { func (l *XORMLogBridge) Error(v ...interface{}) {
_ = l.Log(2, log.ERROR, fmt.Sprint(v...)) _ = l.Log(stackLevel, log.ERROR, fmt.Sprint(v...))
} }
// Errorf show error log // Errorf show error log
func (l *XORMLogBridge) Errorf(format string, v ...interface{}) { func (l *XORMLogBridge) Errorf(format string, v ...interface{}) {
_ = l.Log(2, log.ERROR, format, v...) _ = l.Log(stackLevel, log.ERROR, format, v...)
} }
// Info show information level log // Info show information level log
func (l *XORMLogBridge) Info(v ...interface{}) { func (l *XORMLogBridge) Info(v ...interface{}) {
_ = l.Log(2, log.INFO, fmt.Sprint(v...)) _ = l.Log(stackLevel, log.INFO, fmt.Sprint(v...))
} }
// Infof show information level log // Infof show information level log
func (l *XORMLogBridge) Infof(format string, v ...interface{}) { func (l *XORMLogBridge) Infof(format string, v ...interface{}) {
_ = l.Log(2, log.INFO, format, v...) _ = l.Log(stackLevel, log.INFO, format, v...)
} }
// Warn show warning log // Warn show warning log
func (l *XORMLogBridge) Warn(v ...interface{}) { func (l *XORMLogBridge) Warn(v ...interface{}) {
_ = l.Log(2, log.WARN, fmt.Sprint(v...)) _ = l.Log(stackLevel, log.WARN, fmt.Sprint(v...))
} }
// Warnf show warning log // Warnf show warning log
func (l *XORMLogBridge) Warnf(format string, v ...interface{}) { func (l *XORMLogBridge) Warnf(format string, v ...interface{}) {
_ = l.Log(2, log.WARN, format, v...) _ = l.Log(stackLevel, log.WARN, format, v...)
} }
// Level get logger level // Level get logger level


@@ -39,6 +39,7 @@ func InsertMilestones(ms ...*Milestone) (err error) {
// InsertIssues insert issues to database // InsertIssues insert issues to database
func InsertIssues(issues ...*Issue) error { func InsertIssues(issues ...*Issue) error {
sess := x.NewSession() sess := x.NewSession()
defer sess.Close()
if err := sess.Begin(); err != nil { if err := sess.Begin(); err != nil {
return err return err
} }
@@ -194,6 +195,7 @@ func InsertPullRequests(prs ...*PullRequest) error {
// InsertReleases migrates release // InsertReleases migrates release
func InsertReleases(rels ...*Release) error { func InsertReleases(rels ...*Release) error {
sess := x.NewSession() sess := x.NewSession()
defer sess.Close()
if err := sess.Begin(); err != nil { if err := sess.Begin(); err != nil {
return err return err
} }


@@ -88,6 +88,7 @@ func fixPublisherIDforTagReleases(x *xorm.Engine) error {
repo = new(Repository) repo = new(Repository)
has, err := sess.ID(release.RepoID).Get(repo) has, err := sess.ID(release.RepoID).Get(repo)
if err != nil { if err != nil {
log.Error("Error whilst loading repository[%d] for release[%d] with tag name %s", release.RepoID, release.ID, release.TagName)
return err return err
} else if !has { } else if !has {
log.Warn("Release[%d] is orphaned and refers to non-existing repository %d", release.ID, release.RepoID) log.Warn("Release[%d] is orphaned and refers to non-existing repository %d", release.ID, release.RepoID)
@@ -99,21 +100,29 @@ func fixPublisherIDforTagReleases(x *xorm.Engine) error {
// v120.go migration may not have been run correctly - we'll just replicate it here // v120.go migration may not have been run correctly - we'll just replicate it here
// because this appears to be a common-ish problem. // because this appears to be a common-ish problem.
if _, err := sess.Exec("UPDATE repository SET owner_name = (SELECT name FROM `user` WHERE `user`.id = repository.owner_id)"); err != nil { if _, err := sess.Exec("UPDATE repository SET owner_name = (SELECT name FROM `user` WHERE `user`.id = repository.owner_id)"); err != nil {
log.Error("Error whilst updating repository[%d] owner name", repo.ID)
return err return err
} }
if _, err := sess.ID(release.RepoID).Get(repo); err != nil { if _, err := sess.ID(release.RepoID).Get(repo); err != nil {
log.Error("Error whilst loading repository[%d] for release[%d] with tag name %s", release.RepoID, release.ID, release.TagName)
return err return err
} }
} }
gitRepo, err = git.OpenRepository(repoPath(repo.OwnerName, repo.Name)) gitRepo, err = git.OpenRepository(repoPath(repo.OwnerName, repo.Name))
if err != nil { if err != nil {
log.Error("Error whilst opening git repo for %-v", repo)
return err return err
} }
} }
commit, err := gitRepo.GetTagCommit(release.TagName) commit, err := gitRepo.GetTagCommit(release.TagName)
if err != nil { if err != nil {
if git.IsErrNotExist(err) {
log.Warn("Unable to find commit %s for Tag: %s in %-v. Cannot update publisher ID.", err.(git.ErrNotExist).ID, release.TagName, repo)
continue
}
log.Error("Error whilst getting commit for Tag: %s in %-v.", release.TagName, repo)
return fmt.Errorf("GetTagCommit: %v", err) return fmt.Errorf("GetTagCommit: %v", err)
} }
@@ -121,6 +130,7 @@ func fixPublisherIDforTagReleases(x *xorm.Engine) error {
user = new(User) user = new(User)
_, err = sess.Where("email=?", commit.Author.Email).Get(user) _, err = sess.Where("email=?", commit.Author.Email).Get(user)
if err != nil { if err != nil {
log.Error("Error whilst getting commit author by email: %s for Tag: %s in %-v.", commit.Author.Email, release.TagName, repo)
return err return err
} }
@@ -133,6 +143,7 @@ func fixPublisherIDforTagReleases(x *xorm.Engine) error {
release.PublisherID = user.ID release.PublisherID = user.ID
if _, err := sess.ID(release.ID).Cols("publisher_id").Update(release); err != nil { if _, err := sess.ID(release.ID).Cols("publisher_id").Update(release); err != nil {
log.Error("Error whilst updating publisher[%d] for release[%d] with tag name %s", release.PublisherID, release.ID, release.TagName)
return err return err
} }
} }


@@ -14,7 +14,7 @@ func addSessionTable(x *xorm.Engine) error {
type Session struct { type Session struct {
Key string `xorm:"pk CHAR(16)"` Key string `xorm:"pk CHAR(16)"`
Data []byte `xorm:"BLOB"` Data []byte `xorm:"BLOB"`
CreatedUnix timeutil.TimeStamp Expiry timeutil.TimeStamp
} }
return x.Sync2(new(Session)) return x.Sync2(new(Session))
} }


@@ -48,11 +48,11 @@ func removeInvalidLabels(x *xorm.Engine) error {
SELECT il_too.id FROM (
SELECT il_too_too.id
FROM issue_label AS il_too_too
-INNER JOIN label ON il_too_too.id = label.id
+INNER JOIN label ON il_too_too.label_id = label.id
INNER JOIN issue on issue.id = il_too_too.issue_id
INNER JOIN repository on repository.id = issue.repo_id
WHERE
-issue.repo_id != label.repo_id OR (label.repo_id = 0 AND label.org_id != repository.owner_id)
+(label.org_id = 0 AND issue.repo_id != label.repo_id) OR (label.repo_id = 0 AND label.org_id != repository.owner_id)
) AS il_too )`); err != nil {
return err
}
@@ -65,7 +65,7 @@ func removeInvalidLabels(x *xorm.Engine) error {
INNER JOIN issue on issue.id = com.issue_id
INNER JOIN repository on repository.id = issue.repo_id
WHERE
-com.type = ? AND (issue.repo_id != label.repo_id OR (label.repo_id = 0 AND label.org_id != repository.owner_id))
+com.type = ? AND ((label.org_id = 0 AND issue.repo_id != label.repo_id) OR (label.repo_id = 0 AND label.org_id != repository.owner_id))
) AS il_too)`, 7); err != nil {
return err
}


@@ -319,7 +319,7 @@ func DumpDatabase(filePath, dbType string) error {
ID int64 `xorm:"pk autoincr"`
Version int64
}
-t, err := x.TableInfo(Version{})
+t, err := x.TableInfo(&Version{})
if err != nil {
return err
}


@@ -25,7 +25,7 @@ func TestDumpDatabase(t *testing.T) {
ID int64 `xorm:"pk autoincr"`
Version int64
}
-assert.NoError(t, x.Sync2(Version{}))
+assert.NoError(t, x.Sync2(new(Version)))
for _, dbName := range setting.SupportedDatabases {
dbType := setting.GetDBTypeByName(dbName)


@@ -235,7 +235,7 @@ func deleteOAuth2Application(sess *xorm.Session, id, userid int64) error {
if deleted, err := sess.Delete(&OAuth2Application{ID: id, UID: userid}); err != nil {
return err
} else if deleted == 0 {
-return fmt.Errorf("cannot find oauth2 application")
+return ErrOAuthApplicationNotFound{ID: id}
}
codes := make([]*OAuth2AuthorizationCode, 0)
// delete correlating auth codes
@@ -261,6 +261,7 @@ func deleteOAuth2Application(sess *xorm.Session, id, userid int64) error {
// DeleteOAuth2Application deletes the application with the given id and the grants and auth codes related to it. It checks if the userid was the creator of the app.
func DeleteOAuth2Application(id, userid int64) error {
sess := x.NewSession()
+defer sess.Close()
if err := sess.Begin(); err != nil {
return err
}


@@ -212,12 +212,21 @@ func (pr *PullRequest) GetDefaultMergeMessage() string {
log.Error("Cannot load issue %d for PR id %d: Error: %v", pr.IssueID, pr.ID, err) log.Error("Cannot load issue %d for PR id %d: Error: %v", pr.IssueID, pr.ID, err)
return "" return ""
} }
if err := pr.LoadBaseRepo(); err != nil {
if pr.BaseRepoID == pr.HeadRepoID { log.Error("LoadBaseRepo: %v", err)
return fmt.Sprintf("Merge pull request '%s' (#%d) from %s into %s", pr.Issue.Title, pr.Issue.Index, pr.HeadBranch, pr.BaseBranch) return ""
} }
return fmt.Sprintf("Merge pull request '%s' (#%d) from %s:%s into %s", pr.Issue.Title, pr.Issue.Index, pr.HeadRepo.FullName(), pr.HeadBranch, pr.BaseBranch) issueReference := "#"
if pr.BaseRepo.UnitEnabled(UnitTypeExternalTracker) {
issueReference = "!"
}
if pr.BaseRepoID == pr.HeadRepoID {
return fmt.Sprintf("Merge pull request '%s' (%s%d) from %s into %s", pr.Issue.Title, issueReference, pr.Issue.Index, pr.HeadBranch, pr.BaseBranch)
}
return fmt.Sprintf("Merge pull request '%s' (%s%d) from %s:%s into %s", pr.Issue.Title, issueReference, pr.Issue.Index, pr.HeadRepo.FullName(), pr.HeadBranch, pr.BaseBranch)
} }
// ReviewCount represents a count of Reviews // ReviewCount represents a count of Reviews
@@ -406,7 +415,8 @@ func (pr *PullRequest) SetMerged() (bool, error) {
return false, fmt.Errorf("Issue.changeStatus: %v", err) return false, fmt.Errorf("Issue.changeStatus: %v", err)
} }
if _, err := sess.Where("id = ?", pr.ID).Cols("has_merged, status, merged_commit_id, merger_id, merged_unix").Update(pr); err != nil { // We need to save all of the data used to compute this merge as it may have already been changed by TestPatch. FIXME: need to set some state to prevent TestPatch from running whilst we are merging.
if _, err := sess.Where("id = ?", pr.ID).Cols("has_merged, status, merge_base, merged_commit_id, merger_id, merged_unix").Update(pr); err != nil {
return false, fmt.Errorf("Failed to update pr[%d]: %v", pr.ID, err) return false, fmt.Errorf("Failed to update pr[%d]: %v", pr.ID, err)
} }


@@ -234,3 +234,36 @@ func TestPullRequest_GetWorkInProgressPrefixWorkInProgress(t *testing.T) {
pr.Issue.Title = "[wip] " + original pr.Issue.Title = "[wip] " + original
assert.Equal(t, "[wip]", pr.GetWorkInProgressPrefix()) assert.Equal(t, "[wip]", pr.GetWorkInProgressPrefix())
} }
func TestPullRequest_GetDefaultMergeMessage_InternalTracker(t *testing.T) {
assert.NoError(t, PrepareTestDatabase())
pr := AssertExistsAndLoadBean(t, &PullRequest{ID: 2}).(*PullRequest)
assert.Equal(t, "Merge pull request 'issue3' (#3) from branch2 into master", pr.GetDefaultMergeMessage())
pr.BaseRepoID = 1
pr.HeadRepoID = 2
assert.Equal(t, "Merge pull request 'issue3' (#3) from user2/repo1:branch2 into master", pr.GetDefaultMergeMessage())
}
func TestPullRequest_GetDefaultMergeMessage_ExternalTracker(t *testing.T) {
assert.NoError(t, PrepareTestDatabase())
externalTracker := RepoUnit{
Type: UnitTypeExternalTracker,
Config: &ExternalTrackerConfig{
ExternalTrackerFormat: "https://someurl.com/{user}/{repo}/{issue}",
},
}
baseRepo := &Repository{Name: "testRepo", ID: 1}
baseRepo.Owner = &User{Name: "testOwner"}
baseRepo.Units = []*RepoUnit{&externalTracker}
pr := AssertExistsAndLoadBean(t, &PullRequest{ID: 2, BaseRepo: baseRepo}).(*PullRequest)
assert.Equal(t, "Merge pull request 'issue3' (!3) from branch2 into master", pr.GetDefaultMergeMessage())
pr.BaseRepoID = 1
pr.HeadRepoID = 2
assert.Equal(t, "Merge pull request 'issue3' (!3) from user2/repo1:branch2 into master", pr.GetDefaultMergeMessage())
}


@@ -6,6 +6,7 @@
package models
import (
+"errors"
"fmt"
"sort"
"strings"
@@ -117,17 +118,20 @@ func UpdateRelease(ctx DBContext, rel *Release) error {
}
// AddReleaseAttachments adds a release attachments
-func AddReleaseAttachments(releaseID int64, attachmentUUIDs []string) (err error) {
+func AddReleaseAttachments(ctx DBContext, releaseID int64, attachmentUUIDs []string) (err error) {
// Check attachments
-attachments, err := GetAttachmentsByUUIDs(attachmentUUIDs)
+attachments, err := getAttachmentsByUUIDs(ctx.e, attachmentUUIDs)
if err != nil {
return fmt.Errorf("GetAttachmentsByUUIDs [uuids: %v]: %v", attachmentUUIDs, err)
}
for i := range attachments {
+if attachments[i].ReleaseID != 0 {
+return errors.New("release permission denied")
+}
attachments[i].ReleaseID = releaseID
// No assign value could be 0, so ignore AllCols().
-if _, err = x.ID(attachments[i].ID).Update(attachments[i]); err != nil {
+if _, err = ctx.e.ID(attachments[i].ID).Update(attachments[i]); err != nil {
return fmt.Errorf("update attachment [%d]: %v", attachments[i].ID, err)
}
}


@@ -749,7 +749,7 @@ func (repo *Repository) updateSize(e Engine) error {
}
repo.Size = size
-_, err = e.ID(repo.ID).Cols("size").Update(repo)
+_, err = e.ID(repo.ID).Cols("size").NoAutoTime().Update(repo)
return err
}
@@ -1454,23 +1454,26 @@ func DeleteRepository(doer *User, uid, repoID int64) error {
if err := deleteBeans(sess,
&Access{RepoID: repo.ID},
&Action{RepoID: repo.ID},
-&Watch{RepoID: repoID},
-&Star{RepoID: repoID},
-&Mirror{RepoID: repoID},
-&Milestone{RepoID: repoID},
-&Release{RepoID: repoID},
&Collaboration{RepoID: repoID},
-&PullRequest{BaseRepoID: repoID},
-&RepoUnit{RepoID: repoID},
-&RepoRedirect{RedirectRepoID: repoID},
-&Webhook{RepoID: repoID},
-&HookTask{RepoID: repoID},
-&Notification{RepoID: repoID},
-&CommitStatus{RepoID: repoID},
-&RepoIndexerStatus{RepoID: repoID},
-&LanguageStat{RepoID: repoID},
&Comment{RefRepoID: repoID},
+&CommitStatus{RepoID: repoID},
+&DeletedBranch{RepoID: repoID},
+&HookTask{RepoID: repoID},
+&LFSLock{RepoID: repoID},
+&LanguageStat{RepoID: repoID},
+&Milestone{RepoID: repoID},
+&Mirror{RepoID: repoID},
+&Notification{RepoID: repoID},
+&ProtectedBranch{RepoID: repoID},
+&PullRequest{BaseRepoID: repoID},
+&Release{RepoID: repoID},
+&RepoIndexerStatus{RepoID: repoID},
+&RepoRedirect{RedirectRepoID: repoID},
+&RepoUnit{RepoID: repoID},
+&Star{RepoID: repoID},
&Task{RepoID: repoID},
+&Watch{RepoID: repoID},
+&Webhook{RepoID: repoID},
); err != nil {
return fmt.Errorf("deleteBeans: %v", err)
}
@@ -1486,10 +1489,6 @@ func DeleteRepository(doer *User, uid, repoID int64) error {
return err
}
-if _, err := sess.Where("repo_id = ?", repoID).Delete(new(RepoUnit)); err != nil {
-return err
-}
if repo.IsFork {
if _, err := sess.Exec("UPDATE `repository` SET num_forks=num_forks-1 WHERE id=?", repo.ForkID); err != nil {
return fmt.Errorf("decrease fork count: %v", err)


@@ -12,6 +12,7 @@ import (
"code.gitea.io/gitea/modules/util" "code.gitea.io/gitea/modules/util"
"xorm.io/builder" "xorm.io/builder"
"xorm.io/xorm"
) )
// RepositoryListDefaultPageSize is the default number of repositories // RepositoryListDefaultPageSize is the default number of repositories
@@ -363,6 +364,35 @@ func SearchRepository(opts *SearchRepoOptions) (RepositoryList, int64, error) {
// SearchRepositoryByCondition search repositories by condition // SearchRepositoryByCondition search repositories by condition
func SearchRepositoryByCondition(opts *SearchRepoOptions, cond builder.Cond, loadAttributes bool) (RepositoryList, int64, error) { func SearchRepositoryByCondition(opts *SearchRepoOptions, cond builder.Cond, loadAttributes bool) (RepositoryList, int64, error) {
sess, count, err := searchRepositoryByCondition(opts, cond)
if err != nil {
return nil, 0, err
}
defer sess.Close()
defaultSize := 50
if opts.PageSize > 0 {
defaultSize = opts.PageSize
}
repos := make(RepositoryList, 0, defaultSize)
if err := sess.Find(&repos); err != nil {
return nil, 0, fmt.Errorf("Repo: %v", err)
}
if opts.PageSize <= 0 {
count = int64(len(repos))
}
if loadAttributes {
if err := repos.loadAttributes(sess); err != nil {
return nil, 0, fmt.Errorf("LoadAttributes: %v", err)
}
}
return repos, count, nil
}
func searchRepositoryByCondition(opts *SearchRepoOptions, cond builder.Cond) (*xorm.Session, int64, error) {
if opts.Page <= 0 { if opts.Page <= 0 {
opts.Page = 1 opts.Page = 1
} }
@@ -376,31 +406,24 @@ func SearchRepositoryByCondition(opts *SearchRepoOptions, cond builder.Cond, loa
}
sess := x.NewSession()
-defer sess.Close()
-count, err := sess.
+var count int64
+if opts.PageSize > 0 {
+var err error
+count, err = sess.
Where(cond).
Count(new(Repository))
if err != nil {
+_ = sess.Close()
return nil, 0, fmt.Errorf("Count: %v", err)
}
+}
-repos := make(RepositoryList, 0, opts.PageSize)
sess.Where(cond).OrderBy(opts.OrderBy.String())
if opts.PageSize > 0 {
sess.Limit(opts.PageSize, (opts.Page-1)*opts.PageSize)
}
-if err = sess.Find(&repos); err != nil {
-return nil, 0, fmt.Errorf("Repo: %v", err)
-}
-if loadAttributes {
-if err = repos.loadAttributes(sess); err != nil {
-return nil, 0, fmt.Errorf("LoadAttributes: %v", err)
-}
-}
-return repos, count, nil
+return sess, count, nil
}
// accessibleRepositoryCondition takes a user a returns a condition for checking if a repository is accessible
@@ -456,6 +479,33 @@ func SearchRepositoryByName(opts *SearchRepoOptions) (RepositoryList, int64, err
return SearchRepository(opts)
}
// SearchRepositoryIDs takes keyword and part of repository name to search,
// it returns results in given range and number of total results.
func SearchRepositoryIDs(opts *SearchRepoOptions) ([]int64, int64, error) {
opts.IncludeDescription = false
cond := SearchRepositoryCondition(opts)
sess, count, err := searchRepositoryByCondition(opts, cond)
if err != nil {
return nil, 0, err
}
defer sess.Close()
defaultSize := 50
if opts.PageSize > 0 {
defaultSize = opts.PageSize
}
ids := make([]int64, 0, defaultSize)
err = sess.Select("id").Table("repository").Find(&ids)
if opts.PageSize <= 0 {
count = int64(len(ids))
}
return ids, count, err
}
// AccessibleRepoIDsQuery queries accessible repository ids. Usable as a subquery wherever repo ids need to be filtered.
func AccessibleRepoIDsQuery(user *User) *builder.Builder {
// NB: Please note this code needs to still work if user is nil


@@ -330,10 +330,10 @@ func TransferOwnership(doer *User, newOwnerName string, repo *Repository) (err e
SELECT il_too.id FROM (
SELECT il_too_too.id
FROM issue_label AS il_too_too
-INNER JOIN label ON il_too_too.id = label.id
+INNER JOIN label ON il_too_too.label_id = label.id
INNER JOIN issue on issue.id = il_too_too.issue_id
WHERE
-issue.repo_id = ? AND (issue.repo_id != label.repo_id OR (label.repo_id = 0 AND label.org_id != ?))
+issue.repo_id = ? AND ((label.org_id = 0 AND issue.repo_id != label.repo_id) OR (label.repo_id = 0 AND label.org_id != ?))
) AS il_too )`, repo.ID, newOwner.ID); err != nil {
return fmt.Errorf("Unable to remove old org labels: %v", err)
}
@@ -343,9 +343,9 @@ func TransferOwnership(doer *User, newOwnerName string, repo *Repository) (err e
SELECT com.id
FROM comment AS com
INNER JOIN label ON com.label_id = label.id
-INNER JOIN issue on issue.id = com.issue_id
+INNER JOIN issue ON issue.id = com.issue_id
WHERE
-com.type = ? AND issue.repo_id = ? AND (issue.repo_id != label.repo_id OR (label.repo_id = 0 AND label.org_id != ?))
+com.type = ? AND issue.repo_id = ? AND ((label.org_id = 0 AND issue.repo_id != label.repo_id) OR (label.repo_id = 0 AND label.org_id != ?))
) AS il_too)`, CommentTypeLabel, repo.ID, newOwner.ID); err != nil {
return fmt.Errorf("Unable to remove old org label comments: %v", err)
}


@@ -566,7 +566,11 @@ func DismissReview(review *Review, isDismiss bool) (err error) {
review.Dismissed = isDismiss
-_, err = x.Cols("dismissed").Update(review)
+if review.ID == 0 {
+return ErrReviewNotExist{}
+}
+_, err = x.ID(review.ID).Cols("dismissed").Update(review)
return
}


@@ -143,11 +143,57 @@ func TestGetReviewersByIssueID(t *testing.T) {
}
func TestDismissReview(t *testing.T) {
-review1 := AssertExistsAndLoadBean(t, &Review{ID: 9}).(*Review)
-review2 := AssertExistsAndLoadBean(t, &Review{ID: 11}).(*Review)
-assert.NoError(t, DismissReview(review1, true))
-assert.NoError(t, DismissReview(review2, true))
-assert.NoError(t, DismissReview(review2, true))
-assert.NoError(t, DismissReview(review2, false))
-assert.NoError(t, DismissReview(review2, false))
+assert.NoError(t, PrepareTestDatabase())
+rejectReviewExample := AssertExistsAndLoadBean(t, &Review{ID: 9}).(*Review)
+requestReviewExample := AssertExistsAndLoadBean(t, &Review{ID: 11}).(*Review)
+approveReviewExample := AssertExistsAndLoadBean(t, &Review{ID: 8}).(*Review)
+assert.False(t, rejectReviewExample.Dismissed)
+assert.False(t, requestReviewExample.Dismissed)
+assert.False(t, approveReviewExample.Dismissed)
assert.NoError(t, DismissReview(rejectReviewExample, true))
rejectReviewExample = AssertExistsAndLoadBean(t, &Review{ID: 9}).(*Review)
requestReviewExample = AssertExistsAndLoadBean(t, &Review{ID: 11}).(*Review)
assert.True(t, rejectReviewExample.Dismissed)
assert.False(t, requestReviewExample.Dismissed)
assert.NoError(t, DismissReview(requestReviewExample, true))
rejectReviewExample = AssertExistsAndLoadBean(t, &Review{ID: 9}).(*Review)
requestReviewExample = AssertExistsAndLoadBean(t, &Review{ID: 11}).(*Review)
assert.True(t, rejectReviewExample.Dismissed)
assert.False(t, requestReviewExample.Dismissed)
assert.False(t, approveReviewExample.Dismissed)
assert.NoError(t, DismissReview(requestReviewExample, true))
rejectReviewExample = AssertExistsAndLoadBean(t, &Review{ID: 9}).(*Review)
requestReviewExample = AssertExistsAndLoadBean(t, &Review{ID: 11}).(*Review)
assert.True(t, rejectReviewExample.Dismissed)
assert.False(t, requestReviewExample.Dismissed)
assert.False(t, approveReviewExample.Dismissed)
assert.NoError(t, DismissReview(requestReviewExample, false))
rejectReviewExample = AssertExistsAndLoadBean(t, &Review{ID: 9}).(*Review)
requestReviewExample = AssertExistsAndLoadBean(t, &Review{ID: 11}).(*Review)
assert.True(t, rejectReviewExample.Dismissed)
assert.False(t, requestReviewExample.Dismissed)
assert.False(t, approveReviewExample.Dismissed)
assert.NoError(t, DismissReview(requestReviewExample, false))
rejectReviewExample = AssertExistsAndLoadBean(t, &Review{ID: 9}).(*Review)
requestReviewExample = AssertExistsAndLoadBean(t, &Review{ID: 11}).(*Review)
assert.True(t, rejectReviewExample.Dismissed)
assert.False(t, requestReviewExample.Dismissed)
assert.False(t, approveReviewExample.Dismissed)
assert.NoError(t, DismissReview(rejectReviewExample, false))
assert.False(t, rejectReviewExample.Dismissed)
assert.False(t, requestReviewExample.Dismissed)
assert.False(t, approveReviewExample.Dismissed)
assert.NoError(t, DismissReview(approveReviewExample, true))
assert.False(t, rejectReviewExample.Dismissed)
assert.False(t, requestReviewExample.Dismissed)
assert.True(t, approveReviewExample.Dismissed)
}


@@ -117,6 +117,6 @@ func CountSessions() (int64, error) {
// CleanupSessions cleans up expired sessions
func CleanupSessions(maxLifetime int64) error {
-_, err := x.Where("created_unix <= ?", timeutil.TimeStampNow().Add(-maxLifetime)).Delete(&Session{})
+_, err := x.Where("expiry <= ?", timeutil.TimeStampNow().Add(-maxLifetime)).Delete(&Session{})
return err
}


@@ -239,10 +239,10 @@ func (u *User) GetEmail() string {
return u.Email
}
-// GetAllUsers returns a slice of all users found in DB.
+// GetAllUsers returns a slice of all individual users found in DB.
func GetAllUsers() ([]*User, error) {
users := make([]*User, 0)
-return users, x.OrderBy("id").Find(&users)
+return users, x.OrderBy("id").Where("type = ?", UserTypeIndividual).Find(&users)
}
// IsLocal returns true if user login type is LoginPlain.


@@ -82,6 +82,9 @@ func (u *User) RealSizedAvatarLink(size int) string {
if u.Avatar == "" { if u.Avatar == "" {
return DefaultAvatarLink() return DefaultAvatarLink()
} }
if size > 0 {
return setting.AppSubURL + "/avatars/" + u.Avatar + "?size=" + strconv.Itoa(size)
}
return setting.AppSubURL + "/avatars/" + u.Avatar return setting.AppSubURL + "/avatars/" + u.Avatar
case setting.DisableGravatar, setting.OfflineMode: case setting.DisableGravatar, setting.OfflineMode:
if u.Avatar == "" { if u.Avatar == "" {
@@ -89,7 +92,9 @@ func (u *User) RealSizedAvatarLink(size int) string {
log.Error("GenerateRandomAvatar: %v", err) log.Error("GenerateRandomAvatar: %v", err)
} }
} }
if size > 0 {
return setting.AppSubURL + "/avatars/" + u.Avatar + "?size=" + strconv.Itoa(size)
}
return setting.AppSubURL + "/avatars/" + u.Avatar return setting.AppSubURL + "/avatars/" + u.Avatar
} }
return SizedAvatarLink(u.AvatarEmail, size) return SizedAvatarLink(u.AvatarEmail, size)

modules/analyze/vendor.go

@@ -0,0 +1,70 @@
// Copyright 2021 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package analyze
import (
"regexp"
"sort"
"strings"
"github.com/go-enry/go-enry/v2/data"
)
var isVendorRegExp *regexp.Regexp
func init() {
matchers := data.VendorMatchers
caretStrings := make([]string, 0, 10)
caretShareStrings := make([]string, 0, 10)
matcherStrings := make([]string, 0, len(matchers))
for _, matcher := range matchers {
str := matcher.String()
if str[0] == '^' {
caretStrings = append(caretStrings, str[1:])
} else if str[0:5] == "(^|/)" {
caretShareStrings = append(caretShareStrings, str[5:])
} else {
matcherStrings = append(matcherStrings, str)
}
}
sort.Strings(caretShareStrings)
sort.Strings(caretStrings)
sort.Strings(matcherStrings)
sb := &strings.Builder{}
sb.WriteString("(?:^(?:")
sb.WriteString(caretStrings[0])
for _, matcher := range caretStrings[1:] {
sb.WriteString(")|(?:")
sb.WriteString(matcher)
}
sb.WriteString("))")
sb.WriteString("|")
sb.WriteString("(?:(?:^|/)(?:")
sb.WriteString(caretShareStrings[0])
for _, matcher := range caretShareStrings[1:] {
sb.WriteString(")|(?:")
sb.WriteString(matcher)
}
sb.WriteString("))")
sb.WriteString("|")
sb.WriteString("(?:")
sb.WriteString(matcherStrings[0])
for _, matcher := range matcherStrings[1:] {
sb.WriteString(")|(?:")
sb.WriteString(matcher)
}
sb.WriteString(")")
combined := sb.String()
isVendorRegExp = regexp.MustCompile(combined)
}
// IsVendor returns whether or not path is a vendor path.
func IsVendor(path string) bool {
return isVendorRegExp.MatchString(path)
}


@@ -0,0 +1,42 @@
// Copyright 2021 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package analyze
import "testing"
func TestIsVendor(t *testing.T) {
tests := []struct {
path string
want bool
}{
{"cache/", true},
{"random/cache/", true},
{"cache", false},
{"dependencies/", true},
{"Dependencies/", true},
{"dependency/", false},
{"dist/", true},
{"dist", false},
{"random/dist/", true},
{"random/dist", false},
{"deps/", true},
{"configure", true},
{"a/configure", true},
{"config.guess", true},
{"config.guess/", false},
{".vscode/", true},
{"doc/_build/", true},
{"a/docs/_build/", true},
{"a/dasdocs/_build-vsdoc.js", true},
{"a/dasdocs/_build-vsdoc.j", false},
}
for _, tt := range tests {
t.Run(tt.path, func(t *testing.T) {
if got := IsVendor(tt.path); got != tt.want {
t.Errorf("IsVendor() = %v, want %v", got, tt.want)
}
})
}
}


@@ -6,9 +6,9 @@
package context
import (
+"context"
"fmt"
"io/ioutil"
-"net/http"
"net/url"
"path"
"strings"
@@ -394,13 +394,10 @@ func RepoIDAssignment() func(ctx *Context) {
}
// RepoAssignment returns a middleware to handle repository assignment
-func RepoAssignment() func(http.Handler) http.Handler {
-return func(next http.Handler) http.Handler {
-return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
+func RepoAssignment(ctx *Context) (cancel context.CancelFunc) {
var (
owner *models.User
err error
-ctx = GetContext(req)
)
userName := ctx.Params(":username")
@@ -533,17 +530,16 @@ func RepoAssignment() func(http.Handler) http.Handler {
ctx.Repo.GitRepo = gitRepo
// We opened it, we should close it
-defer func() {
+cancel = func() {
// If it's been set to nil then assume someone else has closed it.
if ctx.Repo.GitRepo != nil {
ctx.Repo.GitRepo.Close()
}
-}()
+}
// Stop at this point when the repo is empty.
if ctx.Repo.Repository.IsEmpty {
ctx.Data["BranchName"] = ctx.Repo.Repository.DefaultBranch
-next.ServeHTTP(w, req)
return
}
@@ -624,9 +620,7 @@ func RepoAssignment() func(http.Handler) http.Handler {
ctx.Data["GoDocDirectory"] = prefix + "{/dir}"
ctx.Data["GoDocFile"] = prefix + "{/dir}/{file}#L{line}"
}
-next.ServeHTTP(w, req)
+return
-})
-}
}
// RepoRefType type of repo reference
@@ -651,7 +645,7 @@ const (
// RepoRef handles repository reference names when the ref name is not
// explicitly given
-func RepoRef() func(http.Handler) http.Handler {
+func RepoRef() func(*Context) context.CancelFunc {
// since no ref name is explicitly specified, ok to just use branch
return RepoRefByType(RepoRefBranch)
}
@@ -730,10 +724,8 @@ func getRefName(ctx *Context, pathType RepoRefType) string {
// RepoRefByType handles repository reference name for a specific type
// of repository reference
-func RepoRefByType(refType RepoRefType) func(http.Handler) http.Handler {
-return func(next http.Handler) http.Handler {
-return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
-ctx := GetContext(req)
+func RepoRefByType(refType RepoRefType, ignoreNotExistErr ...bool) func(*Context) context.CancelFunc {
+return func(ctx *Context) (cancel context.CancelFunc) {
// Empty repository does not have reference information.
if ctx.Repo.Repository.IsEmpty {
return
@@ -752,12 +744,12 @@ func RepoRefByType(refType RepoRefType) func(http.Handler) http.Handler {
return
}
// We opened it, we should close it
-defer func() {
+cancel = func() {
// If it's been set to nil then assume someone else has closed it.
if ctx.Repo.GitRepo != nil {
ctx.Repo.GitRepo.Close()
}
-}()
+}
}
// Get default branch.
@@ -821,6 +813,9 @@ func RepoRefByType(refType RepoRefType) func(http.Handler) http.Handler {
util.URLJoin(setting.AppURL, strings.Replace(ctx.Req.URL.RequestURI(), refName, ctx.Repo.Commit.ID.String(), 1))))
}
} else {
+if len(ignoreNotExistErr) > 0 && ignoreNotExistErr[0] {
+return
+}
ctx.NotFound("RepoRef invalid repo", fmt.Errorf("branch or tag not exist: %s", refName))
return
}
@@ -851,9 +846,7 @@ func RepoRefByType(refType RepoRefType) func(http.Handler) http.Handler {
return
}
ctx.Data["CommitsCount"] = ctx.Repo.CommitsCount
+return
-next.ServeHTTP(w, req)
-})
}
}


@@ -4,7 +4,9 @@
package context
-import "net/http"
+import (
+"net/http"
+)
// ResponseWriter represents a response writer for HTTP
type ResponseWriter interface {
@@ -60,9 +62,11 @@ func (r *Response) WriteHeader(statusCode int) {
}
r.beforeExecuted = true
}
+if r.status == 0 {
r.status = statusCode
r.ResponseWriter.WriteHeader(statusCode)
}
+}
// Flush flush cached data
func (r *Response) Flush() {


@@ -47,6 +47,11 @@ func ToNotificationThread(n *models.Notification) *api.NotificationThread {
if err == nil && comment != nil { if err == nil && comment != nil {
result.Subject.LatestCommentURL = comment.APIURL() result.Subject.LatestCommentURL = comment.APIURL()
} }
pr, _ := n.Issue.GetPullRequest()
if pr != nil && pr.HasMerged {
result.Subject.State = "merged"
}
} }
case models.NotificationSourceCommit: case models.NotificationSourceCommit:
result.Subject = &api.NotificationSubject{ result.Subject = &api.NotificationSubject{


@@ -85,18 +85,17 @@ func ToPullReviewCommentList(review *models.Review, doer *models.User) ([]*api.P
apiComments := make([]*api.PullReviewComment, 0, len(review.CodeComments))
-auth := false
-if doer != nil {
-auth = doer.IsAdmin || doer.ID == review.ReviewerID
-}
for _, lines := range review.CodeComments {
for _, comments := range lines {
for _, comment := range comments {
+auth := false
+if doer != nil {
+auth = doer.IsAdmin || doer.ID == comment.Poster.ID
+}
apiComment := &api.PullReviewComment{
ID: comment.ID,
Body: comment.Content,
-Reviewer: ToUser(review.Reviewer, doer != nil, auth),
+Reviewer: ToUser(comment.Poster, doer != nil, auth),
ReviewID: review.ID,
Created: comment.CreatedUnix.AsTime(),
Updated: comment.UpdatedUnix.AsTime(),


@@ -23,13 +23,13 @@ func checkDBConsistency(logger log.Logger, autofix bool) error {
// find labels without existing repo or org
count, err := models.CountOrphanedLabels()
if err != nil {
-logger.Critical("Error: %v whilst counting orphaned labels")
+logger.Critical("Error: %v whilst counting orphaned labels", err)
return err
}
if count > 0 {
if autofix {
if err = models.DeleteOrphanedLabels(); err != nil {
-logger.Critical("Error: %v whilst deleting orphaned labels")
+logger.Critical("Error: %v whilst deleting orphaned labels", err)
return err
}
logger.Info("%d labels without existing repository/organisation deleted", count)
@@ -41,13 +41,13 @@ func checkDBConsistency(logger log.Logger, autofix bool) error {
// find IssueLabels without existing label
count, err = models.CountOrphanedIssueLabels()
if err != nil {
-logger.Critical("Error: %v whilst counting orphaned issue_labels")
+logger.Critical("Error: %v whilst counting orphaned issue_labels", err)
return err
}
if count > 0 {
if autofix {
if err = models.DeleteOrphanedIssueLabels(); err != nil {
-logger.Critical("Error: %v whilst deleting orphaned issue_labels")
+logger.Critical("Error: %v whilst deleting orphaned issue_labels", err)
return err
}
logger.Info("%d issue_labels without existing label deleted", count)
@@ -59,13 +59,13 @@ func checkDBConsistency(logger log.Logger, autofix bool) error {
// find issues without existing repository
count, err = models.CountOrphanedIssues()
if err != nil {
-logger.Critical("Error: %v whilst counting orphaned issues")
+logger.Critical("Error: %v whilst counting orphaned issues", err)
return err
}
if count > 0 {
if autofix {
if err = models.DeleteOrphanedIssues(); err != nil {
-logger.Critical("Error: %v whilst deleting orphaned issues")
+logger.Critical("Error: %v whilst deleting orphaned issues", err)
return err
}
logger.Info("%d issues without existing repository deleted", count)
@@ -77,13 +77,13 @@ func checkDBConsistency(logger log.Logger, autofix bool) error {
// find pulls without existing issues
count, err = models.CountOrphanedObjects("pull_request", "issue", "pull_request.issue_id=issue.id")
if err != nil {
-logger.Critical("Error: %v whilst counting orphaned objects")
+logger.Critical("Error: %v whilst counting orphaned objects", err)
return err
}
if count > 0 {
if autofix {
if err = models.DeleteOrphanedObjects("pull_request", "issue", "pull_request.issue_id=issue.id"); err != nil {
-logger.Critical("Error: %v whilst deleting orphaned objects")
+logger.Critical("Error: %v whilst deleting orphaned objects", err)
return err
}
logger.Info("%d pull requests without existing issue deleted", count)
@@ -95,13 +95,13 @@ func checkDBConsistency(logger log.Logger, autofix bool) error {
// find tracked times without existing issues/pulls
count, err = models.CountOrphanedObjects("tracked_time", "issue", "tracked_time.issue_id=issue.id")
if err != nil {
-logger.Critical("Error: %v whilst counting orphaned objects")
+logger.Critical("Error: %v whilst counting orphaned objects", err)
return err
}
if count > 0 {
if autofix {
if err = models.DeleteOrphanedObjects("tracked_time", "issue", "tracked_time.issue_id=issue.id"); err != nil {
-logger.Critical("Error: %v whilst deleting orphaned objects")
+logger.Critical("Error: %v whilst deleting orphaned objects", err)
return err
}
logger.Info("%d tracked times without existing issue deleted", count)
@@ -113,14 +113,14 @@ func checkDBConsistency(logger log.Logger, autofix bool) error {
// find null archived repositories
count, err = models.CountNullArchivedRepository()
if err != nil {
-logger.Critical("Error: %v whilst counting null archived repositories")
+logger.Critical("Error: %v whilst counting null archived repositories", err)
return err
}
if count > 0 {
if autofix {
updatedCount, err := models.FixNullArchivedRepository()
if err != nil {
-logger.Critical("Error: %v whilst fixing null archived repositories")
+logger.Critical("Error: %v whilst fixing null archived repositories", err)
return err
}
logger.Info("%d repositories with null is_archived updated", updatedCount)
@@ -132,14 +132,14 @@ func checkDBConsistency(logger log.Logger, autofix bool) error {
// find label comments with empty labels
count, err = models.CountCommentTypeLabelWithEmptyLabel()
if err != nil {
-logger.Critical("Error: %v whilst counting label comments with empty labels")
+logger.Critical("Error: %v whilst counting label comments with empty labels", err)
return err
}
if count > 0 {
if autofix {
updatedCount, err := models.FixCommentTypeLabelWithEmptyLabel()
if err != nil {
-logger.Critical("Error: %v whilst removing label comments with empty labels")
+logger.Critical("Error: %v whilst removing label comments with empty labels", err)
return err
}
logger.Info("%d label comments with empty labels removed", updatedCount)
@@ -191,13 +191,14 @@ func checkDBConsistency(logger log.Logger, autofix bool) error {
if setting.Database.UsePostgreSQL {
count, err = models.CountBadSequences()
if err != nil {
-logger.Critical("Error: %v whilst checking sequence values")
+logger.Critical("Error: %v whilst checking sequence values", err)
+return err
}
if count > 0 {
if autofix {
err := models.FixBadSequences()
if err != nil {
-logger.Critical("Error: %v whilst attempting to fix sequences")
+logger.Critical("Error: %v whilst attempting to fix sequences", err)
return err
}
logger.Info("%d sequences updated", count)
@@ -207,6 +208,60 @@ func checkDBConsistency(logger log.Logger, autofix bool) error {
}
}
// find protected branches without existing repository
count, err = models.CountOrphanedObjects("protected_branch", "repository", "protected_branch.repo_id=repository.id")
if err != nil {
logger.Critical("Error: %v whilst counting orphaned objects", err)
return err
}
if count > 0 {
if autofix {
if err = models.DeleteOrphanedObjects("protected_branch", "repository", "protected_branch.repo_id=repository.id"); err != nil {
logger.Critical("Error: %v whilst deleting orphaned objects", err)
return err
}
logger.Info("%d protected branches without existing repository deleted", count)
} else {
logger.Warn("%d protected branches without existing repository", count)
}
}
// find deleted branches without existing repository
count, err = models.CountOrphanedObjects("deleted_branch", "repository", "deleted_branch.repo_id=repository.id")
if err != nil {
logger.Critical("Error: %v whilst counting orphaned objects", err)
return err
}
if count > 0 {
if autofix {
if err = models.DeleteOrphanedObjects("deleted_branch", "repository", "deleted_branch.repo_id=repository.id"); err != nil {
logger.Critical("Error: %v whilst deleting orphaned objects", err)
return err
}
logger.Info("%d deleted branches without existing repository deleted", count)
} else {
logger.Warn("%d deleted branches without existing repository", count)
}
}
// find LFS locks without existing repository
count, err = models.CountOrphanedObjects("lfs_lock", "repository", "lfs_lock.repo_id=repository.id")
if err != nil {
logger.Critical("Error: %v whilst counting orphaned objects", err)
return err
}
if count > 0 {
if autofix {
if err = models.DeleteOrphanedObjects("lfs_lock", "repository", "lfs_lock.repo_id=repository.id"); err != nil {
logger.Critical("Error: %v whilst deleting orphaned objects", err)
return err
}
logger.Info("%d LFS locks without existing repository deleted", count)
} else {
logger.Warn("%d LFS locks without existing repository", count)
}
}
return nil
}


@@ -149,10 +149,10 @@ headerLoop:
// constant hextable to help quickly convert between 20byte and 40byte hashes
const hextable = "0123456789abcdef"
-// to40ByteSHA converts a 20-byte SHA in a 40-byte slice into a 40-byte sha in place
+// To40ByteSHA converts a 20-byte SHA in a 40-byte slice into a 40-byte sha in place
// without allocations. This is at least 100x quicker that hex.EncodeToString
// NB This requires that sha is a 40-byte slice
-func to40ByteSHA(sha []byte) []byte {
+func To40ByteSHA(sha []byte) []byte {
for i := 19; i >= 0; i-- {
v := sha[i]
vhi, vlo := v>>4, v&0x0f


@@ -7,6 +7,7 @@ package git
import (
"io/ioutil"
+"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
@@ -14,32 +15,15 @@ import (
)
func TestBlob_Data(t *testing.T) {
-output := `Copyright (c) 2016 The Gitea Authors
-Copyright (c) 2015 The Gogs Authors
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-The above copyright notice and this permission notice shall be included in
-all copies or substantial portions of the Software.
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
-THE SOFTWARE.
-`
-repo, err := OpenRepository("../../.git")
-assert.NoError(t, err)
+output := "file2\n"
+bareRepo1Path := filepath.Join(testReposDir, "repo1_bare")
+repo, err := OpenRepository(bareRepo1Path)
+if !assert.NoError(t, err) {
+t.Fatal()
+}
defer repo.Close()
-testBlob, err := repo.GetBlob("a8d4b49dd073a4a38a7e58385eeff7cc52568697")
+testBlob, err := repo.GetBlob("6c493ff740f9380390d5c9ddef4af18697ac9375")
assert.NoError(t, err)
r, err := testBlob.DataAsync()
@@ -53,13 +37,14 @@ THE SOFTWARE.
}
func Benchmark_Blob_Data(b *testing.B) {
-repo, err := OpenRepository("../../.git")
+bareRepo1Path := filepath.Join(testReposDir, "repo1_bare")
+repo, err := OpenRepository(bareRepo1Path)
if err != nil {
b.Fatal(err)
}
defer repo.Close()
-testBlob, err := repo.GetBlob("a8d4b49dd073a4a38a7e58385eeff7cc52568697")
+testBlob, err := repo.GetBlob("6c493ff740f9380390d5c9ddef4af18697ac9375")
if err != nil {
b.Fatal(err)
}


@@ -102,10 +102,13 @@ func (tes Entries) GetCommitsInfo(commit *Commit, treePath string, cache *LastCo
}
func getLastCommitForPathsByCache(commitID, treePath string, paths []string, cache *LastCommitCache) (map[string]*Commit, []string, error) {
+wr, rd, cancel := CatFileBatch(cache.repo.Path)
+defer cancel()
var unHitEntryPaths []string
var results = make(map[string]*Commit)
for _, p := range paths {
-lastCommit, err := cache.Get(commitID, path.Join(treePath, p))
+lastCommit, err := cache.Get(commitID, path.Join(treePath, p), wr, rd)
if err != nil {
return nil, nil, err
}
@@ -300,7 +303,7 @@ revListLoop:
commits[0] = string(commitID)
}
}
-treeID = to40ByteSHA(treeID)
+treeID = To40ByteSHA(treeID)
_, err = batchStdinWriter.Write(treeID)
if err != nil {
return nil, err


@@ -47,7 +47,7 @@ func GetRawDiffForFile(repoPath, startCommit, endCommit string, diffType RawDiff
func GetRepoRawDiffForFile(repo *Repository, startCommit, endCommit string, diffType RawDiffType, file string, writer io.Writer) error {
commit, err := repo.GetCommit(endCommit)
if err != nil {
-return fmt.Errorf("GetCommit: %v", err)
+return err
}
fileArgs := make([]string, 0)
if len(file) > 0 {


@@ -7,6 +7,8 @@
package git
import (
+"bufio"
+"io"
"path"
)
@@ -34,7 +36,7 @@ func NewLastCommitCache(repoPath string, gitRepo *Repository, ttl func() int64,
}
// Get get the last commit information by commit id and entry path
-func (c *LastCommitCache) Get(ref, entryPath string) (interface{}, error) {
+func (c *LastCommitCache) Get(ref, entryPath string, wr *io.PipeWriter, rd *bufio.Reader) (interface{}, error) {
v := c.cache.Get(c.getCacheKey(c.repoPath, ref, entryPath))
if vs, ok := v.(string); ok {
log("LastCommitCache hit level 1: [%s:%s:%s]", ref, entryPath, vs)
@@ -46,7 +48,10 @@ func (c *LastCommitCache) Get(ref, entryPath string) (interface{}, error) {
if err != nil {
return nil, err
}
-commit, err := c.repo.getCommit(id)
+if _, err := wr.Write([]byte(vs + "\n")); err != nil {
+return nil, err
+}
+commit, err := c.repo.getCommitFromBatchReader(rd, id)
if err != nil {
return nil, err
}


@@ -8,6 +8,7 @@ package git
import (
"io/ioutil"
+"strings"
)
// GetNote retrieves the git-notes data for a given commit.
@@ -49,7 +50,13 @@ func GetNote(repo *Repository, commitID string, note *Note) error {
}
note.Message = d
-lastCommits, err := GetLastCommitForPaths(notes, "", []string{path})
+treePath := ""
+if idx := strings.LastIndex(path, "/"); idx > -1 {
+treePath = path[:idx]
+path = path[idx+1:]
+}
+lastCommits, err := GetLastCommitForPaths(notes, treePath, []string{path})
if err != nil {
return err
}


@@ -127,11 +127,12 @@ func FindLFSFile(repo *git.Repository, hash git.SHA1) ([]*LFSResult, error) {
case "tree": case "tree":
var n int64 var n int64
for n < size { for n < size {
mode, fname, sha, count, err := git.ParseTreeLine(batchReader, modeBuf, fnameBuf, workingShaBuf) mode, fname, sha20byte, count, err := git.ParseTreeLine(batchReader, modeBuf, fnameBuf, workingShaBuf)
if err != nil { if err != nil {
return nil, err return nil, err
} }
n += int64(count) n += int64(count)
sha := git.To40ByteSHA(sha20byte)
if bytes.Equal(sha, []byte(hashStr)) { if bytes.Equal(sha, []byte(hashStr)) {
result := LFSResult{ result := LFSResult{
Name: curPath + string(fname), Name: curPath + string(fname),


@@ -21,14 +21,7 @@ func (repo *Repository) GetBranchCommitID(name string) (string, error) {
// GetTagCommitID returns last commit ID string of given tag.
func (repo *Repository) GetTagCommitID(name string) (string, error) {
-stdout, err := NewCommand("rev-list", "-n", "1", TagPrefix+name).RunInDir(repo.Path)
-if err != nil {
-if strings.Contains(err.Error(), "unknown revision or path") {
-return "", ErrNotExist{name, ""}
-}
-return "", err
-}
-return strings.TrimSpace(stdout), nil
+return repo.GetRefCommitID(TagPrefix + name)
}
// ConvertToSHA1 returns a Hash object from a potential ID string


@@ -9,9 +9,10 @@ package git
import (
"bufio"
"errors"
-"fmt"
"io"
"io/ioutil"
+"os"
+"path/filepath"
"strings"
)
@@ -34,6 +35,18 @@ func (repo *Repository) ResolveReference(name string) (string, error) {
// GetRefCommitID returns the last commit ID string of given reference (branch or tag).
func (repo *Repository) GetRefCommitID(name string) (string, error) {
if strings.HasPrefix(name, "refs/") {
// We're gonna try just reading the ref file as this is likely to be quicker than other options
fileInfo, err := os.Lstat(filepath.Join(repo.Path, name))
if err == nil && fileInfo.Mode().IsRegular() && fileInfo.Size() == 41 {
ref, err := ioutil.ReadFile(filepath.Join(repo.Path, name))
if err == nil && SHAPattern.Match(ref[:40]) && ref[40] == '\n' {
return string(ref[:40]), nil
}
}
}
stdout, err := NewCommand("show-ref", "--verify", "--hash", name).RunInDir(repo.Path) stdout, err := NewCommand("show-ref", "--verify", "--hash", name).RunInDir(repo.Path)
if err != nil { if err != nil {
if strings.Contains(err.Error(), "not a valid ref") { if strings.Contains(err.Error(), "not a valid ref") {
@@ -69,6 +82,11 @@ func (repo *Repository) getCommit(id SHA1) (*Commit, error) {
}()
bufReader := bufio.NewReader(stdoutReader)
return repo.getCommitFromBatchReader(bufReader, id)
}
func (repo *Repository) getCommitFromBatchReader(bufReader *bufio.Reader, id SHA1) (*Commit, error) {
_, typ, size, err := ReadBatchLine(bufReader)
if err != nil {
if errors.Is(err, io.EOF) {
@@ -106,7 +124,6 @@ func (repo *Repository) getCommit(id SHA1) (*Commit, error) {
case "commit": case "commit":
return CommitFromReader(repo, id, io.LimitReader(bufReader, size)) return CommitFromReader(repo, id, io.LimitReader(bufReader, size))
default: default:
_ = stdoutReader.CloseWithError(fmt.Errorf("unknown typ: %s", typ))
log("Unknown typ: %s", typ) log("Unknown typ: %s", typ)
return nil, ErrNotExist{ return nil, ErrNotExist{
ID: id.String(), ID: id.String(),


@@ -43,7 +43,7 @@ func (repo *Repository) GetLanguageStats(commitID string) (map[string]int64, err
sizes := make(map[string]int64)
err = tree.Files().ForEach(func(f *object.File) error {
-if f.Size == 0 || enry.IsVendor(f.Name) || enry.IsDotFile(f.Name) ||
+if f.Size == 0 || analyze.IsVendor(f.Name) || enry.IsDotFile(f.Name) ||
enry.IsDocumentation(f.Name) || enry.IsConfiguration(f.Name) {
return nil
}


@@ -67,7 +67,7 @@ func (repo *Repository) GetLanguageStats(commitID string) (map[string]int64, err
for _, f := range entries {
contentBuf.Reset()
content = contentBuf.Bytes()
-if f.Size() == 0 || enry.IsVendor(f.Name()) || enry.IsDotFile(f.Name()) ||
+if f.Size() == 0 || analyze.IsVendor(f.Name()) || enry.IsDotFile(f.Name()) ||
enry.IsDocumentation(f.Name()) || enry.IsConfiguration(f.Name()) {
continue
}


@@ -7,6 +7,7 @@ package gitgraph
import (
"bytes"
"fmt"
+"strings"
"code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/git"
@@ -216,10 +217,10 @@ func newRefsFromRefNames(refNames []byte) []git.Reference {
continue
}
refName := string(refNameBytes)
-if refName[0:5] == "tag: " {
-refName = refName[5:]
-} else if refName[0:8] == "HEAD -> " {
-refName = refName[8:]
+if strings.HasPrefix(refName, "tag: ") {
+refName = strings.TrimPrefix(refName, "tag: ")
+} else if strings.HasPrefix(refName, "HEAD -> ") {
+refName = strings.TrimPrefix(refName, "HEAD -> ")
}
refs = append(refs, git.Reference{
Name: refName,


@@ -68,17 +68,21 @@ func (g *Manager) start() {
// Set the running state
g.setState(stateRunning)
if skip, _ := strconv.ParseBool(os.Getenv("SKIP_MINWINSVC")); skip {
+log.Trace("Skipping SVC check as SKIP_MINWINSVC is set")
return
}
// Make SVC process
run := svc.Run
-isInteractive, err := svc.IsWindowsService()
+//lint:ignore SA1019 We use IsAnInteractiveSession because IsWindowsService has a different permissions profile
+isAnInteractiveSession, err := svc.IsAnInteractiveSession()
if err != nil {
-log.Error("Unable to ascertain if running as an Interactive Session: %v", err)
+log.Error("Unable to ascertain if running as an Windows Service: %v", err)
return
}
-if isInteractive {
+if isAnInteractiveSession {
+log.Trace("Not running a service ... using the debug SVC manager")
run = debug.Run
}
go func() {
@@ -94,38 +98,49 @@ func (g *Manager) Execute(args []string, changes <-chan svc.ChangeRequest, statu
status <- svc.Status{State: svc.StartPending, WaitHint: uint32(setting.StartupTimeout / time.Millisecond)}
}
+log.Trace("Awaiting server start-up")
// Now need to wait for everything to start...
if !g.awaitServer(setting.StartupTimeout) {
+log.Trace("... start-up failed ... Stopped")
return false, 1
}
+log.Trace("Sending Running state to SVC")
// We need to implement some way of svc.AcceptParamChange/svc.ParamChange
status <- svc.Status{
State: svc.Running,
Accepts: svc.AcceptStop | svc.AcceptShutdown | acceptHammerCode,
}
+log.Trace("Started")
waitTime := 30 * time.Second
loop:
for {
select {
case <-g.ctx.Done():
+log.Trace("Shutting down")
g.DoGracefulShutdown()
waitTime += setting.GracefulHammerTime
break loop
case <-g.shutdownRequested:
+log.Trace("Shutting down")
waitTime += setting.GracefulHammerTime
break loop
case change := <-changes:
switch change.Cmd {
case svc.Interrogate:
+log.Trace("SVC sent interrogate")
status <- change.CurrentStatus
case svc.Stop, svc.Shutdown:
+log.Trace("SVC requested shutdown - shutting down")
g.DoGracefulShutdown()
waitTime += setting.GracefulHammerTime
break loop
case hammerCode:
+log.Trace("SVC requested hammer - shutting down and hammering immediately")
g.DoGracefulShutdown()
g.DoImmediateHammer()
break loop
@@ -134,6 +149,8 @@ loop:
}
}
}
+log.Trace("Sending StopPending state to SVC")
status <- svc.Status{
State: svc.StopPending,
WaitHint: uint32(waitTime / time.Millisecond),
@@ -145,8 +162,10 @@ hammerLoop:
case change := <-changes:
switch change.Cmd {
case svc.Interrogate:
+log.Trace("SVC sent interrogate")
status <- change.CurrentStatus
case svc.Stop, svc.Shutdown, hammerCmd:
+log.Trace("SVC requested hammer - hammering immediately")
g.DoImmediateHammer()
break hammerLoop
default:
@@ -156,6 +175,8 @@ hammerLoop:
break hammerLoop
}
}
+log.Trace("Stopped")
return false, 0
}

View File

@@ -10,6 +10,7 @@ import (
"net/http" "net/http"
"os" "os"
"strconv" "strconv"
"strings"
"time" "time"
"code.gitea.io/gitea/modules/setting" "code.gitea.io/gitea/modules/setting"
@@ -26,11 +27,13 @@ func GetCacheControl() string {
// generateETag generates an ETag based on size, filename and file modification time // generateETag generates an ETag based on size, filename and file modification time
func generateETag(fi os.FileInfo) string { func generateETag(fi os.FileInfo) string {
etag := fmt.Sprint(fi.Size()) + fi.Name() + fi.ModTime().UTC().Format(http.TimeFormat) etag := fmt.Sprint(fi.Size()) + fi.Name() + fi.ModTime().UTC().Format(http.TimeFormat)
return base64.StdEncoding.EncodeToString([]byte(etag)) return `"` + base64.StdEncoding.EncodeToString([]byte(etag)) + `"`
} }
// HandleTimeCache handles time-based caching for a HTTP request // HandleTimeCache handles time-based caching for a HTTP request
func HandleTimeCache(req *http.Request, w http.ResponseWriter, fi os.FileInfo) (handled bool) { func HandleTimeCache(req *http.Request, w http.ResponseWriter, fi os.FileInfo) (handled bool) {
w.Header().Set("Cache-Control", GetCacheControl())
ifModifiedSince := req.Header.Get("If-Modified-Since") ifModifiedSince := req.Header.Get("If-Modified-Since")
if ifModifiedSince != "" { if ifModifiedSince != "" {
t, err := time.Parse(http.TimeFormat, ifModifiedSince) t, err := time.Parse(http.TimeFormat, ifModifiedSince)
@@ -40,20 +43,40 @@ func HandleTimeCache(req *http.Request, w http.ResponseWriter, fi os.FileInfo) (
} }
} }
w.Header().Set("Cache-Control", GetCacheControl())
w.Header().Set("Last-Modified", fi.ModTime().Format(http.TimeFormat)) w.Header().Set("Last-Modified", fi.ModTime().Format(http.TimeFormat))
return false return false
} }
// HandleEtagCache handles ETag-based caching for a HTTP request // HandleFileETagCache handles ETag-based caching for a HTTP request
func HandleEtagCache(req *http.Request, w http.ResponseWriter, fi os.FileInfo) (handled bool) { func HandleFileETagCache(req *http.Request, w http.ResponseWriter, fi os.FileInfo) (handled bool) {
etag := generateETag(fi) etag := generateETag(fi)
if req.Header.Get("If-None-Match") == etag { return HandleGenericETagCache(req, w, etag)
}
// HandleGenericETagCache handles ETag-based caching for a HTTP request.
// It returns true if the request was handled.
func HandleGenericETagCache(req *http.Request, w http.ResponseWriter, etag string) (handled bool) {
if len(etag) > 0 {
w.Header().Set("Etag", etag)
if checkIfNoneMatchIsValid(req, etag) {
w.WriteHeader(http.StatusNotModified) w.WriteHeader(http.StatusNotModified)
return true return true
} }
}
w.Header().Set("Cache-Control", GetCacheControl()) w.Header().Set("Cache-Control", GetCacheControl())
w.Header().Set("ETag", etag) return false
}
// checkIfNoneMatchIsValid tests if the header If-None-Match matches the ETag
func checkIfNoneMatchIsValid(req *http.Request, etag string) bool {
ifNoneMatch := req.Header.Get("If-None-Match")
if len(ifNoneMatch) > 0 {
for _, item := range strings.Split(ifNoneMatch, ",") {
item = strings.TrimSpace(item)
if item == etag {
return true
}
}
}
return false return false
} }
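
The new helpers split ETag handling into a file-based wrapper, a generic entry point, and an If-None-Match comparison that copes with comma-separated values. A minimal caller-side sketch (the package, handler and ETag derivation below are illustrative, not part of the patch):

package example

import (
	"crypto/sha256"
	"encoding/base64"
	"net/http"

	"code.gitea.io/gitea/modules/httpcache"
)

// serveVersioned is a hypothetical handler built on HandleGenericETagCache.
func serveVersioned(payload []byte) http.HandlerFunc {
	sum := sha256.Sum256(payload)
	etag := `"` + base64.StdEncoding.EncodeToString(sum[:]) + `"` // ETag values are sent quoted

	return func(w http.ResponseWriter, req *http.Request) {
		// A matching If-None-Match (possibly one of several comma-separated values)
		// writes 304 Not Modified and we stop; otherwise Cache-Control is set and
		// the body is served as usual.
		if httpcache.HandleGenericETagCache(req, w, etag) {
			return
		}
		_, _ = w.Write(payload)
	}
}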

View File

@@ -0,0 +1,144 @@
// Copyright 2021 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package httpcache
import (
"net/http"
"net/http/httptest"
"os"
"testing"
"time"
"github.com/stretchr/testify/assert"
)
type mockFileInfo struct {
}
func (m mockFileInfo) Name() string { return "gitea.test" }
func (m mockFileInfo) Size() int64 { return int64(10) }
func (m mockFileInfo) Mode() os.FileMode { return os.ModePerm }
func (m mockFileInfo) ModTime() time.Time { return time.Time{} }
func (m mockFileInfo) IsDir() bool { return false }
func (m mockFileInfo) Sys() interface{} { return nil }
func TestHandleFileETagCache(t *testing.T) {
fi := mockFileInfo{}
etag := `"MTBnaXRlYS50ZXN0TW9uLCAwMSBKYW4gMDAwMSAwMDowMDowMCBHTVQ="`
t.Run("No_If-None-Match", func(t *testing.T) {
req := &http.Request{Header: make(http.Header)}
w := httptest.NewRecorder()
handled := HandleFileETagCache(req, w, fi)
assert.False(t, handled)
assert.Len(t, w.Header(), 2)
assert.Contains(t, w.Header(), "Cache-Control")
assert.Contains(t, w.Header(), "Etag")
assert.Equal(t, etag, w.Header().Get("Etag"))
})
t.Run("Wrong_If-None-Match", func(t *testing.T) {
req := &http.Request{Header: make(http.Header)}
w := httptest.NewRecorder()
req.Header.Set("If-None-Match", `"wrong etag"`)
handled := HandleFileETagCache(req, w, fi)
assert.False(t, handled)
assert.Len(t, w.Header(), 2)
assert.Contains(t, w.Header(), "Cache-Control")
assert.Contains(t, w.Header(), "Etag")
assert.Equal(t, etag, w.Header().Get("Etag"))
})
t.Run("Correct_If-None-Match", func(t *testing.T) {
req := &http.Request{Header: make(http.Header)}
w := httptest.NewRecorder()
req.Header.Set("If-None-Match", etag)
handled := HandleFileETagCache(req, w, fi)
assert.True(t, handled)
assert.Len(t, w.Header(), 1)
assert.Contains(t, w.Header(), "Etag")
assert.Equal(t, etag, w.Header().Get("Etag"))
assert.Equal(t, http.StatusNotModified, w.Code)
})
}
func TestHandleGenericETagCache(t *testing.T) {
etag := `"test"`
t.Run("No_If-None-Match", func(t *testing.T) {
req := &http.Request{Header: make(http.Header)}
w := httptest.NewRecorder()
handled := HandleGenericETagCache(req, w, etag)
assert.False(t, handled)
assert.Len(t, w.Header(), 2)
assert.Contains(t, w.Header(), "Cache-Control")
assert.Contains(t, w.Header(), "Etag")
assert.Equal(t, etag, w.Header().Get("Etag"))
})
t.Run("Wrong_If-None-Match", func(t *testing.T) {
req := &http.Request{Header: make(http.Header)}
w := httptest.NewRecorder()
req.Header.Set("If-None-Match", `"wrong etag"`)
handled := HandleGenericETagCache(req, w, etag)
assert.False(t, handled)
assert.Len(t, w.Header(), 2)
assert.Contains(t, w.Header(), "Cache-Control")
assert.Contains(t, w.Header(), "Etag")
assert.Equal(t, etag, w.Header().Get("Etag"))
})
t.Run("Correct_If-None-Match", func(t *testing.T) {
req := &http.Request{Header: make(http.Header)}
w := httptest.NewRecorder()
req.Header.Set("If-None-Match", etag)
handled := HandleGenericETagCache(req, w, etag)
assert.True(t, handled)
assert.Len(t, w.Header(), 1)
assert.Contains(t, w.Header(), "Etag")
assert.Equal(t, etag, w.Header().Get("Etag"))
assert.Equal(t, http.StatusNotModified, w.Code)
})
t.Run("Multiple_Wrong_If-None-Match", func(t *testing.T) {
req := &http.Request{Header: make(http.Header)}
w := httptest.NewRecorder()
req.Header.Set("If-None-Match", `"wrong etag", "wrong etag "`)
handled := HandleGenericETagCache(req, w, etag)
assert.False(t, handled)
assert.Len(t, w.Header(), 2)
assert.Contains(t, w.Header(), "Cache-Control")
assert.Contains(t, w.Header(), "Etag")
assert.Equal(t, etag, w.Header().Get("Etag"))
})
t.Run("Multiple_Correct_If-None-Match", func(t *testing.T) {
req := &http.Request{Header: make(http.Header)}
w := httptest.NewRecorder()
req.Header.Set("If-None-Match", `"wrong etag", `+etag)
handled := HandleGenericETagCache(req, w, etag)
assert.True(t, handled)
assert.Len(t, w.Header(), 1)
assert.Contains(t, w.Header(), "Etag")
assert.Equal(t, etag, w.Header().Get("Etag"))
assert.Equal(t, http.StatusNotModified, w.Code)
})
}

View File

@@ -325,7 +325,7 @@ func (r *Request) getResponse() (*http.Response, error) {
 trans = &http.Transport{
 TLSClientConfig: r.setting.TLSClientConfig,
 Proxy: proxy,
-Dial: TimeoutDialer(r.setting.ConnectTimeout, r.setting.ReadWriteTimeout),
+Dial: TimeoutDialer(r.setting.ConnectTimeout),
 }
 } else if t, ok := trans.(*http.Transport); ok {
 if t.TLSClientConfig == nil {
@@ -335,7 +335,7 @@ func (r *Request) getResponse() (*http.Response, error) {
 t.Proxy = r.setting.Proxy
 }
 if t.Dial == nil {
-t.Dial = TimeoutDialer(r.setting.ConnectTimeout, r.setting.ReadWriteTimeout)
+t.Dial = TimeoutDialer(r.setting.ConnectTimeout)
 }
 }
@@ -352,6 +352,7 @@ func (r *Request) getResponse() (*http.Response, error) {
 client := &http.Client{
 Transport: trans,
 Jar: jar,
+Timeout: r.setting.ReadWriteTimeout,
 }
 if len(r.setting.UserAgent) > 0 && len(r.req.Header.Get("User-Agent")) == 0 {
@@ -457,12 +458,12 @@ func (r *Request) Response() (*http.Response, error) {
 }
 // TimeoutDialer returns functions of connection dialer with timeout settings for http.Transport Dial field.
-func TimeoutDialer(cTimeout time.Duration, rwTimeout time.Duration) func(net, addr string) (c net.Conn, err error) {
+func TimeoutDialer(cTimeout time.Duration) func(net, addr string) (c net.Conn, err error) {
 return func(netw, addr string) (net.Conn, error) {
 conn, err := net.DialTimeout(netw, addr, cTimeout)
 if err != nil {
 return nil, err
 }
-return conn, conn.SetDeadline(time.Now().Add(rwTimeout))
+return conn, nil
 }
 }
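
The dialer now only bounds connection set-up; the read/write deadline that used to be stamped on every connection moves to the client-level Timeout. A sketch of the resulting configuration (names and durations are illustrative):

package example

import (
	"net"
	"net/http"
	"time"
)

// newClient mirrors the pattern above: DialTimeout limits how long the TCP
// connect may take, while http.Client.Timeout bounds the whole request,
// replacing the old per-connection SetDeadline.
func newClient(connectTimeout, readWriteTimeout time.Duration) *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			Dial: func(netw, addr string) (net.Conn, error) {
				return net.DialTimeout(netw, addr, connectTimeout)
			},
		},
		Timeout: readWriteTimeout,
	}
}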

View File

@@ -178,7 +178,7 @@ func NewBleveIndexer(indexDir string) (*BleveIndexer, bool, error) {
func (b *BleveIndexer) addUpdate(batchWriter *io.PipeWriter, batchReader *bufio.Reader, commitSha string, update fileUpdate, repo *models.Repository, batch rupture.FlushingBatch) error { func (b *BleveIndexer) addUpdate(batchWriter *io.PipeWriter, batchReader *bufio.Reader, commitSha string, update fileUpdate, repo *models.Repository, batch rupture.FlushingBatch) error {
// Ignore vendored files in code search // Ignore vendored files in code search
-if setting.Indexer.ExcludeVendored && enry.IsVendor(update.Filename) {
+if setting.Indexer.ExcludeVendored && analyze.IsVendor(update.Filename) {
return nil return nil
} }

View File

@@ -177,7 +177,7 @@ func (b *ElasticSearchIndexer) init() (bool, error) {
func (b *ElasticSearchIndexer) addUpdate(batchWriter *io.PipeWriter, batchReader *bufio.Reader, sha string, update fileUpdate, repo *models.Repository) ([]elastic.BulkableRequest, error) { func (b *ElasticSearchIndexer) addUpdate(batchWriter *io.PipeWriter, batchReader *bufio.Reader, sha string, update fileUpdate, repo *models.Repository) ([]elastic.BulkableRequest, error) {
// Ignore vendored files in code search // Ignore vendored files in code search
-if setting.Indexer.ExcludeVendored && enry.IsVendor(update.Filename) {
+if setting.Indexer.ExcludeVendored && analyze.IsVendor(update.Filename) {
return nil, nil return nil, nil
} }

View File

@@ -38,7 +38,11 @@ func (db *DBIndexer) Index(id int64) error {
// Get latest commit for default branch // Get latest commit for default branch
commitID, err := gitRepo.GetBranchCommitID(repo.DefaultBranch) commitID, err := gitRepo.GetBranchCommitID(repo.DefaultBranch)
if err != nil { if err != nil {
log.Error("Unable to get commit ID for defaultbranch %s in %s", repo.DefaultBranch, repo.RepoPath()) if git.IsErrBranchNotExist(err) || git.IsErrNotExist((err)) {
log.Debug("Unable to get commit ID for defaultbranch %s in %s ... skipping this repository", repo.DefaultBranch, repo.RepoPath())
return nil
}
log.Error("Unable to get commit ID for defaultbranch %s in %s. Error: %v", repo.DefaultBranch, repo.RepoPath(), err)
return err return err
} }
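
The stats indexer now distinguishes a missing default branch (debug-log and skip the repository) from a genuine failure (log and return the error). The same pattern in isolation, with a stand-in error value rather than the real git error types:

package example

import (
	"errors"
	"fmt"
)

// errBranchNotExist stands in for git.ErrBranchNotExist / ErrNotExist here.
var errBranchNotExist = errors.New("branch does not exist")

// indexDefaultBranch skips repositories whose branch is gone and only fails
// the run for unexpected errors.
func indexDefaultBranch(getCommitID func() (string, error)) error {
	commitID, err := getCommitID()
	if err != nil {
		if errors.Is(err, errBranchNotExist) {
			return nil // empty or broken repository: skip it
		}
		return fmt.Errorf("unable to get commit ID: %w", err)
	}
	_ = commitID // ... index the commit ...
	return nil
}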

View File

@@ -44,24 +44,13 @@ type ContentStore struct {
} }
 // Get takes a Meta object and retrieves the content from the store, returning
-// it as an io.Reader. If fromByte > 0, the reader starts from that byte
+// it as an io.ReadSeekCloser.
-func (s *ContentStore) Get(meta *models.LFSMetaObject, fromByte int64) (io.ReadCloser, error) {
+func (s *ContentStore) Get(meta *models.LFSMetaObject) (storage.Object, error) {
 f, err := s.Open(meta.RelativePath())
 if err != nil {
 log.Error("Whilst trying to read LFS OID[%s]: Unable to open Error: %v", meta.Oid, err)
 return nil, err
 }
-if fromByte > 0 {
-if fromByte >= meta.Size {
-return nil, ErrRangeNotSatisfiable{
-FromByte: fromByte,
-}
-}
-_, err = f.Seek(fromByte, io.SeekStart)
-if err != nil {
-log.Error("Whilst trying to read LFS OID[%s]: Unable to seek to %d Error: %v", meta.Oid, fromByte, err)
-}
-}
 return f, err
 }
@@ -74,7 +63,7 @@ func (s *ContentStore) Put(meta *models.LFSMetaObject, r io.Reader) error {
 // now pass the wrapped reader to Save - if there is a size mismatch or hash mismatch then
 // the errors returned by the newHashingReader should percolate up to here
-written, err := s.Save(p, wrappedRd)
+written, err := s.Save(p, wrappedRd, meta.Size)
 if err != nil {
 log.Error("Whilst putting LFS OID[%s]: Failed to copy to tmpPath: %s Error: %v", meta.Oid, p, err)
 return err

View File

@@ -67,5 +67,5 @@ func IsPointerFile(buf *[]byte) *models.LFSMetaObject {
// ReadMetaObject will read a models.LFSMetaObject and return a reader // ReadMetaObject will read a models.LFSMetaObject and return a reader
func ReadMetaObject(meta *models.LFSMetaObject) (io.ReadCloser, error) { func ReadMetaObject(meta *models.LFSMetaObject) (io.ReadCloser, error) {
contentStore := &ContentStore{ObjectStorage: storage.LFS} contentStore := &ContentStore{ObjectStorage: storage.LFS}
-return contentStore.Get(meta, 0)
+return contentStore.Get(meta)
} }

View File

@@ -175,6 +175,11 @@ func getContentHandler(ctx *context.Context) {
 statusCode = 206
 fromByte, _ = strconv.ParseInt(match[1], 10, 32)
+if fromByte >= meta.Size {
+writeStatus(ctx, http.StatusRequestedRangeNotSatisfiable)
+return
+}
 if match[2] != "" {
 _toByte, _ := strconv.ParseInt(match[2], 10, 32)
 if _toByte >= fromByte && _toByte < toByte {
@@ -188,18 +193,24 @@ func getContentHandler(ctx *context.Context) {
 }
 contentStore := &ContentStore{ObjectStorage: storage.LFS}
-content, err := contentStore.Get(meta, fromByte)
+content, err := contentStore.Get(meta)
 if err != nil {
-if IsErrRangeNotSatisfiable(err) {
-writeStatus(ctx, http.StatusRequestedRangeNotSatisfiable)
-} else {
 // Errors are logged in contentStore.Get
-writeStatus(ctx, 404)
+writeStatus(ctx, http.StatusNotFound)
-}
 return
 }
 defer content.Close()
+if fromByte > 0 {
+_, err = content.Seek(fromByte, io.SeekStart)
+if err != nil {
+log.Error("Whilst trying to read LFS OID[%s]: Unable to seek to %d Error: %v", meta.Oid, fromByte, err)
+writeStatus(ctx, http.StatusInternalServerError)
+return
+}
+}
 contentLength := toByte + 1 - fromByte
 ctx.Resp.Header().Set("Content-Length", strconv.FormatInt(contentLength, 10))
 ctx.Resp.Header().Set("Content-Type", "application/octet-stream")

View File

@@ -313,7 +313,7 @@ func RenderEmoji(
return ctx.postProcess(rawHTML) return ctx.postProcess(rawHTML)
} }
-var tagCleaner = regexp.MustCompile(`<((?:/?\w+/\w+)|(?:/[\w ]+/)|(/?[hH][tT][mM][lL][ />])|(/?[hH][eE][aA][dD][ />]))`)
+var tagCleaner = regexp.MustCompile(`<((?:/?\w+/\w+)|(?:/[\w ]+/)|(/?[hH][tT][mM][lL]\b)|(/?[hH][eE][aA][dD]\b))`)
var nulCleaner = strings.NewReplacer("\000", "") var nulCleaner = strings.NewReplacer("\000", "")
func (ctx *postProcessCtx) postProcess(rawHTML []byte) ([]byte, error) { func (ctx *postProcessCtx) postProcess(rawHTML []byte) ([]byte, error) {
@@ -327,7 +327,7 @@ func (ctx *postProcessCtx) postProcess(rawHTML []byte) ([]byte, error) {
_, _ = res.WriteString("<html><body>") _, _ = res.WriteString("<html><body>")
// Strip out nuls - they're always invalid // Strip out nuls - they're always invalid
-_, _ = nulCleaner.WriteString(res, string(tagCleaner.ReplaceAll(rawHTML, []byte("&lt;$1"))))
+_, _ = res.Write(tagCleaner.ReplaceAll([]byte(nulCleaner.Replace(string(rawHTML))), []byte("&lt;$1")))
// close the tags // close the tags
_, _ = res.WriteString("</body></html>") _, _ = res.WriteString("</body></html>")

View File

@@ -124,7 +124,7 @@ func TestRender_links(t *testing.T) {
`<p><a href="http://www.example.com/wpstyle/?p=364" rel="nofollow">http://www.example.com/wpstyle/?p=364</a></p>`) `<p><a href="http://www.example.com/wpstyle/?p=364" rel="nofollow">http://www.example.com/wpstyle/?p=364</a></p>`)
test( test(
"https://www.example.com/foo/?bar=baz&inga=42&quux", "https://www.example.com/foo/?bar=baz&inga=42&quux",
`<p><a href="https://www.example.com/foo/?bar=baz&inga=42&quux=" rel="nofollow">https://www.example.com/foo/?bar=baz&amp;inga=42&amp;quux</a></p>`) `<p><a href="https://www.example.com/foo/?bar=baz&inga=42&quux" rel="nofollow">https://www.example.com/foo/?bar=baz&amp;inga=42&amp;quux</a></p>`)
test( test(
"http://142.42.1.1/", "http://142.42.1.1/",
`<p><a href="http://142.42.1.1/" rel="nofollow">http://142.42.1.1/</a></p>`) `<p><a href="http://142.42.1.1/" rel="nofollow">http://142.42.1.1/</a></p>`)

View File

@@ -46,7 +46,9 @@ func ReplaceSanitizer() {
sanitizer.policy.AllowAttrs("checked", "disabled").OnElements("input") sanitizer.policy.AllowAttrs("checked", "disabled").OnElements("input")
// Custom URL-Schemes // Custom URL-Schemes
if len(setting.Markdown.CustomURLSchemes) > 0 {
sanitizer.policy.AllowURLSchemes(setting.Markdown.CustomURLSchemes...) sanitizer.policy.AllowURLSchemes(setting.Markdown.CustomURLSchemes...)
}
// Allow keyword markup // Allow keyword markup
sanitizer.policy.AllowAttrs("class").Matching(regexp.MustCompile(`^` + keywordClass + `$`)).OnElements("span") sanitizer.policy.AllowAttrs("class").Matching(regexp.MustCompile(`^` + keywordClass + `$`)).OnElements("span")

View File

@@ -6,6 +6,8 @@
package markup package markup
import ( import (
"html/template"
"strings"
"testing" "testing"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
@@ -50,3 +52,13 @@ func Test_Sanitizer(t *testing.T) {
assert.Equal(t, testCases[i+1], string(SanitizeBytes([]byte(testCases[i])))) assert.Equal(t, testCases[i+1], string(SanitizeBytes([]byte(testCases[i]))))
} }
} }
func TestSanitizeNonEscape(t *testing.T) {
descStr := "<scrİpt>&lt;script&gt;alert(document.domain)&lt;/script&gt;</scrİpt>"
output := template.HTML(Sanitize(string(descStr)))
if strings.Contains(string(output), "<script>") {
t.Errorf("un-escaped <script> in output: %q", output)
}
}

View File

@@ -525,9 +525,6 @@ func (g *GiteaDownloader) GetPullRequests(page, perPage int) ([]*base.PullReques
headRepoName = pr.Head.Repository.Name headRepoName = pr.Head.Repository.Name
headCloneURL = pr.Head.Repository.CloneURL headCloneURL = pr.Head.Repository.CloneURL
} }
if err := fixPullHeadSha(g.client, pr); err != nil {
return nil, false, fmt.Errorf("error while resolving head git ref: %s for pull #%d. Error: %v", pr.Head.Ref, pr.Index, err)
}
headSHA = pr.Head.Sha headSHA = pr.Head.Sha
headRef = pr.Head.Ref headRef = pr.Head.Ref
} }
@@ -679,22 +676,3 @@ func (g *GiteaDownloader) GetReviews(index int64) ([]*base.Review, error) {
} }
return allReviews, nil return allReviews, nil
} }
// fixPullHeadSha is a workaround for https://github.com/go-gitea/gitea/issues/12675
// When no head sha is available, this is because the branch got deleted in the base repo.
// pr.Head.Ref points in this case not to the head repo branch name, but the base repo ref,
// which stays available to resolve the commit sha.
func fixPullHeadSha(client *gitea_sdk.Client, pr *gitea_sdk.PullRequest) error {
owner := pr.Base.Repository.Owner.UserName
repo := pr.Base.Repository.Name
if pr.Head != nil && pr.Head.Sha == "" {
refs, _, err := client.GetRepoRefs(owner, repo, pr.Head.Ref)
if err != nil {
return err
} else if len(refs) == 0 {
return fmt.Errorf("unable to resolve PR ref '%s'", pr.Head.Ref)
}
pr.Head.Sha = refs[0].Object.SHA
}
return nil
}

View File

@@ -283,7 +283,7 @@ func (g *GiteaLocalUploader) CreateReleases(releases ...*base.Release) error {
} }
} }
defer rc.Close() defer rc.Close()
-_, err = storage.Attachments.Save(attach.RelativePath(), rc)
+_, err = storage.Attachments.Save(attach.RelativePath(), rc, int64(*asset.Size))
return err return err
}() }()
if err != nil { if err != nil {

View File

@@ -132,6 +132,11 @@ func (g *GithubDownloaderV3) sleep() {
func (g *GithubDownloaderV3) RefreshRate() error { func (g *GithubDownloaderV3) RefreshRate() error {
rates, _, err := g.client.RateLimits(g.ctx) rates, _, err := g.client.RateLimits(g.ctx)
if err != nil { if err != nil {
// if rate limit is not enabled, ignore it
if strings.Contains(err.Error(), "404") {
g.rate = nil
return nil
}
return err return err
} }

View File

@@ -152,7 +152,7 @@ func (m *Manager) GetRedisClient(connection string) redis.UniversalClient {
 opts.Addrs = append(opts.Addrs, strings.Split(uri.Host, ",")...)
 }
 if uri.Path != "" {
-if db, err := strconv.Atoi(uri.Path); err == nil {
+if db, err := strconv.Atoi(uri.Path[1:]); err == nil {
 opts.DB = db
 }
 }
@@ -168,7 +168,7 @@ func (m *Manager) GetRedisClient(connection string) redis.UniversalClient {
 opts.Addrs = append(opts.Addrs, strings.Split(uri.Host, ",")...)
 }
 if uri.Path != "" {
-if db, err := strconv.Atoi(uri.Path); err == nil {
+if db, err := strconv.Atoi(uri.Path[1:]); err == nil {
 opts.DB = db
 }
 }
@@ -186,7 +186,7 @@ func (m *Manager) GetRedisClient(connection string) redis.UniversalClient {
 opts.Addrs = append(opts.Addrs, strings.Split(uri.Host, ",")...)
 }
 if uri.Path != "" {
-if db, err := strconv.Atoi(uri.Path); err == nil {
+if db, err := strconv.Atoi(uri.Path[1:]); err == nil {
 opts.DB = db
 }
 }
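
The fix works because url.Parse keeps the leading slash of the path, so strconv.Atoi("/2") fails and the database index silently stayed at 0. A small sketch of the parsing (TrimPrefix is used here for clarity; the patch indexes Path[1:] after the non-empty check):

package example

import (
	"net/url"
	"strconv"
	"strings"
)

// selectDB extracts the database index from a redis connection string such as
// redis://127.0.0.1:6379/2, where uri.Path is "/2".
func selectDB(connection string) (int, error) {
	uri, err := url.Parse(connection)
	if err != nil {
		return 0, err
	}
	if uri.Path == "" {
		return 0, nil // no path: default DB 0
	}
	return strconv.Atoi(strings.TrimPrefix(uri.Path, "/"))
}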

View File

@@ -332,7 +332,8 @@ func (a *actionNotifier) NotifyPushCommits(pusher *models.User, repo *models.Rep
func (a *actionNotifier) NotifyCreateRef(doer *models.User, repo *models.Repository, refType, refFullName string) { func (a *actionNotifier) NotifyCreateRef(doer *models.User, repo *models.Repository, refType, refFullName string) {
opType := models.ActionCommitRepo opType := models.ActionCommitRepo
if refType == "tag" { if refType == "tag" {
-opType = models.ActionPushTag
+// has sent same action in `NotifyPushCommits`, so skip it.
+return
} }
if err := models.NotifyWatchers(&models.Action{ if err := models.NotifyWatchers(&models.Action{
ActUserID: doer.ID, ActUserID: doer.ID,
@@ -350,7 +351,8 @@ func (a *actionNotifier) NotifyCreateRef(doer *models.User, repo *models.Reposit
func (a *actionNotifier) NotifyDeleteRef(doer *models.User, repo *models.Repository, refType, refFullName string) { func (a *actionNotifier) NotifyDeleteRef(doer *models.User, repo *models.Repository, refType, refFullName string) {
opType := models.ActionDeleteBranch opType := models.ActionDeleteBranch
if refType == "tag" { if refType == "tag" {
-opType = models.ActionDeleteTag
+// has sent same action in `NotifyPushCommits`, so skip it.
+return
} }
if err := models.NotifyWatchers(&models.Action{ if err := models.NotifyWatchers(&models.Action{
ActUserID: doer.ID, ActUserID: doer.ID,

View File

@@ -27,6 +27,7 @@ type Options struct {
// KnownPublicEntries list all direct children in the `public` directory // KnownPublicEntries list all direct children in the `public` directory
var KnownPublicEntries = []string{ var KnownPublicEntries = []string{
"css", "css",
"fonts",
"img", "img",
"js", "js",
"serviceworker.js", "serviceworker.js",
@@ -164,7 +165,7 @@ func (opts *Options) handle(w http.ResponseWriter, req *http.Request, opt *Optio
log.Println("[Static] Serving " + file) log.Println("[Static] Serving " + file)
} }
-if httpcache.HandleEtagCache(req, w, fi) {
+if httpcache.HandleFileETagCache(req, w, fi) {
return true return true
} }

View File

@@ -174,6 +174,7 @@ func (m *Manager) FlushAll(baseCtx context.Context, timeout time.Duration) error
default: default:
} }
mqs := m.ManagedQueues() mqs := m.ManagedQueues()
log.Debug("Found %d Managed Queues", len(mqs))
wg := sync.WaitGroup{} wg := sync.WaitGroup{}
wg.Add(len(mqs)) wg.Add(len(mqs))
allEmpty := true allEmpty := true
@@ -184,6 +185,7 @@ func (m *Manager) FlushAll(baseCtx context.Context, timeout time.Duration) error
} }
allEmpty = false allEmpty = false
if flushable, ok := mq.Managed.(Flushable); ok { if flushable, ok := mq.Managed.(Flushable); ok {
log.Debug("Flushing (flushable) queue: %s", mq.Name)
go func(q *ManagedQueue) { go func(q *ManagedQueue) {
localCtx, localCancel := context.WithCancel(ctx) localCtx, localCancel := context.WithCancel(ctx)
pid := q.RegisterWorkers(1, start, hasTimeout, end, localCancel, true) pid := q.RegisterWorkers(1, start, hasTimeout, end, localCancel, true)
@@ -196,7 +198,11 @@ func (m *Manager) FlushAll(baseCtx context.Context, timeout time.Duration) error
wg.Done() wg.Done()
}(mq) }(mq)
} else { } else {
log.Debug("Queue: %s is non-empty but is not flushable - adding 100 millisecond wait", mq.Name)
go func() {
<-time.After(100 * time.Millisecond)
wg.Done() wg.Done()
}()
} }
} }

View File

@@ -114,41 +114,71 @@ func (q *ByteFIFOQueue) Run(atShutdown, atTerminate func(context.Context, func()
} }
 func (q *ByteFIFOQueue) readToChan() {
-for {
+// handle quick cancels
 select {
 case <-q.closed:
 // tell the pool to shutdown.
 q.cancel()
 return
 default:
-q.lock.Lock()
-bs, err := q.byteFIFO.Pop()
-if err != nil {
-q.lock.Unlock()
-log.Error("%s: %s Error on Pop: %v", q.typ, q.name, err)
-time.Sleep(time.Millisecond * 100)
-continue
 }
-if len(bs) == 0 {
+backOffTime := time.Millisecond * 100
-q.lock.Unlock()
+maxBackOffTime := time.Second * 3
-time.Sleep(time.Millisecond * 100)
+for {
-continue
+success, resetBackoff := q.doPop()
+if resetBackoff {
+backOffTime = 100 * time.Millisecond
 }
+if success {
+select {
+case <-q.closed:
+// tell the pool to shutdown.
+q.cancel()
+return
+default:
+}
+} else {
+select {
+case <-q.closed:
+// tell the pool to shutdown.
+q.cancel()
+return
+case <-time.After(backOffTime):
+}
+backOffTime += backOffTime / 2
+if backOffTime > maxBackOffTime {
+backOffTime = maxBackOffTime
+}
+}
+}
+}
+func (q *ByteFIFOQueue) doPop() (success, resetBackoff bool) {
+q.lock.Lock()
+defer q.lock.Unlock()
+bs, err := q.byteFIFO.Pop()
+if err != nil {
+log.Error("%s: %s Error on Pop: %v", q.typ, q.name, err)
+return
+}
+if len(bs) == 0 {
+return
+}
+resetBackoff = true
 data, err := unmarshalAs(bs, q.exemplar)
 if err != nil {
 log.Error("%s: %s Failed to unmarshal with error: %v", q.typ, q.name, err)
-q.lock.Unlock()
+return
-time.Sleep(time.Millisecond * 100)
-continue
 }
 log.Trace("%s %s: Task found: %#v", q.typ, q.name, data)
 q.WorkerPool.Push(data)
-q.lock.Unlock()
+success = true
-}
+return
+}
 }
 // Shutdown processing from this queue
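
readToChan now delegates the locking and Pop to doPop and, when the FIFO is empty, waits with a backoff that grows by half each round (from 100ms up to a 3s cap) and resets as soon as work is found. A stand-alone sketch of that scheme with a caller-supplied poll function (names are illustrative):

package example

import (
	"context"
	"time"
)

// pollWithBackoff polls until ctx is cancelled, sleeping with exponential
// backoff between empty polls and resetting the delay once something is found.
func pollWithBackoff(ctx context.Context, poll func() (found bool)) {
	const (
		initial = 100 * time.Millisecond
		max     = 3 * time.Second
	)
	backoff := initial
	for {
		if poll() {
			backoff = initial // work was found: check for shutdown, then poll again immediately
			select {
			case <-ctx.Done():
				return
			default:
			}
			continue
		}
		select {
		case <-ctx.Done():
			return
		case <-time.After(backoff):
		}
		if backoff += backoff / 2; backoff > max {
			backoff = max
		}
	}
}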

View File

@@ -99,42 +99,10 @@ func UploadRepoFiles(repo *models.Repository, doer *models.User, opts *UploadRep
} }
// Copy uploaded files into repository. // Copy uploaded files into repository.
-for i, uploadInfo := range infos {
+for i := range infos {
-file, err := os.Open(uploadInfo.upload.LocalPath())
+if err := copyUploadedLFSFileIntoRepository(&infos[i], filename2attribute2info, t, opts.TreePath); err != nil {
if err != nil {
return err return err
} }
defer file.Close()
var objectHash string
if setting.LFS.StartServer && filename2attribute2info[uploadInfo.upload.Name] != nil && filename2attribute2info[uploadInfo.upload.Name]["filter"] == "lfs" {
// Handle LFS
// FIXME: Inefficient! this should probably happen in models.Upload
oid, err := models.GenerateLFSOid(file)
if err != nil {
return err
}
fileInfo, err := file.Stat()
if err != nil {
return err
}
uploadInfo.lfsMetaObject = &models.LFSMetaObject{Oid: oid, Size: fileInfo.Size(), RepositoryID: t.repo.ID}
if objectHash, err = t.HashObject(strings.NewReader(uploadInfo.lfsMetaObject.Pointer())); err != nil {
return err
}
infos[i] = uploadInfo
} else if objectHash, err = t.HashObject(file); err != nil {
return err
}
// Add the object to the index
if err := t.AddObjectToIndex("100644", objectHash, path.Join(opts.TreePath, uploadInfo.upload.Name)); err != nil {
return err
}
} }
// Now write the tree // Now write the tree
@@ -154,11 +122,11 @@ func UploadRepoFiles(repo *models.Repository, doer *models.User, opts *UploadRep
} }
// Now deal with LFS objects // Now deal with LFS objects
-for _, uploadInfo := range infos {
+for i := range infos {
-if uploadInfo.lfsMetaObject == nil {
+if infos[i].lfsMetaObject == nil {
 continue
 }
-uploadInfo.lfsMetaObject, err = models.NewLFSMetaObject(uploadInfo.lfsMetaObject)
+infos[i].lfsMetaObject, err = models.NewLFSMetaObject(infos[i].lfsMetaObject)
if err != nil { if err != nil {
// OK Now we need to cleanup // OK Now we need to cleanup
return cleanUpAfterFailure(&infos, t, err) return cleanUpAfterFailure(&infos, t, err)
@@ -170,28 +138,10 @@ func UploadRepoFiles(repo *models.Repository, doer *models.User, opts *UploadRep
// OK now we can insert the data into the store - there's no way to clean up the store // OK now we can insert the data into the store - there's no way to clean up the store
// once it's in there, it's in there. // once it's in there, it's in there.
contentStore := &lfs.ContentStore{ObjectStorage: storage.LFS} contentStore := &lfs.ContentStore{ObjectStorage: storage.LFS}
-for _, uploadInfo := range infos {
+for _, info := range infos {
-if uploadInfo.lfsMetaObject == nil {
+if err := uploadToLFSContentStore(info, contentStore); err != nil {
continue
}
exist, err := contentStore.Exists(uploadInfo.lfsMetaObject)
if err != nil {
return cleanUpAfterFailure(&infos, t, err) return cleanUpAfterFailure(&infos, t, err)
} }
if !exist {
file, err := os.Open(uploadInfo.upload.LocalPath())
if err != nil {
return cleanUpAfterFailure(&infos, t, err)
}
defer file.Close()
// FIXME: Put regenerates the hash and copies the file over.
// I guess this strictly ensures the soundness of the store but this is inefficient.
if err := contentStore.Put(uploadInfo.lfsMetaObject, file); err != nil {
// OK Now we need to cleanup
// Can't clean up the store, once uploaded there they're there.
return cleanUpAfterFailure(&infos, t, err)
}
}
} }
// Then push this tree to NewBranch // Then push this tree to NewBranch
@@ -201,3 +151,62 @@ func UploadRepoFiles(repo *models.Repository, doer *models.User, opts *UploadRep
return models.DeleteUploads(uploads...) return models.DeleteUploads(uploads...)
} }
func copyUploadedLFSFileIntoRepository(info *uploadInfo, filename2attribute2info map[string]map[string]string, t *TemporaryUploadRepository, treePath string) error {
file, err := os.Open(info.upload.LocalPath())
if err != nil {
return err
}
defer file.Close()
var objectHash string
if setting.LFS.StartServer && filename2attribute2info[info.upload.Name] != nil && filename2attribute2info[info.upload.Name]["filter"] == "lfs" {
// Handle LFS
// FIXME: Inefficient! this should probably happen in models.Upload
oid, err := models.GenerateLFSOid(file)
if err != nil {
return err
}
fileInfo, err := file.Stat()
if err != nil {
return err
}
info.lfsMetaObject = &models.LFSMetaObject{Oid: oid, Size: fileInfo.Size(), RepositoryID: t.repo.ID}
if objectHash, err = t.HashObject(strings.NewReader(info.lfsMetaObject.Pointer())); err != nil {
return err
}
} else if objectHash, err = t.HashObject(file); err != nil {
return err
}
// Add the object to the index
return t.AddObjectToIndex("100644", objectHash, path.Join(treePath, info.upload.Name))
}
func uploadToLFSContentStore(info uploadInfo, contentStore *lfs.ContentStore) error {
if info.lfsMetaObject == nil {
return nil
}
exist, err := contentStore.Exists(info.lfsMetaObject)
if err != nil {
return err
}
if !exist {
file, err := os.Open(info.upload.LocalPath())
if err != nil {
return err
}
defer file.Close()
// FIXME: Put regenerates the hash and copies the file over.
// I guess this strictly ensures the soundness of the store but this is inefficient.
if err := contentStore.Put(info.lfsMetaObject, file); err != nil {
// OK Now we need to cleanup
// Can't clean up the store, once uploaded there they're there.
return err
}
}
return nil
}
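
One practical effect of extracting copyUploadedLFSFileIntoRepository and uploadToLFSContentStore is that each defer file.Close() now runs at the end of its own call instead of piling up until UploadRepoFiles returns. The pattern in isolation (hypothetical helper names):

package example

import "os"

// processAll keeps each file open only for its own iteration by pushing the
// loop body, and therefore the defer, into a separate function.
func processAll(paths []string, handle func(*os.File) error) error {
	for _, p := range paths {
		if err := processOne(p, handle); err != nil {
			return err
		}
	}
	return nil
}

func processOne(path string, handle func(*os.File) error) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close() // closes at the end of this call, not at the end of processAll
	return handle(f)
}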

View File

@@ -228,7 +228,7 @@ func ListUnadoptedRepositories(query string, opts *models.ListOptions) ([]string
found := false found := false
repoLoop: repoLoop:
for i, repo := range repos { for i, repo := range repos {
-if repo.Name == name {
+if repo.LowerName == name {
found = true found = true
repos = append(repos[:i], repos[i+1:]...) repos = append(repos[:i], repos[i+1:]...)
break repoLoop break repoLoop

View File

@@ -64,6 +64,12 @@ func ForkRepository(doer, owner *models.User, oldRepo *models.Repository, name,
return err return err
} }
// copy lfs files failure should not be ignored
if err := models.CopyLFS(ctx, repo, oldRepo); err != nil {
rollbackRemoveFn()
return err
}
repoPath := models.RepoPath(owner.Name, repo.Name) repoPath := models.RepoPath(owner.Name, repo.Name)
if stdout, err := git.NewCommand( if stdout, err := git.NewCommand(
"clone", "--bare", oldRepoPath, repoPath). "clone", "--bare", oldRepoPath, repoPath).
@@ -92,6 +98,7 @@ func ForkRepository(doer, owner *models.User, oldRepo *models.Repository, name,
return nil, err return nil, err
} }
// even if below operations failed, it could be ignored. And they will be retried
ctx := models.DefaultDBContext() ctx := models.DefaultDBContext()
if err = repo.UpdateSize(ctx); err != nil { if err = repo.UpdateSize(ctx); err != nil {
log.Error("Failed to update size for repository: %v", err) log.Error("Failed to update size for repository: %v", err)
@@ -100,11 +107,5 @@ func ForkRepository(doer, owner *models.User, oldRepo *models.Repository, name,
log.Error("Copy language stat from oldRepo failed") log.Error("Copy language stat from oldRepo failed")
} }
if err := models.CopyLFS(ctx, repo, oldRepo); err != nil {
if errDelete := models.DeleteRepository(doer, owner.ID, repo.ID); errDelete != nil {
log.Error("Rollback deleteRepository: %v", errDelete)
}
return nil, err
}
return repo, nil return repo, nil
} }

View File

@@ -22,9 +22,53 @@ import (
func getHookTemplates() (hookNames, hookTpls, giteaHookTpls []string) { func getHookTemplates() (hookNames, hookTpls, giteaHookTpls []string) {
hookNames = []string{"pre-receive", "update", "post-receive"} hookNames = []string{"pre-receive", "update", "post-receive"}
hookTpls = []string{ hookTpls = []string{
fmt.Sprintf("#!/usr/bin/env %s\ndata=$(cat)\nexitcodes=\"\"\nhookname=$(basename $0)\nGIT_DIR=${GIT_DIR:-$(dirname $0)}\n\nfor hook in ${GIT_DIR}/hooks/${hookname}.d/*; do\ntest -x \"${hook}\" && test -f \"${hook}\" || continue\necho \"${data}\" | \"${hook}\"\nexitcodes=\"${exitcodes} $?\"\ndone\n\nfor i in ${exitcodes}; do\n[ ${i} -eq 0 ] || exit ${i}\ndone\n", setting.ScriptType), fmt.Sprintf(`#!/usr/bin/env %s
fmt.Sprintf("#!/usr/bin/env %s\nexitcodes=\"\"\nhookname=$(basename $0)\nGIT_DIR=${GIT_DIR:-$(dirname $0)}\n\nfor hook in ${GIT_DIR}/hooks/${hookname}.d/*; do\ntest -x \"${hook}\" && test -f \"${hook}\" || continue\n\"${hook}\" $1 $2 $3\nexitcodes=\"${exitcodes} $?\"\ndone\n\nfor i in ${exitcodes}; do\n[ ${i} -eq 0 ] || exit ${i}\ndone\n", setting.ScriptType), data=$(cat)
fmt.Sprintf("#!/usr/bin/env %s\ndata=$(cat)\nexitcodes=\"\"\nhookname=$(basename $0)\nGIT_DIR=${GIT_DIR:-$(dirname $0)}\n\nfor hook in ${GIT_DIR}/hooks/${hookname}.d/*; do\ntest -x \"${hook}\" && test -f \"${hook}\" || continue\necho \"${data}\" | \"${hook}\"\nexitcodes=\"${exitcodes} $?\"\ndone\n\nfor i in ${exitcodes}; do\n[ ${i} -eq 0 ] || exit ${i}\ndone\n", setting.ScriptType), exitcodes=""
hookname=$(basename $0)
GIT_DIR=${GIT_DIR:-$(dirname $0)/..}
for hook in ${GIT_DIR}/hooks/${hookname}.d/*; do
test -x "${hook}" && test -f "${hook}" || continue
echo "${data}" | "${hook}"
exitcodes="${exitcodes} $?"
done
for i in ${exitcodes}; do
[ ${i} -eq 0 ] || exit ${i}
done
`, setting.ScriptType),
fmt.Sprintf(`#!/usr/bin/env %s
exitcodes=""
hookname=$(basename $0)
GIT_DIR=${GIT_DIR:-$(dirname $0/..)}
for hook in ${GIT_DIR}/hooks/${hookname}.d/*; do
test -x "${hook}" && test -f "${hook}" || continue
"${hook}" $1 $2 $3
exitcodes="${exitcodes} $?"
done
for i in ${exitcodes}; do
[ ${i} -eq 0 ] || exit ${i}
done
`, setting.ScriptType),
fmt.Sprintf(`#!/usr/bin/env %s
data=$(cat)
exitcodes=""
hookname=$(basename $0)
GIT_DIR=${GIT_DIR:-$(dirname $0)/..}
for hook in ${GIT_DIR}/hooks/${hookname}.d/*; do
test -x "${hook}" && test -f "${hook}" || continue
echo "${data}" | "${hook}"
exitcodes="${exitcodes} $?"
done
for i in ${exitcodes}; do
[ ${i} -eq 0 ] || exit ${i}
done
`, setting.ScriptType),
} }
giteaHookTpls = []string{ giteaHookTpls = []string{
fmt.Sprintf("#!/usr/bin/env %s\n%s hook --config=%s pre-receive\n", setting.ScriptType, util.ShellEscape(setting.AppPath), util.ShellEscape(setting.CustomConf)), fmt.Sprintf("#!/usr/bin/env %s\n%s hook --config=%s pre-receive\n", setting.ScriptType, util.ShellEscape(setting.AppPath), util.ShellEscape(setting.CustomConf)),

View File

@@ -66,7 +66,7 @@ func (l *LocalStorage) Open(path string) (Object, error) {
} }
// Save a file // Save a file
-func (l *LocalStorage) Save(path string, r io.Reader) (int64, error) {
+func (l *LocalStorage) Save(path string, r io.Reader, size int64) (int64, error) {
p := filepath.Join(l.dir, path) p := filepath.Join(l.dir, path)
if err := os.MkdirAll(filepath.Dir(p), os.ModePerm); err != nil { if err := os.MkdirAll(filepath.Dir(p), os.ModePerm); err != nil {
return 0, err return 0, err

View File

@@ -131,13 +131,13 @@ func (m *MinioStorage) Open(path string) (Object, error) {
} }
// Save save a file to minio // Save save a file to minio
-func (m *MinioStorage) Save(path string, r io.Reader) (int64, error) {
+func (m *MinioStorage) Save(path string, r io.Reader, size int64) (int64, error) {
uploadInfo, err := m.client.PutObject( uploadInfo, err := m.client.PutObject(
m.ctx, m.ctx,
m.bucket, m.bucket,
m.buildMinioPath(path), m.buildMinioPath(path),
r, r,
--1,
+size,
minio.PutObjectOptions{ContentType: "application/octet-stream"}, minio.PutObjectOptions{ContentType: "application/octet-stream"},
) )
if err != nil { if err != nil {

View File

@@ -65,7 +65,8 @@ type Object interface {
// ObjectStorage represents an object storage to handle a bucket and files // ObjectStorage represents an object storage to handle a bucket and files
type ObjectStorage interface { type ObjectStorage interface {
Open(path string) (Object, error) Open(path string) (Object, error)
-Save(path string, r io.Reader) (int64, error)
+// Save store a object, if size is unknown set -1
+Save(path string, r io.Reader, size int64) (int64, error)
Stat(path string) (os.FileInfo, error) Stat(path string) (os.FileInfo, error)
Delete(path string) error Delete(path string) error
URL(path, name string) (*url.URL, error) URL(path, name string) (*url.URL, error)
@@ -80,7 +81,13 @@ func Copy(dstStorage ObjectStorage, dstPath string, srcStorage ObjectStorage, sr
} }
defer f.Close() defer f.Close()
-return dstStorage.Save(dstPath, f)
+size := int64(-1)
fsinfo, err := f.Stat()
if err == nil {
size = fsinfo.Size()
}
return dstStorage.Save(dstPath, f, size)
} }
// SaveFrom saves data to the ObjectStorage with path p from the callback // SaveFrom saves data to the ObjectStorage with path p from the callback
@@ -94,7 +101,7 @@ func SaveFrom(objStorage ObjectStorage, p string, callback func(w io.Writer) err
} }
}() }()
-_, err := objStorage.Save(p, pr)
+_, err := objStorage.Save(p, pr, -1)
return err return err
} }
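
Passing the size through Save lets the MinIO backend hand PutObject an exact length instead of -1, so it can upload in a single shot rather than falling back to unknown-length handling; callers that cannot determine the size keep passing -1, as Copy does when Stat fails. A hypothetical caller of the new signature:

package example

import (
	"os"

	"code.gitea.io/gitea/modules/storage"
)

// saveFile passes the real size when it is cheaply known and -1 otherwise.
func saveFile(dst storage.ObjectStorage, path, localPath string) error {
	f, err := os.Open(localPath)
	if err != nil {
		return err
	}
	defer f.Close()

	size := int64(-1)
	if fi, err := f.Stat(); err == nil {
		size = fi.Size()
	}
	_, err = dst.Save(path, f, size)
	return err
}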

View File

@@ -29,6 +29,7 @@ type LangType struct {
var ( var (
matcher language.Matcher matcher language.Matcher
allLangs []LangType allLangs []LangType
supportedTags []language.Tag
) )
// AllLangs returns all supported langauages // AllLangs returns all supported langauages
@@ -51,12 +52,12 @@ func InitLocales() {
} }
} }
-tags := make([]language.Tag, len(setting.Langs))
+supportedTags = make([]language.Tag, len(setting.Langs))
 for i, lang := range setting.Langs {
-tags[i] = language.Raw.Make(lang)
+supportedTags[i] = language.Raw.Make(lang)
 }
-matcher = language.NewMatcher(tags)
+matcher = language.NewMatcher(supportedTags)
 for i := range setting.Names {
 key := "locale_" + setting.Langs[i] + ".ini"
 if err = i18n.SetMessageWithDesc(setting.Langs[i], setting.Names[i], localFiles[key]); err != nil {
@@ -79,8 +80,9 @@ func InitLocales() {
 }
 // Match matches accept languages
-func Match(tags ...language.Tag) (tag language.Tag, index int, c language.Confidence) {
+func Match(tags ...language.Tag) language.Tag {
-return matcher.Match(tags...)
+_, i, _ := matcher.Match(tags...)
+return supportedTags[i]
 }
 // locale represents the information of localization.
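
Returning supportedTags[i] instead of the matcher's own tag matters because golang.org/x/text can hand back a synthesized tag (with extension subtags) that does not equal any configured locale string. A small sketch of the same idea (tags are illustrative):

package example

import "golang.org/x/text/language"

// bestSupported returns a tag from the caller's own list, using the matcher
// only to pick the index, mirroring the change to Match above.
func bestSupported(supported []language.Tag, accept ...language.Tag) language.Tag {
	matcher := language.NewMatcher(supported)
	_, i, _ := matcher.Match(accept...)
	return supported[i]
}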

View File

@@ -38,7 +38,7 @@ func Locale(resp http.ResponseWriter, req *http.Request) translation.Locale {
// The first element in the list is chosen to be the default language automatically. // The first element in the list is chosen to be the default language automatically.
if len(lang) == 0 { if len(lang) == 0 {
tags, _, _ := language.ParseAcceptLanguage(req.Header.Get("Accept-Language")) tags, _, _ := language.ParseAcceptLanguage(req.Header.Get("Accept-Language"))
-tag, _, _ := translation.Match(tags...)
+tag := translation.Match(tags...)
lang = tag.String() lang = tag.String()
} }

View File

@@ -5,6 +5,7 @@
package web package web
import ( import (
goctx "context"
"fmt" "fmt"
"net/http" "net/http"
"reflect" "reflect"
@@ -27,6 +28,7 @@ func Wrap(handlers ...interface{}) http.HandlerFunc {
switch t := handler.(type) { switch t := handler.(type) {
case http.HandlerFunc, func(http.ResponseWriter, *http.Request), case http.HandlerFunc, func(http.ResponseWriter, *http.Request),
func(ctx *context.Context), func(ctx *context.Context),
func(ctx *context.Context) goctx.CancelFunc,
func(*context.APIContext), func(*context.APIContext),
func(*context.PrivateContext), func(*context.PrivateContext),
func(http.Handler) http.Handler: func(http.Handler) http.Handler:
@@ -48,6 +50,15 @@ func Wrap(handlers ...interface{}) http.HandlerFunc {
if r, ok := resp.(context.ResponseWriter); ok && r.Status() > 0 { if r, ok := resp.(context.ResponseWriter); ok && r.Status() > 0 {
return return
} }
case func(ctx *context.Context) goctx.CancelFunc:
ctx := context.GetContext(req)
cancel := t(ctx)
if cancel != nil {
defer cancel()
}
if ctx.Written() {
return
}
case func(ctx *context.Context): case func(ctx *context.Context):
ctx := context.GetContext(req) ctx := context.GetContext(req)
t(ctx) t(ctx)
@@ -68,10 +79,11 @@ func Wrap(handlers ...interface{}) http.HandlerFunc {
} }
case func(http.Handler) http.Handler: case func(http.Handler) http.Handler:
var next = http.HandlerFunc(func(http.ResponseWriter, *http.Request) {}) var next = http.HandlerFunc(func(http.ResponseWriter, *http.Request) {})
-t(next).ServeHTTP(resp, req)
+if len(handlers) > i+1 {
-if r, ok := resp.(context.ResponseWriter); ok && r.Status() > 0 {
+next = Wrap(handlers[i+1:]...)
-return
 }
+t(next).ServeHTTP(resp, req)
+return
default: default:
panic(fmt.Sprintf("Unsupported handler type: %#v", t)) panic(fmt.Sprintf("Unsupported handler type: %#v", t))
} }
@@ -93,6 +105,23 @@ func Middle(f func(ctx *context.Context)) func(netx http.Handler) http.Handler {
} }
} }
// MiddleCancel wrap a context function as a chi middleware
func MiddleCancel(f func(ctx *context.Context) goctx.CancelFunc) func(netx http.Handler) http.Handler {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(resp http.ResponseWriter, req *http.Request) {
ctx := context.GetContext(req)
cancel := f(ctx)
if cancel != nil {
defer cancel()
}
if ctx.Written() {
return
}
next.ServeHTTP(ctx.Resp, ctx.Req)
})
}
}
// MiddleAPI wrap a context function as a chi middleware // MiddleAPI wrap a context function as a chi middleware
func MiddleAPI(f func(ctx *context.APIContext)) func(netx http.Handler) http.Handler { func MiddleAPI(f func(ctx *context.APIContext)) func(netx http.Handler) http.Handler {
return func(next http.Handler) http.Handler { return func(next http.Handler) http.Handler {
@@ -162,6 +191,8 @@ func (r *Route) Use(middlewares ...interface{}) {
r.R.Use(t) r.R.Use(t)
case func(*context.Context): case func(*context.Context):
r.R.Use(Middle(t)) r.R.Use(Middle(t))
case func(*context.Context) goctx.CancelFunc:
r.R.Use(MiddleCancel(t))
case func(*context.APIContext): case func(*context.APIContext):
r.R.Use(MiddleAPI(t)) r.R.Use(MiddleAPI(t))
default: default:
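
Handlers registered through Wrap or Use may now return a goctx.CancelFunc, which the router defers until the rest of the chain has run; that gives route setup a natural place to release resources opened for the request. A hypothetical handler in that shape (the repository-opening details are only an example):

package example

import (
	goctx "context"

	"code.gitea.io/gitea/modules/context"
	"code.gitea.io/gitea/modules/git"
)

// openRepoMiddleware opens the git repository for the request and hands back
// the cleanup that Wrap/MiddleCancel will defer after later handlers finish.
func openRepoMiddleware(ctx *context.Context) goctx.CancelFunc {
	gitRepo, err := git.OpenRepository(ctx.Repo.Repository.RepoPath())
	if err != nil {
		ctx.ServerError("OpenRepository", err)
		return nil
	}
	ctx.Repo.GitRepo = gitRepo
	return func() {
		gitRepo.Close()
	}
}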

View File

@@ -716,7 +716,7 @@ func Routes() *web.Route {
m.Group("/{username}/{reponame}", func() { m.Group("/{username}/{reponame}", func() {
m.Combo("").Get(reqAnyRepoReader(), repo.Get). m.Combo("").Get(reqAnyRepoReader(), repo.Get).
Delete(reqToken(), reqOwner(), repo.Delete). Delete(reqToken(), reqOwner(), repo.Delete).
-Patch(reqToken(), reqAdmin(), context.RepoRefForAPI, bind(api.EditRepoOption{}), repo.Edit)
+Patch(reqToken(), reqAdmin(), bind(api.EditRepoOption{}), repo.Edit)
m.Post("/transfer", reqOwner(), bind(api.TransferRepoOption{}), repo.Transfer) m.Post("/transfer", reqOwner(), bind(api.TransferRepoOption{}), repo.Transfer)
m.Combo("/notifications"). m.Combo("/notifications").
Get(reqToken(), notify.ListRepoNotifications). Get(reqToken(), notify.ListRepoNotifications).

View File

@@ -16,7 +16,6 @@ import (
"code.gitea.io/gitea/modules/context" "code.gitea.io/gitea/modules/context"
"code.gitea.io/gitea/modules/convert" "code.gitea.io/gitea/modules/convert"
issue_indexer "code.gitea.io/gitea/modules/indexer/issues" issue_indexer "code.gitea.io/gitea/modules/indexer/issues"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/notification" "code.gitea.io/gitea/modules/notification"
"code.gitea.io/gitea/modules/setting" "code.gitea.io/gitea/modules/setting"
api "code.gitea.io/gitea/modules/structs" api "code.gitea.io/gitea/modules/structs"
@@ -113,11 +112,7 @@ func SearchIssues(ctx *context.APIContext) {
} }
// find repos user can access (for issue search) // find repos user can access (for issue search)
repoIDs := make([]int64, 0)
opts := &models.SearchRepoOptions{ opts := &models.SearchRepoOptions{
ListOptions: models.ListOptions{
PageSize: 15,
},
Private: false, Private: false,
AllPublic: true, AllPublic: true,
TopicOnly: false, TopicOnly: false,
@@ -132,23 +127,12 @@ func SearchIssues(ctx *context.APIContext) {
opts.AllLimited = true opts.AllLimited = true
} }
-for page := 1; ; page++ {
+repoIDs, _, err := models.SearchRepositoryIDs(opts)
-opts.Page = page
-repos, count, err := models.SearchRepositoryByName(opts)
if err != nil { if err != nil {
ctx.Error(http.StatusInternalServerError, "SearchRepositoryByName", err) ctx.Error(http.StatusInternalServerError, "SearchRepositoryByName", err)
return return
} }
if len(repos) == 0 {
break
}
log.Trace("Processing next %d repos of %d", len(repos), count)
for _, repo := range repos {
repoIDs = append(repoIDs, repo.ID)
}
}
var issues []*models.Issue var issues []*models.Issue
var filteredCount int64 var filteredCount int64
@@ -157,7 +141,6 @@ func SearchIssues(ctx *context.APIContext) {
keyword = "" keyword = ""
} }
var issueIDs []int64 var issueIDs []int64
var labelIDs []int64
if len(keyword) > 0 && len(repoIDs) > 0 { if len(keyword) > 0 && len(repoIDs) > 0 {
if issueIDs, err = issue_indexer.SearchIssuesByKeyword(repoIDs, keyword); err != nil { if issueIDs, err = issue_indexer.SearchIssuesByKeyword(repoIDs, keyword); err != nil {
ctx.Error(http.StatusInternalServerError, "SearchIssuesByKeyword", err) ctx.Error(http.StatusInternalServerError, "SearchIssuesByKeyword", err)
@@ -192,7 +175,7 @@ func SearchIssues(ctx *context.APIContext) {
// Only fetch the issues if we either don't have a keyword or the search returned issues // Only fetch the issues if we either don't have a keyword or the search returned issues
// This would otherwise return all issues if no issues were found by the search. // This would otherwise return all issues if no issues were found by the search.
-if len(keyword) == 0 || len(issueIDs) > 0 || len(labelIDs) > 0 {
+if len(keyword) == 0 || len(issueIDs) > 0 || len(includedLabelNames) > 0 {
issuesOpt := &models.IssuesOptions{ issuesOpt := &models.IssuesOptions{
ListOptions: models.ListOptions{ ListOptions: models.ListOptions{
Page: ctx.QueryInt("page"), Page: ctx.QueryInt("page"),

View File

@@ -202,8 +202,8 @@ func CreateRelease(ctx *context.APIContext) {
rel.Repo = ctx.Repo.Repository rel.Repo = ctx.Repo.Repository
rel.Publisher = ctx.User rel.Publisher = ctx.User
-if err = releaseservice.UpdateReleaseOrCreatReleaseFromTag(ctx.User, ctx.Repo.GitRepo, rel, nil, true); err != nil {
+if err = releaseservice.UpdateRelease(ctx.User, ctx.Repo.GitRepo, rel, nil, nil, nil); err != nil {
-ctx.Error(http.StatusInternalServerError, "UpdateReleaseOrCreatReleaseFromTag", err)
+ctx.Error(http.StatusInternalServerError, "UpdateRelease", err)
return return
} }
} }
@@ -277,8 +277,8 @@ func EditRelease(ctx *context.APIContext) {
if form.IsPrerelease != nil { if form.IsPrerelease != nil {
rel.IsPrerelease = *form.IsPrerelease rel.IsPrerelease = *form.IsPrerelease
} }
-if err := releaseservice.UpdateReleaseOrCreatReleaseFromTag(ctx.User, ctx.Repo.GitRepo, rel, nil, false); err != nil {
+if err := releaseservice.UpdateRelease(ctx.User, ctx.Repo.GitRepo, rel, nil, nil, nil); err != nil {
-ctx.Error(http.StatusInternalServerError, "UpdateReleaseOrCreatReleaseFromTag", err)
+ctx.Error(http.StatusInternalServerError, "UpdateRelease", err)
return return
} }

View File

@@ -578,7 +578,7 @@ func updateBasicProperties(ctx *context.APIContext, opts api.EditRepoOption) err
repo.IsTemplate = *opts.Template repo.IsTemplate = *opts.Template
} }
-if ctx.Repo.GitRepo == nil {
+if ctx.Repo.GitRepo == nil && !repo.IsEmpty {
var err error var err error
ctx.Repo.GitRepo, err = git.OpenRepository(ctx.Repo.Repository.RepoPath()) ctx.Repo.GitRepo, err = git.OpenRepository(ctx.Repo.Repository.RepoPath())
if err != nil { if err != nil {
@@ -589,15 +589,15 @@ func updateBasicProperties(ctx *context.APIContext, opts api.EditRepoOption) err
} }
// Default branch only updated if changed and exist or the repository is empty // Default branch only updated if changed and exist or the repository is empty
-if opts.DefaultBranch != nil &&
-repo.DefaultBranch != *opts.DefaultBranch &&
-(ctx.Repo.Repository.IsEmpty || ctx.Repo.GitRepo.IsBranchExist(*opts.DefaultBranch)) {
+if opts.DefaultBranch != nil && repo.DefaultBranch != *opts.DefaultBranch && (repo.IsEmpty || ctx.Repo.GitRepo.IsBranchExist(*opts.DefaultBranch)) {
+if !repo.IsEmpty {
if err := ctx.Repo.GitRepo.SetDefaultBranch(*opts.DefaultBranch); err != nil { if err := ctx.Repo.GitRepo.SetDefaultBranch(*opts.DefaultBranch); err != nil {
if !git.IsErrUnsupportedVersion(err) { if !git.IsErrUnsupportedVersion(err) {
ctx.Error(http.StatusInternalServerError, "SetDefaultBranch", err) ctx.Error(http.StatusInternalServerError, "SetDefaultBranch", err)
return err return err
} }
} }
}
repo.DefaultBranch = *opts.DefaultBranch repo.DefaultBranch = *opts.DefaultBranch
} }

View File

@@ -274,7 +274,11 @@ func DeleteOauth2Application(ctx *context.APIContext) {
// "$ref": "#/responses/empty" // "$ref": "#/responses/empty"
appID := ctx.ParamsInt64(":id") appID := ctx.ParamsInt64(":id")
if err := models.DeleteOAuth2Application(appID, ctx.User.ID); err != nil { if err := models.DeleteOAuth2Application(appID, ctx.User.ID); err != nil {
if models.IsErrOAuthApplicationNotFound(err) {
ctx.NotFound()
} else {
ctx.Error(http.StatusInternalServerError, "DeleteOauth2ApplicationByID", err) ctx.Error(http.StatusInternalServerError, "DeleteOauth2ApplicationByID", err)
}
return return
} }

View File

@@ -30,6 +30,17 @@ func Events(ctx *context.Context) {
ctx.Resp.Header().Set("X-Accel-Buffering", "no") ctx.Resp.Header().Set("X-Accel-Buffering", "no")
ctx.Resp.WriteHeader(http.StatusOK) ctx.Resp.WriteHeader(http.StatusOK)
if !ctx.IsSigned {
// Return unauthorized status event
event := (&eventsource.Event{
Name: "close",
Data: "unauthorized",
})
_, _ = event.WriteTo(ctx)
ctx.Resp.Flush()
return
}
// Listen to connection close and un-register messageChan // Listen to connection close and un-register messageChan
notify := ctx.Req.Context().Done() notify := ctx.Req.Context().Done()
ctx.Resp.Flush() ctx.Resp.Flush()

View File

@@ -10,6 +10,7 @@ import (
"code.gitea.io/gitea/models" "code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/context" "code.gitea.io/gitea/modules/context"
"code.gitea.io/gitea/modules/httpcache"
"code.gitea.io/gitea/modules/log" "code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/setting" "code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/storage" "code.gitea.io/gitea/modules/storage"
@@ -124,21 +125,25 @@ func GetAttachment(ctx *context.Context) {
} }
} }
if err := attach.IncreaseDownloadCount(); err != nil {
ctx.ServerError("IncreaseDownloadCount", err)
return
}
if setting.Attachment.ServeDirect { if setting.Attachment.ServeDirect {
//If we have a signed url (S3, object storage), redirect to this directly. //If we have a signed url (S3, object storage), redirect to this directly.
u, err := storage.Attachments.URL(attach.RelativePath(), attach.Name) u, err := storage.Attachments.URL(attach.RelativePath(), attach.Name)
if u != nil && err == nil { if u != nil && err == nil {
if err := attach.IncreaseDownloadCount(); err != nil {
ctx.ServerError("Update", err)
return
}
ctx.Redirect(u.String()) ctx.Redirect(u.String())
return return
} }
} }
if httpcache.HandleGenericETagCache(ctx.Req, ctx.Resp, `"`+attach.UUID+`"`) {
return
}
//If we have matched and access to release or issue //If we have matched and access to release or issue
fr, err := storage.Attachments.Open(attach.RelativePath()) fr, err := storage.Attachments.Open(attach.RelativePath())
if err != nil { if err != nil {
@@ -147,11 +152,6 @@ func GetAttachment(ctx *context.Context) {
} }
defer fr.Close() defer fr.Close()
if err := attach.IncreaseDownloadCount(); err != nil {
ctx.ServerError("Update", err)
return
}
if err = ServeData(ctx, attach.Name, attach.Size, fr); err != nil { if err = ServeData(ctx, attach.Name, attach.Size, fr); err != nil {
ctx.ServerError("ServeData", err) ctx.ServerError("ServeData", err)
return return

Some files were not shown because too many files have changed in this diff.