Compare commits

..

3 Commits

Author          SHA1        Message                                     Date
GiteaBot        e4a3785218  [skip ci] Updated translations via Crowdin  2020-10-14 21:45:21 +00:00
techknowlogick  76ac83402b  Clean up mysql service in drone (#13145)    2020-10-14 17:44:18 -04:00
GiteaBot        07c9f6dca4  [skip ci] Updated translations via Crowdin  2020-10-14 18:49:08 +00:00
250 changed files with 1503 additions and 4204 deletions

View File

@@ -666,6 +666,7 @@ steps:
event: event:
exclude: exclude:
- pull_request - pull_request
--- ---
kind: pipeline kind: pipeline
name: docker-linux-arm64-dry-run name: docker-linux-arm64-dry-run
@@ -695,9 +696,6 @@ steps:
tags: linux-arm64 tags: linux-arm64
build_args: build_args:
- GOPROXY=off - GOPROXY=off
environment:
PLUGIN_MIRROR:
from_secret: plugin_mirror
when: when:
event: event:
- pull_request - pull_request
@@ -742,13 +740,11 @@ steps:
from_secret: docker_password from_secret: docker_password
username: username:
from_secret: docker_username from_secret: docker_username
environment:
PLUGIN_MIRROR:
from_secret: plugin_mirror
when: when:
event: event:
exclude: exclude:
- pull_request - pull_request
--- ---
kind: pipeline kind: pipeline
name: docker-manifest name: docker-manifest

View File

@@ -4,85 +4,14 @@ This changelog goes through all the changes that have been made in each release
without substantial changes to our git log; to see the highlights of what has without substantial changes to our git log; to see the highlights of what has
been added to each release, please refer to the [blog](https://blog.gitea.io). been added to each release, please refer to the [blog](https://blog.gitea.io).
## [1.13.2](https://github.com/go-gitea/gitea/releases/tag/v1.13.2) - 2021-01-31 ## [1.13.0-RC1](https://github.com/go-gitea/gitea/releases/tag/v1.13.0-RC1) - 2020-10-14
* SECURITY * SECURITY
* Prevent panic on fuzzer provided string (#14405) (#14409)
* Add secure/httpOnly attributes to the lang cookie (#14279) (#14280)
* API
* If release publisher is deleted use ghost user (#14375)
* BUGFIXES
* Internal ssh server respect Ciphers, MACs and KeyExchanges settings (#14523) (#14530)
* Set the name Mapper in migrations (#14526) (#14529)
* Fix wiki preview (#14515)
* Update code.gitea.io/sdk/gitea v0.13.1 -> v0.13.2 (#14497)
* ChangeUserName: rename user files back on DB issue (#14447)
* Fix lfs preview bug (#14428) (#14433)
* Ensure timeout error is shown on u2f timeout (#14417) (#14431)
* Fix Deadlock & Delete affected reactions on comment deletion (#14392) (#14425)
* Use path not filepath in routers/editor (#14390) (#14396)
* Check if label template exist first (#14384) (#14389)
* Fix migration v141 (#14387) (#14388)
* Use Request.URL.RequestURI() for fcgi (#14347)
* Use ServerError provided by Context (#14333) (#14345)
* Fix edit-label form init (#14337)
* Fix mailIssueCommentBatch for pull request (#14252) (#14296)
* Render links for commit hashes followed by comma (#14224) (#14227)
* Send notifications for mentions in pulls, issues, (code-)comments (#14218) (#14221)
* Fix avatar bugs (#14217) (#14220)
* Ensure that schema search path is set with every connection on postgres (#14131) (#14216)
* Fix dashboard issues labels filter bug (#14210) (#14214)
* When visit /favicon.ico but the static file is not exist return 404 but not continue to handle the route (#14211) (#14213)
* Fix branch selector on new issue page (#14194) (#14207)
* Check for notExist on profile repository page (#14197) (#14203)
## [1.13.1](https://github.com/go-gitea/gitea/releases/tag/v1.13.1) - 2020-12-29
* SECURITY
* Hide private participation in Orgs (#13994) (#14031)
* Fix escaping issue in diff (#14153) (#14154)
* BUGFIXES
* Fix bug of link query order on markdown render (#14156) (#14171)
* Drop long repo topics during migration (#14152) (#14155)
* Ensure that search term and page are not lost on adoption page-turn (#14133) (#14143)
* Fix storage config implementation (#14091) (#14095)
* Fix panic in BasicAuthDecode (#14046) (#14048)
* Always wait for the cmd to finish (#14006) (#14039)
* Don't use simpleMDE editor on mobile devices for 1.13 (#14029)
* Fix incorrect review comment diffs (#14002) (#14011)
* Trim the branch prefix from action.GetBranch (#13981) (#13986)
* Ensure template renderer is available before storage handler (#13164) (#13982)
* Whenever the password is updated ensure that the hash algorithm is too (#13966) (#13967)
* Enforce setting HEAD in wiki to master (#13950) (#13961)
* Fix feishu webhook caused by API changed (#13938)
* Fix Quote Reply button on review diff (#13830) (#13898)
* Fix Pull Merge when tag with same name as base branch exist (#13882) (#13896)
* Fix mermaid chart size (#13865)
* Fix branch/tag notifications in mirror sync (#13855) (#13862)
* Fix crash in short link processor (#13839) (#13841)
* Update font stack to bootstrap's latest (#13834) (#13837)
* Make sure email recipients can see issue (#13820) (#13827)
* Reply button is not removed when deleting a code review comment (#13824)
* When reinitialising DBConfig reset the database use flags (#13796) (#13811)
* ENHANCEMENTS
* Add emoji in label to project boards (#13978) (#14021)
* Send webhook when tag is removed via Web UI (#14015) (#14019)
* Use Process Manager to create own Context (#13792) (#13793)
* API
* GetCombinedCommitStatusByRef always return json & swagger doc fixes (#14047)
* Return original URL of Repositories (#13885) (#13886)
## [1.13.0](https://github.com/go-gitea/gitea/releases/tag/v1.13.0) - 2020-12-01
* SECURITY
* Add Allow-/Block-List for Migrate & Mirrors (#13610) (#13776)
* Prevent git operations for inactive users (#13527) (#13536)
* Disallow urlencoded new lines in git protocol paths if there is a port (#13521) (#13524)
* Mitigate Security vulnerability in the git hook feature (#13058) * Mitigate Security vulnerability in the git hook feature (#13058)
* Disable DSA ssh keys by default (#13056) * Disable DSA ssh keys by default (#13056)
* Set TLS minimum version to 1.2 (#12689) * Set TLS minimum version to 1.2 (#12689)
* Use argon as default password hash algorithm (#12688) * Use argon as default password hash algorithm (#12688)
* BREAKING * BREAKING
* Set RUN_MODE prod by default (#13765) (#13767)
* Don't replace underscores in auto-generated IDs in goldmark (#12805) * Don't replace underscores in auto-generated IDs in goldmark (#12805)
* Add Primary Key to Topic and RepoTopic tables (#12639) * Add Primary Key to Topic and RepoTopic tables (#12639)
* Disable password complexity check default (#12557) * Disable password complexity check default (#12557)
@@ -142,40 +71,6 @@ been added to each release, please refer to the [blog](https://blog.gitea.io).
* Add endpoint for Branch Creation (#11607) * Add endpoint for Branch Creation (#11607)
* Add pagination headers on endpoints that support total count from database (#11145) * Add pagination headers on endpoints that support total count from database (#11145)
* BUGFIXES * BUGFIXES
* Fix bogus http requests on diffs (#13760) (#13761)
* Show 'owner' tag for real owner (#13689) (#13743)
* Validate email before inserting/updating (#13475) (#13666)
* Fix issue/pull request list assignee filter (#13647) (#13651)
* Gitlab migration support for subdirectories (#13563) (#13591)
* Fix logic for preferred license setting (#13550) (#13557)
* Add missed sync branch/tag webhook (#13538) (#13556)
* Migration won't fail on non-migrated reactions (#13507)
* Fix Italian language file parsing error (#13156)
* Show outdated comments in pull request (#13148) (#13162)
* Fix parsing of pre-release git version (#13169) (#13172)
* Fix diff skipping lines (#13154) (#13155)
* When handling errors in storageHandler check underlying error (#13178) (#13193)
* Fix size and clickable area on file table back link (#13205) (#13207)
* Add better error checking for inline html diff code (#13251)
* Fix initial commit page & binary munching problem (#13249) (#13258)
* Fix migrations from remote Gitea instances when configuration not set (#13229) (#13273)
* Store task errors following migrations and display them (#13246) (#13287)
* Fix bug isEnd detection on getIssues/getPullRequests (#13299) (#13301)
* When the git ref is unable to be found return broken pr (#13218) (#13303)
* Ensure topics added using the API are added to the repository (#13285) (#13302)
* Fix avatar autogeneration (#13233) (#13282)
* Add migrated pulls to pull request task queue (#13331) (#13334)
* Issue comment reactions should also check pull type on API (#13349) (#13350)
* Fix links to repositories in /user/setting/repos (#13360) (#13362)
* Remove obsolete change of email on profile page (#13341) (#13347)
* Fix scrolling to resolved comment anchors (#13343) (#13371)
* Storage configuration support `[storage]` (#13314) (#13379)
* When creating line diffs do not split within an html entity (#13357) (#13375) (#13425) (#13427)
* Fix reactions on code comments (#13390) (#13401)
* Add missing full names when DEFAULT_SHOW_FULL_NAME is enabled (#13424)
* Replies to outdated code comments should also be outdated (#13217) (#13433)
* Fix panic bug in handling multiple references in commit (#13486) (#13487)
* Prevent panic on git blame by limiting lines to 4096 bytes at most (#13470) (#13491)
* Show original author's reviews on pull summary box (#13127) * Show original author's reviews on pull summary box (#13127)
* Update golangci-lint to version 1.31.0 (#13102) * Update golangci-lint to version 1.31.0 (#13102)
* Fix line break for MS teams webhook (#13081) * Fix line break for MS teams webhook (#13081)
@@ -245,10 +140,6 @@ been added to each release, please refer to the [blog](https://blog.gitea.io).
* Fix Enter not working in SimpleMDE (#11564) * Fix Enter not working in SimpleMDE (#11564)
* Fix bug about can't skip commits base on base branch (#11555) * Fix bug about can't skip commits base on base branch (#11555)
* ENHANCEMENTS * ENHANCEMENTS
* Only Return JSON for responses (#13511) (#13565)
* Use existing analyzer module for language detection for highlighting (#13522) (#13551)
* Return the full rejection message and errors in flash errors (#13221) (#13237)
* Remove PAM from auth dropdown when unavailable (#13276) (#13281)
* Add HostCertificate to sshd_config in Docker image (#13143) * Add HostCertificate to sshd_config in Docker image (#13143)
* Save TimeStamps for Star, Label, Follow, Watch and Collaboration to Database (#13124) * Save TimeStamps for Star, Label, Follow, Watch and Collaboration to Database (#13124)
* Improve error feedback for duplicate deploy keys (#13112) * Improve error feedback for duplicate deploy keys (#13112)

View File

@@ -638,8 +638,8 @@ fomantic: $(FOMANTIC_DEST)
$(FOMANTIC_DEST): $(FOMANTIC_CONFIGS) | node_modules $(FOMANTIC_DEST): $(FOMANTIC_CONFIGS) | node_modules
rm -rf $(FOMANTIC_DEST_DIR) rm -rf $(FOMANTIC_DEST_DIR)
cp -f web_src/fomantic/theme.config.less node_modules/fomantic-ui/src/theme.config cp web_src/fomantic/theme.config.less node_modules/fomantic-ui/src/theme.config
cp -fr web_src/fomantic/_site/* node_modules/fomantic-ui/src/_site/ cp -r web_src/fomantic/_site/* node_modules/fomantic-ui/src/_site/
npx gulp -f node_modules/fomantic-ui/gulpfile.js build npx gulp -f node_modules/fomantic-ui/gulpfile.js build
@touch $(FOMANTIC_DEST) @touch $(FOMANTIC_DEST)

View File

@@ -283,7 +283,7 @@ func runChangePassword(c *cli.Context) error {
} }
user.HashPassword(c.String("password")) user.HashPassword(c.String("password"))
if err := models.UpdateUserCols(user, "passwd", "passwd_hash_algo", "salt"); err != nil { if err := models.UpdateUserCols(user, "passwd", "salt"); err != nil {
return err return err
} }

View File

@@ -8,8 +8,8 @@
APP_NAME = Gitea: Git with a cup of tea APP_NAME = Gitea: Git with a cup of tea
; Change it if you run locally ; Change it if you run locally
RUN_USER = git RUN_USER = git
; Application run mode, affects performance and debugging. Either "dev", "prod" or "test", default is "prod" ; Either "dev", "prod" or "test", default is "dev"
RUN_MODE = prod RUN_MODE = dev
[project] [project]
; Default templates for project boards ; Default templates for project boards
@@ -850,7 +850,7 @@ MACARON = file
ROUTER_LOG_LEVEL = Info ROUTER_LOG_LEVEL = Info
ROUTER = console ROUTER = console
ENABLE_ACCESS_LOG = false ENABLE_ACCESS_LOG = false
ACCESS_LOG_TEMPLATE = {{.Ctx.RemoteAddr}} - {{.Identity}} {{.Start.Format "[02/Jan/2006:15:04:05 -0700]" }} "{{.Ctx.Req.Method}} {{.Ctx.Req.URL.RequestURI}} {{.Ctx.Req.Proto}}" {{.ResponseWriter.Status}} {{.ResponseWriter.Size}} "{{.Ctx.Req.Referer}}\" \"{{.Ctx.Req.UserAgent}}" ACCESS_LOG_TEMPLATE = {{.Ctx.RemoteAddr}} - {{.Identity}} {{.Start.Format "[02/Jan/2006:15:04:05 -0700]" }} "{{.Ctx.Req.Method}} {{.Ctx.Req.RequestURI}} {{.Ctx.Req.Proto}}" {{.ResponseWriter.Status}} {{.ResponseWriter.Size}} "{{.Ctx.Req.Referer}}\" \"{{.Ctx.Req.UserAgent}}"
ACCESS = file ACCESS = file
; Either "Trace", "Debug", "Info", "Warn", "Error", "Critical", default is "Trace" ; Either "Trace", "Debug", "Info", "Warn", "Error", "Critical", default is "Trace"
LEVEL = Info LEVEL = Info
@@ -1188,14 +1188,6 @@ QUEUE_CONN_STR = "addrs=127.0.0.1:6379 db=0"
MAX_ATTEMPTS = 3 MAX_ATTEMPTS = 3
; Backoff time per http/https request retry (seconds) ; Backoff time per http/https request retry (seconds)
RETRY_BACKOFF = 3 RETRY_BACKOFF = 3
; Allowed domains for migrating, default is blank. Blank means everything will be allowed.
; Multiple domains could be separated by commas.
ALLOWED_DOMAINS =
; Blocklist for migrating, default is blank. Multiple domains could be separated by commas.
; When ALLOWED_DOMAINS is not blank, this option will be ignored.
BLOCKED_DOMAINS =
; Allow private addresses defined by RFC 1918, RFC 1122, RFC 4632 and RFC 4291 (false by default)
ALLOW_LOCALNETWORKS = false
; default storage for attachments, lfs and avatars ; default storage for attachments, lfs and avatars
[storage] [storage]

View File

@@ -25,7 +25,7 @@ if [ ! -f ${GITEA_CUSTOM}/conf/app.ini ]; then
# Substitude the environment variables in the template # Substitude the environment variables in the template
APP_NAME=${APP_NAME:-"Gitea: Git with a cup of tea"} \ APP_NAME=${APP_NAME:-"Gitea: Git with a cup of tea"} \
RUN_MODE=${RUN_MODE:-"prod"} \ RUN_MODE=${RUN_MODE:-"dev"} \
DOMAIN=${DOMAIN:-"localhost"} \ DOMAIN=${DOMAIN:-"localhost"} \
SSH_DOMAIN=${SSH_DOMAIN:-"localhost"} \ SSH_DOMAIN=${SSH_DOMAIN:-"localhost"} \
HTTP_PORT=${HTTP_PORT:-"3000"} \ HTTP_PORT=${HTTP_PORT:-"3000"} \

View File

@@ -36,7 +36,9 @@ Values containing `#` or `;` must be quoted using `` ` `` or `"""`.
- `APP_NAME`: **Gitea: Git with a cup of tea**: Application name, used in the page title. - `APP_NAME`: **Gitea: Git with a cup of tea**: Application name, used in the page title.
- `RUN_USER`: **git**: The user Gitea will run as. This should be a dedicated system - `RUN_USER`: **git**: The user Gitea will run as. This should be a dedicated system
(non-user) account. Setting this incorrectly will cause Gitea to not start. (non-user) account. Setting this incorrectly will cause Gitea to not start.
- `RUN_MODE`: **prod**: Application run mode, affects performance and debugging. Either "dev", "prod" or "test". - `RUN_MODE`: **dev**: For performance and other purposes, change this to `prod` when
deployed to a production environment. The installation process will set this to `prod`
automatically. \[prod, dev, test\]
## Repository (`repository`) ## Repository (`repository`)
@@ -811,9 +813,6 @@ Task queue configuration has been moved to `queue.task`. However, the below conf
- `MAX_ATTEMPTS`: **3**: Max attempts per http/https request on migrations. - `MAX_ATTEMPTS`: **3**: Max attempts per http/https request on migrations.
- `RETRY_BACKOFF`: **3**: Backoff time per http/https request retry (seconds) - `RETRY_BACKOFF`: **3**: Backoff time per http/https request retry (seconds)
- `ALLOWED_DOMAINS`: **\<empty\>**: Domains allowlist for migrating repositories, default is blank. It means everything will be allowed. Multiple domains could be separated by commas.
- `BLOCKED_DOMAINS`: **\<empty\>**: Domains blocklist for migrating repositories, default is blank. Multiple domains could be separated by commas. When `ALLOWED_DOMAINS` is not blank, this option will be ignored.
- `ALLOW_LOCALNETWORKS`: **false**: Allow private addresses defined by RFC 1918, RFC 1122, RFC 4632 and RFC 4291
## Mirror (`mirror`) ## Mirror (`mirror`)

View File

@@ -313,9 +313,6 @@ IS_INPUT_FILE = false
- `MAX_ATTEMPTS`: **3**: 在迁移过程中的 http/https 请求重试次数。 - `MAX_ATTEMPTS`: **3**: 在迁移过程中的 http/https 请求重试次数。
- `RETRY_BACKOFF`: **3**: 等待下一次重试的时间,单位秒。 - `RETRY_BACKOFF`: **3**: 等待下一次重试的时间,单位秒。
- `ALLOWED_DOMAINS`: **\<empty\>**: 迁移仓库的域名白名单,默认为空,表示允许从任意域名迁移仓库,多个域名用逗号分隔。
- `BLOCKED_DOMAINS`: **\<empty\>**: 迁移仓库的域名黑名单,默认为空,多个域名用逗号分隔。如果 `ALLOWED_DOMAINS` 不为空,此选项将会被忽略。
- `ALLOW_LOCALNETWORKS`: **false**: Allow private addresses defined by RFC 1918
## LFS (`lfs`) ## LFS (`lfs`)

View File

@@ -30,8 +30,6 @@ All event pushes are POST requests. The methods currently supported are:
### Event information ### Event information
**WARNING**: The `secret` field in the payload is deprecated as of Gitea 1.13.0 and will be removed in 1.14.0: https://github.com/go-gitea/gitea/issues/11755
The following is an example of event information that will be sent by Gitea to The following is an example of event information that will be sent by Gitea to
a Payload URL: a Payload URL:

View File

@@ -257,7 +257,7 @@ You can configure some of Gitea's settings via environment variables:
(Default values are provided in **bold**) (Default values are provided in **bold**)
* `APP_NAME`: **"Gitea: Git with a cup of tea"**: Application name, used in the page title. * `APP_NAME`: **"Gitea: Git with a cup of tea"**: Application name, used in the page title.
* `RUN_MODE`: **prod**: Application run mode, affects performance and debugging. Either "dev", "prod" or "test". * `RUN_MODE`: **dev**: For performance and other purposes, change this to `prod` when deployed to a production environment.
* `DOMAIN`: **localhost**: Domain name of this server, used for the displayed http clone URL in Gitea's UI. * `DOMAIN`: **localhost**: Domain name of this server, used for the displayed http clone URL in Gitea's UI.
* `SSH_DOMAIN`: **localhost**: Domain name of this server, used for the displayed ssh clone URL in Gitea's UI. If the install page is enabled, SSH Domain Server takes DOMAIN value in the form (which overwrite this setting on save). * `SSH_DOMAIN`: **localhost**: Domain name of this server, used for the displayed ssh clone URL in Gitea's UI. If the install page is enabled, SSH Domain Server takes DOMAIN value in the form (which overwrite this setting on save).
* `SSH_PORT`: **22**: SSH port displayed in clone URL. * `SSH_PORT`: **22**: SSH port displayed in clone URL.

go.mod (10 lines changed)
View File

@@ -4,7 +4,7 @@ go 1.14
require ( require (
code.gitea.io/gitea-vet v0.2.1 code.gitea.io/gitea-vet v0.2.1
code.gitea.io/sdk/gitea v0.13.2 code.gitea.io/sdk/gitea v0.13.1
gitea.com/lunny/levelqueue v0.3.0 gitea.com/lunny/levelqueue v0.3.0
gitea.com/macaron/binding v0.0.0-20190822013154-a5f53841ed2b gitea.com/macaron/binding v0.0.0-20190822013154-a5f53841ed2b
gitea.com/macaron/cache v0.0.0-20190822004001-a6e7fee4ee76 gitea.com/macaron/cache v0.0.0-20190822004001-a6e7fee4ee76
@@ -104,7 +104,7 @@ require (
github.com/yuin/goldmark-meta v0.0.0-20191126180153-f0638e958b60 github.com/yuin/goldmark-meta v0.0.0-20191126180153-f0638e958b60
go.jolheiser.com/hcaptcha v0.0.4 go.jolheiser.com/hcaptcha v0.0.4
go.jolheiser.com/pwn v0.0.3 go.jolheiser.com/pwn v0.0.3
golang.org/x/crypto v0.0.0-20201217014255-9d1352758620 golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a
golang.org/x/net v0.0.0-20200904194848-62affa334b73 golang.org/x/net v0.0.0-20200904194848-62affa334b73
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d
golang.org/x/sys v0.0.0-20200918174421-af09f7315aff golang.org/x/sys v0.0.0-20200918174421-af09f7315aff
@@ -117,12 +117,10 @@ require (
gopkg.in/ini.v1 v1.61.0 gopkg.in/ini.v1 v1.61.0
gopkg.in/ldap.v3 v3.0.2 gopkg.in/ldap.v3 v3.0.2
gopkg.in/yaml.v2 v2.3.0 gopkg.in/yaml.v2 v2.3.0
mvdan.cc/xurls/v2 v2.2.0 mvdan.cc/xurls/v2 v2.1.0
strk.kbt.io/projects/go/libravatar v0.0.0-20191008002943-06d1c002b251 strk.kbt.io/projects/go/libravatar v0.0.0-20191008002943-06d1c002b251
xorm.io/builder v0.3.7 xorm.io/builder v0.3.7
xorm.io/xorm v1.0.5 xorm.io/xorm v1.0.5
) )
replace github.com/hashicorp/go-version => github.com/6543/go-version v1.2.4 replace github.com/hashicorp/go-version => github.com/6543/go-version v1.2.3
replace github.com/microcosm-cc/bluemonday => github.com/lunny/bluemonday v1.0.5-0.20201227154428-ca34796141e8

go.sum (22 lines changed)
View File

@@ -15,8 +15,8 @@ cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2k
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw= cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
code.gitea.io/gitea-vet v0.2.1 h1:b30by7+3SkmiftK0RjuXqFvZg2q4p68uoPGuxhzBN0s= code.gitea.io/gitea-vet v0.2.1 h1:b30by7+3SkmiftK0RjuXqFvZg2q4p68uoPGuxhzBN0s=
code.gitea.io/gitea-vet v0.2.1/go.mod h1:zcNbT/aJEmivCAhfmkHOlT645KNOf9W2KnkLgFjGGfE= code.gitea.io/gitea-vet v0.2.1/go.mod h1:zcNbT/aJEmivCAhfmkHOlT645KNOf9W2KnkLgFjGGfE=
code.gitea.io/sdk/gitea v0.13.2 h1:wAnT/J7Z62q3fJXbgnecoaOBh8CM1Qq0/DakWxiv4yA= code.gitea.io/sdk/gitea v0.13.1 h1:Y7bpH2iO6Q0KhhMJfjP/LZ0AmiYITeRQlCD8b0oYqhk=
code.gitea.io/sdk/gitea v0.13.2/go.mod h1:lee2y8LeV3kQb2iK+hHlMqoadL4bp27QOkOV/hawLKg= code.gitea.io/sdk/gitea v0.13.1/go.mod h1:z3uwDV/b9Ls47NGukYM9XhnHtqPh/J+t40lsUrR6JDY=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU= dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
gitea.com/lunny/levelqueue v0.3.0 h1:MHn1GuSZkxvVEDMyAPqlc7A3cOW+q8RcGhRgH/xtm6I= gitea.com/lunny/levelqueue v0.3.0 h1:MHn1GuSZkxvVEDMyAPqlc7A3cOW+q8RcGhRgH/xtm6I=
gitea.com/lunny/levelqueue v0.3.0/go.mod h1:HBqmLbz56JWpfEGG0prskAV97ATNRoj5LDmPicD22hU= gitea.com/lunny/levelqueue v0.3.0/go.mod h1:HBqmLbz56JWpfEGG0prskAV97ATNRoj5LDmPicD22hU=
@@ -48,8 +48,8 @@ gitea.com/macaron/toolbox v0.0.0-20190822013122-05ff0fc766b7 h1:N9QFoeNsUXLhl14m
gitea.com/macaron/toolbox v0.0.0-20190822013122-05ff0fc766b7/go.mod h1:kgsbFPPS4P+acDYDOPDa3N4IWWOuDJt5/INKRUz7aks= gitea.com/macaron/toolbox v0.0.0-20190822013122-05ff0fc766b7/go.mod h1:kgsbFPPS4P+acDYDOPDa3N4IWWOuDJt5/INKRUz7aks=
gitea.com/xorm/sqlfiddle v0.0.0-20180821085327-62ce714f951a h1:lSA0F4e9A2NcQSqGqTOXqu2aRi/XEQxDCBwM8yJtE6s= gitea.com/xorm/sqlfiddle v0.0.0-20180821085327-62ce714f951a h1:lSA0F4e9A2NcQSqGqTOXqu2aRi/XEQxDCBwM8yJtE6s=
gitea.com/xorm/sqlfiddle v0.0.0-20180821085327-62ce714f951a/go.mod h1:EXuID2Zs0pAQhH8yz+DNjUbjppKQzKFAn28TMYPB6IU= gitea.com/xorm/sqlfiddle v0.0.0-20180821085327-62ce714f951a/go.mod h1:EXuID2Zs0pAQhH8yz+DNjUbjppKQzKFAn28TMYPB6IU=
github.com/6543/go-version v1.2.4 h1:MPsSnqNrM0HwA9tnmWNnsMdQMg4/u4fflARjwomoof4= github.com/6543/go-version v1.2.3 h1:uF30BawMhoQLzqBeCwhFcWM6HVxlzMHe/zXbzJeKP+o=
github.com/6543/go-version v1.2.4/go.mod h1:oqFAHCwtLVUTLdhQmVZWYvaHXTdsbB4SY85at64SQEo= github.com/6543/go-version v1.2.3/go.mod h1:fcfWh4zkneEgGXe8JJptiGwp8l6JgJJgS7oTw6P83So=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ= github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo= github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
@@ -598,8 +598,6 @@ github.com/lib/pq v1.3.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.7.0/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o= github.com/lib/pq v1.7.0/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lib/pq v1.8.1-0.20200908161135-083382b7e6fc h1:ERSU1OvZ6MdWhHieo2oT7xwR/HCksqKdgK6iYPU5pHI= github.com/lib/pq v1.8.1-0.20200908161135-083382b7e6fc h1:ERSU1OvZ6MdWhHieo2oT7xwR/HCksqKdgK6iYPU5pHI=
github.com/lib/pq v1.8.1-0.20200908161135-083382b7e6fc/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o= github.com/lib/pq v1.8.1-0.20200908161135-083382b7e6fc/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lunny/bluemonday v1.0.5-0.20201227154428-ca34796141e8 h1:1omo92DLtxQu6VwVPSZAmduHaK5zssed6cvkHyl1XOg=
github.com/lunny/bluemonday v1.0.5-0.20201227154428-ca34796141e8/go.mod h1:8iwZnFn2CDDNZ0r6UXhF4xawGvzaqzCRa1n3/lO3W2w=
github.com/lunny/dingtalk_webhook v0.0.0-20171025031554-e3534c89ef96 h1:uNwtsDp7ci48vBTTxDuwcoTXz4lwtDTe7TjCQ0noaWY= github.com/lunny/dingtalk_webhook v0.0.0-20171025031554-e3534c89ef96 h1:uNwtsDp7ci48vBTTxDuwcoTXz4lwtDTe7TjCQ0noaWY=
github.com/lunny/dingtalk_webhook v0.0.0-20171025031554-e3534c89ef96/go.mod h1:mmIfjCSQlGYXmJ95jFN84AkQFnVABtKuJL8IrzwvUKQ= github.com/lunny/dingtalk_webhook v0.0.0-20171025031554-e3534c89ef96/go.mod h1:mmIfjCSQlGYXmJ95jFN84AkQFnVABtKuJL8IrzwvUKQ=
github.com/lunny/log v0.0.0-20160921050905-7887c61bf0de h1:nyxwRdWHAVxpFcDThedEgQ07DbcRc5xgNObtbTp76fk= github.com/lunny/log v0.0.0-20160921050905-7887c61bf0de h1:nyxwRdWHAVxpFcDThedEgQ07DbcRc5xgNObtbTp76fk=
@@ -651,6 +649,8 @@ github.com/mgechev/revive v1.0.3-0.20200921231451-246eac737dc7 h1:ydVkpU/M4/c45y
github.com/mgechev/revive v1.0.3-0.20200921231451-246eac737dc7/go.mod h1:no/hfevHbndpXR5CaJahkYCfM/FFpmM/dSOwFGU7Z1o= github.com/mgechev/revive v1.0.3-0.20200921231451-246eac737dc7/go.mod h1:no/hfevHbndpXR5CaJahkYCfM/FFpmM/dSOwFGU7Z1o=
github.com/mholt/archiver/v3 v3.3.0 h1:vWjhY8SQp5yzM9P6OJ/eZEkmi3UAbRrxCq48MxjAzig= github.com/mholt/archiver/v3 v3.3.0 h1:vWjhY8SQp5yzM9P6OJ/eZEkmi3UAbRrxCq48MxjAzig=
github.com/mholt/archiver/v3 v3.3.0/go.mod h1:YnQtqsp+94Rwd0D/rk5cnLrxusUBUXg+08Ebtr1Mqao= github.com/mholt/archiver/v3 v3.3.0/go.mod h1:YnQtqsp+94Rwd0D/rk5cnLrxusUBUXg+08Ebtr1Mqao=
github.com/microcosm-cc/bluemonday v1.0.3-0.20191119130333-0a75d7616912 h1:hJde9rA24hlTcAYSwJoXpDUyGtfKQ/jsofw+WaDqGrI=
github.com/microcosm-cc/bluemonday v1.0.3-0.20191119130333-0a75d7616912/go.mod h1:8iwZnFn2CDDNZ0r6UXhF4xawGvzaqzCRa1n3/lO3W2w=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg= github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/minio/md5-simd v1.1.0 h1:QPfiOqlZH+Cj9teu0t9b1nTBfPbyTl16Of5MeuShdK4= github.com/minio/md5-simd v1.1.0 h1:QPfiOqlZH+Cj9teu0t9b1nTBfPbyTl16Of5MeuShdK4=
github.com/minio/md5-simd v1.1.0/go.mod h1:XpBqgZULrMYD3R+M28PcmP0CkI7PEMzB3U77ZrKZ0Gw= github.com/minio/md5-simd v1.1.0/go.mod h1:XpBqgZULrMYD3R+M28PcmP0CkI7PEMzB3U77ZrKZ0Gw=
@@ -768,7 +768,6 @@ github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6So
github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.2.2/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= github.com/rogpeppe/go-internal v1.2.2/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.5.2/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/rs/xid v1.2.1 h1:mhH9Nq+C1fY2l1XIpgxIiUOfNpRBYH1kKcr+qfKgjRc= github.com/rs/xid v1.2.1 h1:mhH9Nq+C1fY2l1XIpgxIiUOfNpRBYH1kKcr+qfKgjRc=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ= github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/rs/zerolog v1.13.0/go.mod h1:YbFCdg8HfsridGWAh22vktObvhZbQsZXe4/zB0OKkWU= github.com/rs/zerolog v1.13.0/go.mod h1:YbFCdg8HfsridGWAh22vktObvhZbQsZXe4/zB0OKkWU=
@@ -937,9 +936,8 @@ golang.org/x/crypto v0.0.0-20200302210943-78000ba7a073/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20200323165209-0ec3e9974c59/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200323165209-0ec3e9974c59/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200709230013-948cd5f35899/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200709230013-948cd5f35899/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a h1:vclmkQCjlDX5OydZ9wv8rBCcS0QyQY66Mpf/7BZbInM=
golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201217014255-9d1352758620 h1:3wPMTskHO3+O6jqTEXyFcsnuxMQOqYSaHsDxcbUXpqA=
golang.org/x/crypto v0.0.0-20201217014255-9d1352758620/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8= golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -1054,8 +1052,6 @@ golang.org/x/sys v0.0.0-20200413165638-669c56c373c4/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200918174421-af09f7315aff h1:1CPUrky56AcgSpxz/KfgzQWzfG09u5YOL8MvPYBlrL8= golang.org/x/sys v0.0.0-20200918174421-af09f7315aff h1:1CPUrky56AcgSpxz/KfgzQWzfG09u5YOL8MvPYBlrL8=
golang.org/x/sys v0.0.0-20200918174421-af09f7315aff/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200918174421-af09f7315aff/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221 h1:/ZHdbVpdR/jk3g30/d4yUL0JU9kksj8+F/bnQUVLGDM=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
@@ -1200,8 +1196,8 @@ honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWh
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg= honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
mvdan.cc/xurls/v2 v2.2.0 h1:NSZPykBXJFCetGZykLAxaL6SIpvbVy/UFEniIfHAa8A= mvdan.cc/xurls/v2 v2.1.0 h1:KaMb5GLhlcSX+e+qhbRJODnUUBvlw01jt4yrjFIHAuA=
mvdan.cc/xurls/v2 v2.2.0/go.mod h1:EV1RMtya9D6G5DMYPGD8zTQzaHet6Jh8gFlRgGRJeO8= mvdan.cc/xurls/v2 v2.1.0/go.mod h1:5GrSd9rOnKOpZaji1OZLYL/yeAAtGDlo/cFe+8K5n8E=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8= rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
strk.kbt.io/projects/go/libravatar v0.0.0-20191008002943-06d1c002b251 h1:mUcz5b3FJbP5Cvdq7Khzn6J9OCUQJaBwgBkCR+MOwSs= strk.kbt.io/projects/go/libravatar v0.0.0-20191008002943-06d1c002b251 h1:mUcz5b3FJbP5Cvdq7Khzn6J9OCUQJaBwgBkCR+MOwSs=
strk.kbt.io/projects/go/libravatar v0.0.0-20191008002943-06d1c002b251/go.mod h1:FJGmPh3vz9jSos1L/F91iAgnC/aejc0wIIrF2ZwJxdY= strk.kbt.io/projects/go/libravatar v0.0.0-20191008002943-06d1c002b251/go.mod h1:FJGmPh3vz9jSos1L/F91iAgnC/aejc0wIIrF2ZwJxdY=

View File

@@ -144,22 +144,3 @@ func TestAPIListUsersNonAdmin(t *testing.T) {
req := NewRequestf(t, "GET", "/api/v1/admin/users?token=%s", token) req := NewRequestf(t, "GET", "/api/v1/admin/users?token=%s", token)
session.MakeRequest(t, req, http.StatusForbidden) session.MakeRequest(t, req, http.StatusForbidden)
} }
func TestAPICreateUserInvalidEmail(t *testing.T) {
defer prepareTestEnv(t)()
adminUsername := "user1"
session := loginUser(t, adminUsername)
token := getTokenForLoggedInUser(t, session)
urlStr := fmt.Sprintf("/api/v1/admin/users?token=%s", token)
req := NewRequestWithValues(t, "POST", urlStr, map[string]string{
"email": "invalid_email@domain.com\r\n",
"full_name": "invalid user",
"login_name": "invalidUser",
"must_change_password": "true",
"password": "password",
"send_notify": "true",
"source_id": "0",
"username": "invalidUser",
})
session.MakeRequest(t, req, http.StatusUnprocessableEntity)
}

View File

@@ -5,17 +5,14 @@
package integrations package integrations
import ( import (
"context"
"encoding/json" "encoding/json"
"fmt" "fmt"
"io/ioutil" "io/ioutil"
"net/http" "net/http"
"testing" "testing"
"time"
"code.gitea.io/gitea/models" "code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/auth" "code.gitea.io/gitea/modules/auth"
"code.gitea.io/gitea/modules/queue"
api "code.gitea.io/gitea/modules/structs" api "code.gitea.io/gitea/modules/structs"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
@@ -228,29 +225,11 @@ func doAPIMergePullRequest(ctx APITestContext, owner, repo string, index int64)
Do: string(models.MergeStyleMerge), Do: string(models.MergeStyleMerge),
}) })
resp := ctx.Session.MakeRequest(t, req, NoExpectedStatus) if ctx.ExpectedCode != 0 {
ctx.Session.MakeRequest(t, req, ctx.ExpectedCode)
if resp.Code == http.StatusMethodNotAllowed { return
err := api.APIError{}
DecodeJSON(t, resp, &err)
assert.EqualValues(t, "Please try again later", err.Message)
queue.GetManager().FlushAll(context.Background(), 5*time.Second)
req = NewRequestWithJSON(t, http.MethodPost, urlStr, &auth.MergePullRequestForm{
MergeMessageField: "doAPIMergePullRequest Merge",
Do: string(models.MergeStyleMerge),
})
resp = ctx.Session.MakeRequest(t, req, NoExpectedStatus)
}
expected := ctx.ExpectedCode
if expected == 0 {
expected = 200
}
if !assert.EqualValues(t, expected, resp.Code,
"Request: %s %s", req.Method, req.URL.String()) {
logUnexpectedResponse(t, resp)
} }
ctx.Session.MakeRequest(t, req, 200)
} }
} }

View File

@@ -309,8 +309,6 @@ func TestAPIRepoMigrate(t *testing.T) {
{ctxUserID: 2, userID: 1, cloneURL: "https://github.com/go-gitea/test_repo.git", repoName: "git-bad", expectedStatus: http.StatusForbidden}, {ctxUserID: 2, userID: 1, cloneURL: "https://github.com/go-gitea/test_repo.git", repoName: "git-bad", expectedStatus: http.StatusForbidden},
{ctxUserID: 2, userID: 3, cloneURL: "https://github.com/go-gitea/test_repo.git", repoName: "git-org", expectedStatus: http.StatusCreated}, {ctxUserID: 2, userID: 3, cloneURL: "https://github.com/go-gitea/test_repo.git", repoName: "git-org", expectedStatus: http.StatusCreated},
{ctxUserID: 2, userID: 6, cloneURL: "https://github.com/go-gitea/test_repo.git", repoName: "git-bad-org", expectedStatus: http.StatusForbidden}, {ctxUserID: 2, userID: 6, cloneURL: "https://github.com/go-gitea/test_repo.git", repoName: "git-bad-org", expectedStatus: http.StatusForbidden},
{ctxUserID: 2, userID: 3, cloneURL: "https://localhost:3000/user/test_repo.git", repoName: "local-ip", expectedStatus: http.StatusUnprocessableEntity},
{ctxUserID: 2, userID: 3, cloneURL: "https://10.0.0.1/user/test_repo.git", repoName: "private-ip", expectedStatus: http.StatusUnprocessableEntity},
} }
defer prepareTestEnv(t)() defer prepareTestEnv(t)()
@@ -327,16 +325,8 @@ func TestAPIRepoMigrate(t *testing.T) {
if resp.Code == http.StatusUnprocessableEntity { if resp.Code == http.StatusUnprocessableEntity {
respJSON := map[string]string{} respJSON := map[string]string{}
DecodeJSON(t, resp, &respJSON) DecodeJSON(t, resp, &respJSON)
switch respJSON["message"] { if assert.Equal(t, respJSON["message"], "Remote visit addressed rate limitation.") {
case "Remote visit addressed rate limitation.":
t.Log("test hit github rate limitation") t.Log("test hit github rate limitation")
case "migrate from '10.0.0.1' is not allowed: the host resolve to a private ip address '10.0.0.1'":
assert.EqualValues(t, "private-ip", testCase.repoName)
case "migrate from 'localhost:3000' is not allowed: the host resolve to a private ip address '::1'",
"migrate from 'localhost:3000' is not allowed: the host resolve to a private ip address '127.0.0.1'":
assert.EqualValues(t, "local-ip", testCase.repoName)
default:
t.Errorf("unexpected error '%v' on url '%s'", respJSON["message"], testCase.cloneURL)
} }
} else { } else {
assert.EqualValues(t, testCase.expectedStatus, resp.Code) assert.EqualValues(t, testCase.expectedStatus, resp.Code)

View File

@@ -26,7 +26,7 @@ func TestUserHeatmap(t *testing.T) {
var heatmap []*models.UserHeatmapData var heatmap []*models.UserHeatmapData
DecodeJSON(t, resp, &heatmap) DecodeJSON(t, resp, &heatmap)
var dummyheatmap []*models.UserHeatmapData var dummyheatmap []*models.UserHeatmapData
dummyheatmap = append(dummyheatmap, &models.UserHeatmapData{Timestamp: 1603152000, Contributions: 1}) dummyheatmap = append(dummyheatmap, &models.UserHeatmapData{Timestamp: 1571616000, Contributions: 1})
assert.Equal(t, dummyheatmap, heatmap) assert.Equal(t, dummyheatmap, heatmap)
} }

View File

@@ -141,7 +141,7 @@ func TestLDAPUserSignin(t *testing.T) {
assert.Equal(t, u.UserName, htmlDoc.GetInputValueByName("name")) assert.Equal(t, u.UserName, htmlDoc.GetInputValueByName("name"))
assert.Equal(t, u.FullName, htmlDoc.GetInputValueByName("full_name")) assert.Equal(t, u.FullName, htmlDoc.GetInputValueByName("full_name"))
assert.Equal(t, u.Email, htmlDoc.Find(`label[for="email"]`).Siblings().First().Text()) assert.Equal(t, u.Email, htmlDoc.GetInputValueByName("email"))
} }
func TestLDAPUserSync(t *testing.T) { func TestLDAPUserSync(t *testing.T) {

View File

@@ -111,7 +111,7 @@ func onGiteaRun(t *testing.T, callback func(*testing.T, *url.URL), prepare ...bo
func doGitClone(dstLocalPath string, u *url.URL) func(*testing.T) { func doGitClone(dstLocalPath string, u *url.URL) func(*testing.T) {
return func(t *testing.T) { return func(t *testing.T) {
assert.NoError(t, git.CloneWithArgs(context.Background(), u.String(), dstLocalPath, allowLFSFilters(), git.CloneRepoOptions{})) assert.NoError(t, git.CloneWithArgs(u.String(), dstLocalPath, allowLFSFilters(), git.CloneRepoOptions{}))
assert.True(t, com.IsExist(filepath.Join(dstLocalPath, "README.md"))) assert.True(t, com.IsExist(filepath.Join(dstLocalPath, "README.md")))
} }
} }

View File

@@ -37,13 +37,6 @@ func (doc *HTMLDoc) GetInputValueByName(name string) string {
return text return text
} }
// Find gets the descendants of each element in the current set of
// matched elements, filtered by a selector. It returns a new Selection
// object containing these matched elements.
func (doc *HTMLDoc) Find(selector string) *goquery.Selection {
return doc.doc.Find(selector)
}
// GetCSRF for get CSRC token value from input // GetCSRF for get CSRC token value from input
func (doc *HTMLDoc) GetCSRF() string { func (doc *HTMLDoc) GetCSRF() string {
return doc.GetInputValueByName("_csrf") return doc.GetInputValueByName("_csrf")

View File

@@ -11,6 +11,7 @@ import (
"encoding/json" "encoding/json"
"fmt" "fmt"
"io" "io"
"log"
"net/http" "net/http"
"net/http/cookiejar" "net/http/cookiejar"
"net/http/httptest" "net/http/httptest"
@@ -26,10 +27,8 @@ import (
"code.gitea.io/gitea/models" "code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/base" "code.gitea.io/gitea/modules/base"
"code.gitea.io/gitea/modules/graceful" "code.gitea.io/gitea/modules/graceful"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/queue" "code.gitea.io/gitea/modules/queue"
"code.gitea.io/gitea/modules/setting" "code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/storage"
"code.gitea.io/gitea/modules/util" "code.gitea.io/gitea/modules/util"
"code.gitea.io/gitea/routers" "code.gitea.io/gitea/routers"
"code.gitea.io/gitea/routers/routes" "code.gitea.io/gitea/routers/routes"
@@ -60,8 +59,6 @@ func NewNilResponseRecorder() *NilResponseRecorder {
} }
func TestMain(m *testing.M) { func TestMain(m *testing.M) {
defer log.Close()
managerCtx, cancel := context.WithCancel(context.Background()) managerCtx, cancel := context.WithCancel(context.Background())
graceful.InitManager(managerCtx) graceful.InitManager(managerCtx)
defer cancel() defer cancel()
@@ -145,10 +142,6 @@ func initIntegrationTest() {
util.RemoveAll(models.LocalCopyPath()) util.RemoveAll(models.LocalCopyPath())
setting.CheckLFSVersion() setting.CheckLFSVersion()
setting.InitDBConfig() setting.InitDBConfig()
if err := storage.Init(); err != nil {
fmt.Printf("Init storage failed: %v", err)
os.Exit(1)
}
switch { switch {
case setting.Database.UseMySQL: case setting.Database.UseMySQL:
@@ -156,27 +149,27 @@ func initIntegrationTest() {
setting.Database.User, setting.Database.Passwd, setting.Database.Host)) setting.Database.User, setting.Database.Passwd, setting.Database.Host))
defer db.Close() defer db.Close()
if err != nil { if err != nil {
log.Fatal("sql.Open: %v", err) log.Fatalf("sql.Open: %v", err)
} }
if _, err = db.Exec(fmt.Sprintf("CREATE DATABASE IF NOT EXISTS %s", setting.Database.Name)); err != nil { if _, err = db.Exec(fmt.Sprintf("CREATE DATABASE IF NOT EXISTS %s", setting.Database.Name)); err != nil {
log.Fatal("db.Exec: %v", err) log.Fatalf("db.Exec: %v", err)
} }
case setting.Database.UsePostgreSQL: case setting.Database.UsePostgreSQL:
db, err := sql.Open("postgres", fmt.Sprintf("postgres://%s:%s@%s/?sslmode=%s", db, err := sql.Open("postgres", fmt.Sprintf("postgres://%s:%s@%s/?sslmode=%s",
setting.Database.User, setting.Database.Passwd, setting.Database.Host, setting.Database.SSLMode)) setting.Database.User, setting.Database.Passwd, setting.Database.Host, setting.Database.SSLMode))
defer db.Close() defer db.Close()
if err != nil { if err != nil {
log.Fatal("sql.Open: %v", err) log.Fatalf("sql.Open: %v", err)
} }
dbrows, err := db.Query(fmt.Sprintf("SELECT 1 FROM pg_database WHERE datname = '%s'", setting.Database.Name)) dbrows, err := db.Query(fmt.Sprintf("SELECT 1 FROM pg_database WHERE datname = '%s'", setting.Database.Name))
if err != nil { if err != nil {
log.Fatal("db.Query: %v", err) log.Fatalf("db.Query: %v", err)
} }
defer dbrows.Close() defer dbrows.Close()
if !dbrows.Next() { if !dbrows.Next() {
if _, err = db.Exec(fmt.Sprintf("CREATE DATABASE %s", setting.Database.Name)); err != nil { if _, err = db.Exec(fmt.Sprintf("CREATE DATABASE %s", setting.Database.Name)); err != nil {
log.Fatal("db.Exec: CREATE DATABASE: %v", err) log.Fatalf("db.Exec: CREATE DATABASE: %v", err)
} }
} }
// Check if we need to setup a specific schema // Check if we need to setup a specific schema
@@ -190,18 +183,18 @@ func initIntegrationTest() {
// This is a different db object; requires a different Close() // This is a different db object; requires a different Close()
defer db.Close() defer db.Close()
if err != nil { if err != nil {
log.Fatal("sql.Open: %v", err) log.Fatalf("sql.Open: %v", err)
} }
schrows, err := db.Query(fmt.Sprintf("SELECT 1 FROM information_schema.schemata WHERE schema_name = '%s'", setting.Database.Schema)) schrows, err := db.Query(fmt.Sprintf("SELECT 1 FROM information_schema.schemata WHERE schema_name = '%s'", setting.Database.Schema))
if err != nil { if err != nil {
log.Fatal("db.Query: %v", err) log.Fatalf("db.Query: %v", err)
} }
defer schrows.Close() defer schrows.Close()
if !schrows.Next() { if !schrows.Next() {
// Create and setup a DB schema // Create and setup a DB schema
if _, err = db.Exec(fmt.Sprintf("CREATE SCHEMA %s", setting.Database.Schema)); err != nil { if _, err = db.Exec(fmt.Sprintf("CREATE SCHEMA %s", setting.Database.Schema)); err != nil {
log.Fatal("db.Exec: CREATE SCHEMA: %v", err) log.Fatalf("db.Exec: CREATE SCHEMA: %v", err)
} }
} }
@@ -210,10 +203,10 @@ func initIntegrationTest() {
db, err := sql.Open("mssql", fmt.Sprintf("server=%s; port=%s; database=%s; user id=%s; password=%s;", db, err := sql.Open("mssql", fmt.Sprintf("server=%s; port=%s; database=%s; user id=%s; password=%s;",
host, port, "master", setting.Database.User, setting.Database.Passwd)) host, port, "master", setting.Database.User, setting.Database.Passwd))
if err != nil { if err != nil {
log.Fatal("sql.Open: %v", err) log.Fatalf("sql.Open: %v", err)
} }
if _, err := db.Exec(fmt.Sprintf("If(db_id(N'%s') IS NULL) BEGIN CREATE DATABASE %s; END;", setting.Database.Name, setting.Database.Name)); err != nil { if _, err := db.Exec(fmt.Sprintf("If(db_id(N'%s') IS NULL) BEGIN CREATE DATABASE %s; END;", setting.Database.Name, setting.Database.Name)); err != nil {
log.Fatal("db.Exec: %v", err) log.Fatalf("db.Exec: %v", err)
} }
defer db.Close() defer db.Close()
} }

View File

@@ -78,7 +78,6 @@ func storeAndGetLfs(t *testing.T, content *[]byte, extraHeader *http.Header, exp
} }
} }
} }
resp := session.MakeRequest(t, req, expectedStatus) resp := session.MakeRequest(t, req, expectedStatus)
return resp return resp
@@ -211,7 +210,7 @@ func TestGetLFSRange(t *testing.T) {
{"bytes=0-10", "123456789\n", http.StatusPartialContent}, {"bytes=0-10", "123456789\n", http.StatusPartialContent},
// end-range bigger than length-1 is ignored // end-range bigger than length-1 is ignored
{"bytes=0-11", "123456789\n", http.StatusPartialContent}, {"bytes=0-11", "123456789\n", http.StatusPartialContent},
{"bytes=11-", "Requested Range Not Satisfiable", http.StatusRequestedRangeNotSatisfiable}, {"bytes=11-", "", http.StatusPartialContent},
// incorrect header value cause whole header to be ignored // incorrect header value cause whole header to be ignored
{"bytes=-", "123456789\n", http.StatusOK}, {"bytes=-", "123456789\n", http.StatusOK},
{"foobar", "123456789\n", http.StatusOK}, {"foobar", "123456789\n", http.StatusOK},

View File

@@ -45,21 +45,19 @@ START_SSH_SERVER = true
OFFLINE_MODE = false OFFLINE_MODE = false
LFS_START_SERVER = true LFS_START_SERVER = true
LFS_CONTENT_PATH = integrations/gitea-integration-mysql/datalfs-mysql
LFS_JWT_SECRET = Tv_MjmZuHqpIY6GFl12ebgkRAMt4RlWt0v4EHKSXO0w LFS_JWT_SECRET = Tv_MjmZuHqpIY6GFl12ebgkRAMt4RlWt0v4EHKSXO0w
LFS_STORE_TYPE = minio
[lfs] LFS_SERVE_DIRECT = false
MINIO_BASE_PATH = lfs/ LFS_MINIO_ENDPOINT = minio:9000
LFS_MINIO_ACCESS_KEY_ID = 123456
LFS_MINIO_SECRET_ACCESS_KEY = 12345678
LFS_MINIO_BUCKET = gitea
LFS_MINIO_LOCATION = us-east-1
LFS_MINIO_BASE_PATH = lfs/
LFS_MINIO_USE_SSL = false
[attachment] [attachment]
MINIO_BASE_PATH = attachments/
[avatars]
MINIO_BASE_PATH = avatars/
[repo-avatars]
MINIO_BASE_PATH = repo-avatars/
[storage]
STORAGE_TYPE = minio STORAGE_TYPE = minio
SERVE_DIRECT = false SERVE_DIRECT = false
MINIO_ENDPOINT = minio:9000 MINIO_ENDPOINT = minio:9000
@@ -67,6 +65,7 @@ MINIO_ACCESS_KEY_ID = 123456
MINIO_SECRET_ACCESS_KEY = 12345678 MINIO_SECRET_ACCESS_KEY = 12345678
MINIO_BUCKET = gitea MINIO_BUCKET = gitea
MINIO_LOCATION = us-east-1 MINIO_LOCATION = us-east-1
MINIO_BASE_PATH = attachments/
MINIO_USE_SSL = false MINIO_USE_SSL = false
[mailer] [mailer]
@@ -89,6 +88,9 @@ ENABLE_NOTIFY_MAIL = true
DISABLE_GRAVATAR = false DISABLE_GRAVATAR = false
ENABLE_FEDERATED_AVATAR = false ENABLE_FEDERATED_AVATAR = false
AVATAR_UPLOAD_PATH = integrations/gitea-integration-mysql/data/avatars
REPOSITORY_AVATAR_UPLOAD_PATH = integrations/gitea-integration-mysql/data/repo-avatars
[session] [session]
PROVIDER = file PROVIDER = file
PROVIDER_CONFIG = integrations/gitea-integration-mysql/data/sessions PROVIDER_CONFIG = integrations/gitea-integration-mysql/data/sessions

View File

@@ -5,14 +5,10 @@
package integrations package integrations
import ( import (
"fmt"
"net/http" "net/http"
"strings"
"testing" "testing"
"code.gitea.io/gitea/modules/setting" "code.gitea.io/gitea/modules/setting"
"github.com/stretchr/testify/assert"
"github.com/unknwon/i18n"
) )
func TestSignup(t *testing.T) { func TestSignup(t *testing.T) {
@@ -32,37 +28,3 @@ func TestSignup(t *testing.T) {
req = NewRequest(t, "GET", "/exampleUser") req = NewRequest(t, "GET", "/exampleUser")
MakeRequest(t, req, http.StatusOK) MakeRequest(t, req, http.StatusOK)
} }
func TestSignupEmail(t *testing.T) {
defer prepareTestEnv(t)()
setting.Service.EnableCaptcha = false
tests := []struct {
email string
wantStatus int
wantMsg string
}{
{"exampleUser@example.com\r\n", http.StatusOK, i18n.Tr("en", "form.email_invalid", nil)},
{"exampleUser@example.com\r", http.StatusOK, i18n.Tr("en", "form.email_invalid", nil)},
{"exampleUser@example.com\n", http.StatusOK, i18n.Tr("en", "form.email_invalid", nil)},
{"exampleUser@example.com", http.StatusFound, ""},
}
for i, test := range tests {
req := NewRequestWithValues(t, "POST", "/user/sign_up", map[string]string{
"user_name": fmt.Sprintf("exampleUser%d", i),
"email": test.email,
"password": "examplePassword!1",
"retype": "examplePassword!1",
})
resp := MakeRequest(t, req, test.wantStatus)
if test.wantMsg != "" {
htmlDoc := NewHTMLParser(t, resp.Body)
assert.Equal(t,
test.wantMsg,
strings.TrimSpace(htmlDoc.doc.Find(".ui.message").Text()),
)
}
}
}

View File

@@ -13,7 +13,6 @@ import (
"time" "time"
"code.gitea.io/gitea/modules/base" "code.gitea.io/gitea/modules/base"
"code.gitea.io/gitea/modules/git"
"code.gitea.io/gitea/modules/log" "code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/setting" "code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/timeutil" "code.gitea.io/gitea/modules/timeutil"
@@ -244,7 +243,7 @@ func (a *Action) getCommentLink(e Engine) string {
// GetBranch returns the action's repository branch. // GetBranch returns the action's repository branch.
func (a *Action) GetBranch() string { func (a *Action) GetBranch() string {
return strings.TrimPrefix(a.RefName, git.BranchPrefix) return a.RefName
} }
// GetContent returns the action's content. // GetContent returns the action's content.

View File

@@ -77,7 +77,7 @@ func removeStorageWithNotice(e Engine, bucket storage.ObjectStorage, title, path
if err := bucket.Delete(path); err != nil { if err := bucket.Delete(path); err != nil {
desc := fmt.Sprintf("%s [%s]: %v", title, path, err) desc := fmt.Sprintf("%s [%s]: %v", title, path, err)
log.Warn(title+" [%s]: %v", path, err) log.Warn(title+" [%s]: %v", path, err)
if err = createNotice(e, NoticeRepository, desc); err != nil { if err = createNotice(x, NoticeRepository, desc); err != nil {
log.Error("CreateRepositoryNotice: %v", err) log.Error("CreateRepositoryNotice: %v", err)
} }
} }

View File

@@ -193,21 +193,6 @@ func (err ErrEmailAlreadyUsed) Error() string {
return fmt.Sprintf("e-mail already in use [email: %s]", err.Email) return fmt.Sprintf("e-mail already in use [email: %s]", err.Email)
} }
// ErrEmailInvalid represents an error where the email address does not comply with RFC 5322
type ErrEmailInvalid struct {
Email string
}
// IsErrEmailInvalid checks if an error is an ErrEmailInvalid
func IsErrEmailInvalid(err error) bool {
_, ok := err.(ErrEmailInvalid)
return ok
}
func (err ErrEmailInvalid) Error() string {
return fmt.Sprintf("e-mail invalid [email: %s]", err.Email)
}
// ErrOpenIDAlreadyUsed represents a "OpenIDAlreadyUsed" kind of error. // ErrOpenIDAlreadyUsed represents a "OpenIDAlreadyUsed" kind of error.
type ErrOpenIDAlreadyUsed struct { type ErrOpenIDAlreadyUsed struct {
OpenID string OpenID string
@@ -1019,29 +1004,6 @@ func IsErrWontSign(err error) bool {
return ok return ok
} }
// ErrMigrationNotAllowed explains why a migration from an url is not allowed
type ErrMigrationNotAllowed struct {
Host string
NotResolvedIP bool
PrivateNet string
}
func (e *ErrMigrationNotAllowed) Error() string {
if e.NotResolvedIP {
return fmt.Sprintf("migrate from '%s' is not allowed: unknown hostname", e.Host)
}
if len(e.PrivateNet) != 0 {
return fmt.Sprintf("migrate from '%s' is not allowed: the host resolve to a private ip address '%s'", e.Host, e.PrivateNet)
}
return fmt.Sprintf("migrate from '%s is not allowed'", e.Host)
}
// IsErrMigrationNotAllowed checks if an error is a ErrMigrationNotAllowed
func IsErrMigrationNotAllowed(err error) bool {
_, ok := err.(*ErrMigrationNotAllowed)
return ok
}
// __________ .__ // __________ .__
// \______ \____________ ____ ____ | |__ // \______ \____________ ____ ____ | |__
// | | _/\_ __ \__ \ / \_/ ___\| | \ // | | _/\_ __ \__ \ / \_/ ___\| | \
@@ -2041,7 +2003,7 @@ type ErrNotValidReviewRequest struct {
// IsErrNotValidReviewRequest checks if an error is a ErrNotValidReviewRequest. // IsErrNotValidReviewRequest checks if an error is a ErrNotValidReviewRequest.
func IsErrNotValidReviewRequest(err error) bool { func IsErrNotValidReviewRequest(err error) bool {
_, ok := err.(ErrNotValidReviewRequest) _, ok := err.(ErrReviewNotExist)
return ok return ok
} }

View File

@@ -5,7 +5,7 @@
act_user_id: 2 act_user_id: 2
repo_id: 2 repo_id: 2
is_private: true is_private: true
created_unix: 1603228283 created_unix: 1571686356
- -
id: 2 id: 2

View File

@@ -14,7 +14,6 @@ import (
"code.gitea.io/gitea/modules/base" "code.gitea.io/gitea/modules/base"
"code.gitea.io/gitea/modules/log" "code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/references"
"code.gitea.io/gitea/modules/setting" "code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/structs" "code.gitea.io/gitea/modules/structs"
api "code.gitea.io/gitea/modules/structs" api "code.gitea.io/gitea/modules/structs"
@@ -1492,7 +1491,6 @@ type UserIssueStatsOptions struct {
IsPull bool IsPull bool
IsClosed bool IsClosed bool
IssueIDs []int64 IssueIDs []int64
LabelIDs []int64
} }
// GetUserIssueStats returns issue statistic information for dashboard by given conditions. // GetUserIssueStats returns issue statistic information for dashboard by given conditions.
@@ -1509,38 +1507,29 @@ func GetUserIssueStats(opts UserIssueStatsOptions) (*IssueStats, error) {
cond = cond.And(builder.In("issue.id", opts.IssueIDs)) cond = cond.And(builder.In("issue.id", opts.IssueIDs))
} }
sess := func(cond builder.Cond) *xorm.Session {
s := x.Where(cond)
if len(opts.LabelIDs) > 0 {
s.Join("INNER", "issue_label", "issue_label.issue_id = issue.id").
In("issue_label.label_id", opts.LabelIDs)
}
return s
}
switch opts.FilterMode { switch opts.FilterMode {
case FilterModeAll: case FilterModeAll:
stats.OpenCount, err = sess(cond).And("issue.is_closed = ?", false). stats.OpenCount, err = x.Where(cond).And("issue.is_closed = ?", false).
And(builder.In("issue.repo_id", opts.UserRepoIDs)). And(builder.In("issue.repo_id", opts.UserRepoIDs)).
Count(new(Issue)) Count(new(Issue))
if err != nil { if err != nil {
return nil, err return nil, err
} }
stats.ClosedCount, err = sess(cond).And("issue.is_closed = ?", true). stats.ClosedCount, err = x.Where(cond).And("issue.is_closed = ?", true).
And(builder.In("issue.repo_id", opts.UserRepoIDs)). And(builder.In("issue.repo_id", opts.UserRepoIDs)).
Count(new(Issue)) Count(new(Issue))
if err != nil { if err != nil {
return nil, err return nil, err
} }
case FilterModeAssign: case FilterModeAssign:
stats.OpenCount, err = sess(cond).And("issue.is_closed = ?", false). stats.OpenCount, err = x.Where(cond).And("issue.is_closed = ?", false).
Join("INNER", "issue_assignees", "issue.id = issue_assignees.issue_id"). Join("INNER", "issue_assignees", "issue.id = issue_assignees.issue_id").
And("issue_assignees.assignee_id = ?", opts.UserID). And("issue_assignees.assignee_id = ?", opts.UserID).
Count(new(Issue)) Count(new(Issue))
if err != nil { if err != nil {
return nil, err return nil, err
} }
stats.ClosedCount, err = sess(cond).And("issue.is_closed = ?", true). stats.ClosedCount, err = x.Where(cond).And("issue.is_closed = ?", true).
Join("INNER", "issue_assignees", "issue.id = issue_assignees.issue_id"). Join("INNER", "issue_assignees", "issue.id = issue_assignees.issue_id").
And("issue_assignees.assignee_id = ?", opts.UserID). And("issue_assignees.assignee_id = ?", opts.UserID).
Count(new(Issue)) Count(new(Issue))
@@ -1548,27 +1537,27 @@ func GetUserIssueStats(opts UserIssueStatsOptions) (*IssueStats, error) {
return nil, err return nil, err
} }
case FilterModeCreate: case FilterModeCreate:
stats.OpenCount, err = sess(cond).And("issue.is_closed = ?", false). stats.OpenCount, err = x.Where(cond).And("issue.is_closed = ?", false).
And("issue.poster_id = ?", opts.UserID). And("issue.poster_id = ?", opts.UserID).
Count(new(Issue)) Count(new(Issue))
if err != nil { if err != nil {
return nil, err return nil, err
} }
stats.ClosedCount, err = sess(cond).And("issue.is_closed = ?", true). stats.ClosedCount, err = x.Where(cond).And("issue.is_closed = ?", true).
And("issue.poster_id = ?", opts.UserID). And("issue.poster_id = ?", opts.UserID).
Count(new(Issue)) Count(new(Issue))
if err != nil { if err != nil {
return nil, err return nil, err
} }
case FilterModeMention: case FilterModeMention:
stats.OpenCount, err = sess(cond).And("issue.is_closed = ?", false). stats.OpenCount, err = x.Where(cond).And("issue.is_closed = ?", false).
Join("INNER", "issue_user", "issue.id = issue_user.issue_id and issue_user.is_mentioned = ?", true). Join("INNER", "issue_user", "issue.id = issue_user.issue_id and issue_user.is_mentioned = ?", true).
And("issue_user.uid = ?", opts.UserID). And("issue_user.uid = ?", opts.UserID).
Count(new(Issue)) Count(new(Issue))
if err != nil { if err != nil {
return nil, err return nil, err
} }
stats.ClosedCount, err = sess(cond).And("issue.is_closed = ?", true). stats.ClosedCount, err = x.Where(cond).And("issue.is_closed = ?", true).
Join("INNER", "issue_user", "issue.id = issue_user.issue_id and issue_user.is_mentioned = ?", true). Join("INNER", "issue_user", "issue.id = issue_user.issue_id and issue_user.is_mentioned = ?", true).
And("issue_user.uid = ?", opts.UserID). And("issue_user.uid = ?", opts.UserID).
Count(new(Issue)) Count(new(Issue))
@@ -1578,7 +1567,7 @@ func GetUserIssueStats(opts UserIssueStatsOptions) (*IssueStats, error) {
} }
cond = cond.And(builder.Eq{"issue.is_closed": opts.IsClosed}) cond = cond.And(builder.Eq{"issue.is_closed": opts.IsClosed})
stats.AssignCount, err = sess(cond). stats.AssignCount, err = x.Where(cond).
Join("INNER", "issue_assignees", "issue.id = issue_assignees.issue_id"). Join("INNER", "issue_assignees", "issue.id = issue_assignees.issue_id").
And("issue_assignees.assignee_id = ?", opts.UserID). And("issue_assignees.assignee_id = ?", opts.UserID).
Count(new(Issue)) Count(new(Issue))
@@ -1586,14 +1575,14 @@ func GetUserIssueStats(opts UserIssueStatsOptions) (*IssueStats, error) {
return nil, err return nil, err
} }
stats.CreateCount, err = sess(cond). stats.CreateCount, err = x.Where(cond).
And("poster_id = ?", opts.UserID). And("poster_id = ?", opts.UserID).
Count(new(Issue)) Count(new(Issue))
if err != nil { if err != nil {
return nil, err return nil, err
} }
stats.MentionCount, err = sess(cond). stats.MentionCount, err = x.Where(cond).
Join("INNER", "issue_user", "issue.id = issue_user.issue_id and issue_user.is_mentioned = ?", true). Join("INNER", "issue_user", "issue.id = issue_user.issue_id and issue_user.is_mentioned = ?", true).
And("issue_user.uid = ?", opts.UserID). And("issue_user.uid = ?", opts.UserID).
Count(new(Issue)) Count(new(Issue))
@@ -1601,7 +1590,7 @@ func GetUserIssueStats(opts UserIssueStatsOptions) (*IssueStats, error) {
return nil, err return nil, err
} }
stats.YourRepositoriesCount, err = sess(cond). stats.YourRepositoriesCount, err = x.Where(cond).
And(builder.In("issue.repo_id", opts.UserRepoIDs)). And(builder.In("issue.repo_id", opts.UserRepoIDs)).
Count(new(Issue)) Count(new(Issue))
if err != nil { if err != nil {
@@ -1840,19 +1829,6 @@ func (issue *Issue) updateClosedNum(e Engine) (err error) {
return return
} }
// FindAndUpdateIssueMentions finds users mentioned in the given content string, and saves them in the database.
func (issue *Issue) FindAndUpdateIssueMentions(ctx DBContext, doer *User, content string) (mentions []*User, err error) {
rawMentions := references.FindAllMentionsMarkdown(content)
mentions, err = issue.ResolveMentionsByVisibility(ctx, doer, rawMentions)
if err != nil {
return nil, fmt.Errorf("UpdateIssueMentions [%d]: %v", issue.ID, err)
}
if err = UpdateIssueMentions(ctx, issue.ID, mentions); err != nil {
return nil, fmt.Errorf("UpdateIssueMentions [%d]: %v", issue.ID, err)
}
return
}
// ResolveMentionsByVisibility returns the users mentioned in an issue, removing those that // ResolveMentionsByVisibility returns the users mentioned in an issue, removing those that
// don't have access to reading it. Teams are expanded into their users, but organizations are ignored. // don't have access to reading it. Teams are expanded into their users, but organizations are ignored.
func (issue *Issue) ResolveMentionsByVisibility(ctx DBContext, doer *User, mentions []string) (users []*User, err error) { func (issue *Issue) ResolveMentionsByVisibility(ctx DBContext, doer *User, mentions []string) (users []*User, err error) {


@@ -82,7 +82,7 @@ func isUserAssignedToIssue(e Engine, issue *Issue, user *User) (isAssigned bool,
} }
// ClearAssigneeByUserID deletes all assignments of an user // ClearAssigneeByUserID deletes all assignments of an user
func clearAssigneeByUserID(sess Engine, userID int64) (err error) { func clearAssigneeByUserID(sess *xorm.Session, userID int64) (err error) {
_, err = sess.Delete(&IssueAssignees{AssigneeID: userID}) _, err = sess.Delete(&IssueAssignees{AssigneeID: userID})
return return
} }


@@ -725,7 +725,6 @@ func createComment(e *xorm.Session, opts *CreateCommentOptions) (_ *Comment, err
RefAction: opts.RefAction, RefAction: opts.RefAction,
RefIsPull: opts.RefIsPull, RefIsPull: opts.RefIsPull,
IsForcePush: opts.IsForcePush, IsForcePush: opts.IsForcePush,
Invalidated: opts.Invalidated,
} }
if _, err = e.Insert(comment); err != nil { if _, err = e.Insert(comment); err != nil {
return nil, err return nil, err
@@ -892,7 +891,6 @@ type CreateCommentOptions struct {
RefAction references.XRefAction RefAction references.XRefAction
RefIsPull bool RefIsPull bool
IsForcePush bool IsForcePush bool
Invalidated bool
} }
// CreateComment creates comment of issue or commit. // CreateComment creates comment of issue or commit.
@@ -968,8 +966,6 @@ type FindCommentsOptions struct {
ReviewID int64 ReviewID int64
Since int64 Since int64
Before int64 Before int64
Line int64
TreePath string
Type CommentType Type CommentType
} }
@@ -993,12 +989,6 @@ func (opts *FindCommentsOptions) toConds() builder.Cond {
if opts.Type != CommentTypeUnknown { if opts.Type != CommentTypeUnknown {
cond = cond.And(builder.Eq{"comment.type": opts.Type}) cond = cond.And(builder.Eq{"comment.type": opts.Type})
} }
if opts.Line > 0 {
cond = cond.And(builder.Eq{"comment.line": opts.Line})
}
if len(opts.TreePath) > 0 {
cond = cond.And(builder.Eq{"comment.tree_path": opts.TreePath})
}
return cond return cond
} }
@@ -1013,8 +1003,6 @@ func findComments(e Engine, opts FindCommentsOptions) ([]*Comment, error) {
sess = opts.setSessionPagination(sess) sess = opts.setSessionPagination(sess)
} }
// WARNING: If you change this order you will need to fix createCodeComment
return comments, sess. return comments, sess.
Asc("comment.created_unix"). Asc("comment.created_unix").
Asc("comment.id"). Asc("comment.id").
@@ -1077,10 +1065,6 @@ func DeleteComment(comment *Comment, doer *User) error {
return err return err
} }
if err := deleteReaction(sess, &ReactionOptions{Comment: comment}); err != nil {
return err
}
return sess.Commit() return sess.Commit()
} }
@@ -1140,10 +1124,6 @@ func fetchCodeCommentsByReview(e Engine, issue *Issue, currentUser *User, review
return nil, err return nil, err
} }
if err := comment.LoadReactions(issue.Repo); err != nil {
return nil, err
}
if re, ok := reviews[comment.ReviewID]; ok && re != nil { if re, ok := reviews[comment.ReviewID]; ok && re != nil {
// If the review is pending only the author can see the comments (except the review is set) // If the review is pending only the author can see the comments (except the review is set)
if review.ID == 0 { if review.ID == 0 {


@@ -47,7 +47,7 @@ type Label struct {
func GetLabelTemplateFile(name string) ([][3]string, error) { func GetLabelTemplateFile(name string) ([][3]string, error) {
data, err := GetRepoInitFile("label", name) data, err := GetRepoInitFile("label", name)
if err != nil { if err != nil {
return nil, ErrIssueLabelTemplateLoad{name, fmt.Errorf("GetRepoInitFile: %v", err)} return nil, fmt.Errorf("GetRepoInitFile: %v", err)
} }
lines := strings.Split(string(data), "\n") lines := strings.Split(string(data), "\n")
@@ -62,7 +62,7 @@ func GetLabelTemplateFile(name string) ([][3]string, error) {
fields := strings.SplitN(parts[0], " ", 2) fields := strings.SplitN(parts[0], " ", 2)
if len(fields) != 2 { if len(fields) != 2 {
return nil, ErrIssueLabelTemplateLoad{name, fmt.Errorf("line is malformed: %s", line)} return nil, fmt.Errorf("line is malformed: %s", line)
} }
color := strings.Trim(fields[0], " ") color := strings.Trim(fields[0], " ")
@@ -70,7 +70,7 @@ func GetLabelTemplateFile(name string) ([][3]string, error) {
color = "#" + color color = "#" + color
} }
if !LabelColorPattern.MatchString(color) { if !LabelColorPattern.MatchString(color) {
return nil, ErrIssueLabelTemplateLoad{name, fmt.Errorf("bad HTML color code in line: %s", line)} return nil, fmt.Errorf("bad HTML color code in line: %s", line)
} }
var description string var description string
@@ -167,7 +167,7 @@ func (label *Label) ForegroundColor() template.CSS {
func loadLabels(labelTemplate string) ([]string, error) { func loadLabels(labelTemplate string) ([]string, error) {
list, err := GetLabelTemplateFile(labelTemplate) list, err := GetLabelTemplateFile(labelTemplate)
if err != nil { if err != nil {
return nil, err return nil, ErrIssueLabelTemplateLoad{labelTemplate, err}
} }
labels := make([]string, len(list)) labels := make([]string, len(list))
@@ -186,7 +186,7 @@ func LoadLabelsFormatted(labelTemplate string) (string, error) {
func initializeLabels(e Engine, id int64, labelTemplate string, isOrg bool) error { func initializeLabels(e Engine, id int64, labelTemplate string, isOrg bool) error {
list, err := GetLabelTemplateFile(labelTemplate) list, err := GetLabelTemplateFile(labelTemplate)
if err != nil { if err != nil {
return err return ErrIssueLabelTemplateLoad{labelTemplate, err}
} }
labels := make([]*Label, len(list)) labels := make([]*Label, len(list))


@@ -178,15 +178,11 @@ func CreateCommentReaction(doer *User, issue *Issue, comment *Comment, content s
}) })
} }
func deleteReaction(e Engine, opts *ReactionOptions) error { func deleteReaction(e *xorm.Session, opts *ReactionOptions) error {
reaction := &Reaction{ reaction := &Reaction{
Type: opts.Type, Type: opts.Type,
} UserID: opts.Doer.ID,
if opts.Doer != nil { IssueID: opts.Issue.ID,
reaction.UserID = opts.Doer.ID
}
if opts.Issue != nil {
reaction.IssueID = opts.Issue.ID
} }
if opts.Comment != nil { if opts.Comment != nil {
reaction.CommentID = opts.Comment.ID reaction.CommentID = opts.Comment.ID


@@ -16,7 +16,6 @@ import (
"code.gitea.io/gitea/modules/setting" "code.gitea.io/gitea/modules/setting"
"xorm.io/xorm" "xorm.io/xorm"
"xorm.io/xorm/names"
) )
const minDBVersion = 70 // Gitea 1.5.3 const minDBVersion = 70 // Gitea 1.5.3
@@ -297,8 +296,6 @@ func EnsureUpToDate(x *xorm.Engine) error {
// Migrate database to current version // Migrate database to current version
func Migrate(x *xorm.Engine) error { func Migrate(x *xorm.Engine) error {
// Set a new clean the default mapper to GonicMapper as that is the default for Gitea.
x.SetMapper(names.GonicMapper{})
if err := x.Sync(new(Version)); err != nil { if err := x.Sync(new(Version)); err != nil {
return fmt.Errorf("sync: %v", err) return fmt.Errorf("sync: %v", err)
} }
@@ -337,8 +334,6 @@ Please try upgrading to a lower version first (suggested v1.6.4), then upgrade t
// Migrate // Migrate
for i, m := range migrations[v-minDBVersion:] { for i, m := range migrations[v-minDBVersion:] {
log.Info("Migration[%d]: %s", v+int64(i), m.Description()) log.Info("Migration[%d]: %s", v+int64(i), m.Description())
// Reset the mapper between each migration - migrations are not supposed to depend on each other
x.SetMapper(names.GonicMapper{})
if err = m.Migrate(x); err != nil { if err = m.Migrate(x); err != nil {
return fmt.Errorf("do migrate: %v", err) return fmt.Errorf("do migrate: %v", err)
} }


@@ -12,7 +12,7 @@ import (
func addKeepActivityPrivateUserColumn(x *xorm.Engine) error { func addKeepActivityPrivateUserColumn(x *xorm.Engine) error {
type User struct { type User struct {
KeepActivityPrivate bool `xorm:"NOT NULL DEFAULT false"` KeepActivityPrivate bool
} }
if err := x.Sync2(new(User)); err != nil { if err := x.Sync2(new(User)); err != nil {


@@ -15,14 +15,12 @@ import (
"code.gitea.io/gitea/modules/setting" "code.gitea.io/gitea/modules/setting"
"xorm.io/builder" // Needed for the MySQL driver
_ "github.com/go-sql-driver/mysql"
"xorm.io/xorm" "xorm.io/xorm"
"xorm.io/xorm/names" "xorm.io/xorm/names"
"xorm.io/xorm/schemas" "xorm.io/xorm/schemas"
// Needed for the MySQL driver
_ "github.com/go-sql-driver/mysql"
// Needed for the Postgresql driver // Needed for the Postgresql driver
_ "github.com/lib/pq" _ "github.com/lib/pq"
@@ -147,16 +145,7 @@ func getEngine() (*xorm.Engine, error) {
return nil, err return nil, err
} }
var engine *xorm.Engine engine, err := xorm.NewEngine(setting.Database.Type, connStr)
if setting.Database.UsePostgreSQL && len(setting.Database.Schema) > 0 {
// OK whilst we sort out our schema issues - create a schema aware postgres
registerPostgresSchemaDriver()
engine, err = xorm.NewEngine("postgresschema", connStr)
} else {
engine, err = xorm.NewEngine(setting.Database.Type, connStr)
}
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -166,6 +155,16 @@ func getEngine() (*xorm.Engine, error) {
engine.Dialect().SetParams(map[string]string{"DEFAULT_VARCHAR": "nvarchar"}) engine.Dialect().SetParams(map[string]string{"DEFAULT_VARCHAR": "nvarchar"})
} }
engine.SetSchema(setting.Database.Schema) engine.SetSchema(setting.Database.Schema)
if setting.Database.UsePostgreSQL && len(setting.Database.Schema) > 0 {
// Add the schema to the search path
if _, err := engine.Exec(`SELECT set_config(
'search_path',
? || ',' || current_setting('search_path'),
false)`,
setting.Database.Schema); err != nil {
return nil, err
}
}
return engine, nil return engine, nil
} }
@@ -314,13 +313,6 @@ func DumpDatabase(filePath string, dbType string) error {
tbs = append(tbs, t) tbs = append(tbs, t)
} }
// temporary fix for v1.13.x (https://github.com/go-gitea/gitea/issues/14069)
if _, err := x.Where(builder.IsNull{"keep_activity_private"}).
Cols("keep_activity_private").
Update(User{KeepActivityPrivate: false}); err != nil {
return err
}
type Version struct { type Version struct {
ID int64 `xorm:"pk autoincr"` ID int64 `xorm:"pk autoincr"`
Version int64 Version int64


@@ -119,18 +119,8 @@ func InitOAuth2() error {
if err := oauth2.Init(x); err != nil { if err := oauth2.Init(x); err != nil {
return err return err
} }
return initOAuth2LoginSources()
}
// ResetOAuth2 clears existing OAuth2 providers and loads them from DB
func ResetOAuth2() error {
oauth2.ClearProviders()
return initOAuth2LoginSources()
}
// initOAuth2LoginSources is used to load and register all active OAuth2 providers
func initOAuth2LoginSources() error {
loginSources, _ := GetActiveOAuth2ProviderLoginSources() loginSources, _ := GetActiveOAuth2ProviderLoginSources()
for _, source := range loginSources { for _, source := range loginSources {
oAuth2Config := source.OAuth2() oAuth2Config := source.OAuth2()
err := oauth2.RegisterProvider(source.Name, oAuth2Config.Provider, oAuth2Config.ClientID, oAuth2Config.ClientSecret, oAuth2Config.OpenIDConnectAutoDiscoveryURL, oAuth2Config.CustomURLMapping) err := oauth2.RegisterProvider(source.Name, oAuth2Config.Provider, oAuth2Config.ClientID, oAuth2Config.ClientSecret, oAuth2Config.OpenIDConnectAutoDiscoveryURL, oAuth2Config.CustomURLMapping)


@@ -54,11 +54,7 @@ func (r *Release) loadAttributes(e Engine) error {
if r.Publisher == nil { if r.Publisher == nil {
r.Publisher, err = getUserByID(e, r.PublisherID) r.Publisher, err = getUserByID(e, r.PublisherID)
if err != nil { if err != nil {
if IsErrUserNotExist(err) { return err
r.Publisher = NewGhostUser()
} else {
return err
}
} }
} }
return getReleaseAttachments(e, r) return getReleaseAttachments(e, r)


@@ -426,7 +426,6 @@ func (repo *Repository) innerAPIFormat(e Engine, mode AccessMode, isParent bool)
HTMLURL: repo.HTMLURL(), HTMLURL: repo.HTMLURL(),
SSHURL: cloneLink.SSH, SSHURL: cloneLink.SSH,
CloneURL: cloneLink.HTTPS, CloneURL: cloneLink.HTTPS,
OriginalURL: repo.SanitizedOriginalURL(),
Website: repo.Website, Website: repo.Website,
Stars: repo.NumStars, Stars: repo.NumStars,
Forks: repo.NumForks, Forks: repo.NumForks,
@@ -1600,27 +1599,26 @@ func UpdateRepositoryUnits(repo *Repository, units []RepoUnit, deleteUnitTypes [
} }
// DeleteRepository deletes a repository for a user or organization. // DeleteRepository deletes a repository for a user or organization.
// make sure if you call this func to close open sessions (sqlite will otherwise get a deadlock)
func DeleteRepository(doer *User, uid, repoID int64) error { func DeleteRepository(doer *User, uid, repoID int64) error {
sess := x.NewSession()
defer sess.Close()
if err := sess.Begin(); err != nil {
return err
}
// In case is a organization. // In case is a organization.
org, err := getUserByID(sess, uid) org, err := GetUserByID(uid)
if err != nil { if err != nil {
return err return err
} }
if org.IsOrganization() { if org.IsOrganization() {
if err = org.getTeams(sess); err != nil { if err = org.GetTeams(&SearchTeamOptions{}); err != nil {
return err return err
} }
} }
repo := &Repository{OwnerID: uid} sess := x.NewSession()
has, err := sess.ID(repoID).Get(repo) defer sess.Close()
if err = sess.Begin(); err != nil {
return err
}
repo := &Repository{ID: repoID, OwnerID: uid}
has, err := sess.Get(repo)
if err != nil { if err != nil {
return err return err
} else if !has { } else if !has {
@@ -1769,7 +1767,14 @@ func DeleteRepository(doer *User, uid, repoID int64) error {
} }
if err = sess.Commit(); err != nil { if err = sess.Commit(); err != nil {
return err sess.Close()
if len(deployKeys) > 0 {
// We need to rewrite the public keys because the commit failed
if err2 := RewriteAllPublicKeys(); err2 != nil {
return fmt.Errorf("Commit: %v SSH Keys: %v", err, err2)
}
}
return fmt.Errorf("Commit: %v", err)
} }
sess.Close() sess.Close()


@@ -271,27 +271,6 @@ func getUserRepoPermission(e Engine, repo *Repository, user *User) (perm Permiss
return return
} }
// IsUserRealRepoAdmin check if this user is real repo admin
func IsUserRealRepoAdmin(repo *Repository, user *User) (bool, error) {
if repo.OwnerID == user.ID {
return true, nil
}
sess := x.NewSession()
defer sess.Close()
if err := repo.getOwner(sess); err != nil {
return false, err
}
accessMode, err := accessLevel(sess, user, repo)
if err != nil {
return false, err
}
return accessMode >= AccessModeAdmin, nil
}
// IsUserRepoAdmin return true if user has admin right of a repo // IsUserRepoAdmin return true if user has admin right of a repo
func IsUserRepoAdmin(repo *Repository, user *User) (bool, error) { func IsUserRepoAdmin(repo *Repository, user *User) (bool, error) {
return isUserRepoAdmin(x, repo, user) return isUserRepoAdmin(x, repo, user)


@@ -1,75 +0,0 @@
// Copyright 2020 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package models
import (
"database/sql"
"database/sql/driver"
"sync"
"code.gitea.io/gitea/modules/setting"
"github.com/lib/pq"
"xorm.io/xorm/dialects"
)
var registerOnce sync.Once
func registerPostgresSchemaDriver() {
registerOnce.Do(func() {
sql.Register("postgresschema", &postgresSchemaDriver{})
dialects.RegisterDriver("postgresschema", dialects.QueryDriver("postgres"))
})
}
type postgresSchemaDriver struct {
pq.Driver
}
// Open opens a new connection to the database. name is a connection string.
// This function opens the postgres connection in the default manner but immediately
// runs set_config to set the search_path appropriately
func (d *postgresSchemaDriver) Open(name string) (driver.Conn, error) {
conn, err := d.Driver.Open(name)
if err != nil {
return conn, err
}
schemaValue, _ := driver.String.ConvertValue(setting.Database.Schema)
// golangci lint is incorrect here - there is no benefit to using driver.ExecerContext here
// and in any case pq does not implement it
if execer, ok := conn.(driver.Execer); ok { //nolint
_, err := execer.Exec(`SELECT set_config(
'search_path',
$1 || ',' || current_setting('search_path'),
false)`, []driver.Value{schemaValue}) //nolint
if err != nil {
_ = conn.Close()
return nil, err
}
return conn, nil
}
stmt, err := conn.Prepare(`SELECT set_config(
'search_path',
$1 || ',' || current_setting('search_path'),
false)`)
if err != nil {
_ = conn.Close()
return nil, err
}
defer stmt.Close()
// driver.String.ConvertValue will never return err for string
// golangci lint is incorrect here - there is no benefit to using stmt.ExecWithContext here
_, err = stmt.Exec([]driver.Value{schemaValue}) //nolint
if err != nil {
_ = conn.Close()
return nil, err
}
return conn, nil
}


@@ -147,27 +147,6 @@ func GetMigratingTask(repoID int64) (*Task, error) {
return &task, nil return &task, nil
} }
// GetMigratingTaskByID returns the migrating task by repo's id
func GetMigratingTaskByID(id, doerID int64) (*Task, *migration.MigrateOptions, error) {
var task = Task{
ID: id,
DoerID: doerID,
Type: structs.TaskTypeMigrateRepo,
}
has, err := x.Get(&task)
if err != nil {
return nil, nil, err
} else if !has {
return nil, nil, ErrTaskDoesNotExist{id, 0, task.Type}
}
var opts migration.MigrateOptions
if err := json.Unmarshal([]byte(task.PayloadContent), &opts); err != nil {
return nil, nil, err
}
return &task, &opts, nil
}
// FindTaskOptions find all tasks // FindTaskOptions find all tasks
type FindTaskOptions struct { type FindTaskOptions struct {
Status int Status int


@@ -197,13 +197,10 @@ func FindTopics(opts *FindTopicOptions) (topics []*Topic, err error) {
// GetRepoTopicByName retrives topic from name for a repo if it exist // GetRepoTopicByName retrives topic from name for a repo if it exist
func GetRepoTopicByName(repoID int64, topicName string) (*Topic, error) { func GetRepoTopicByName(repoID int64, topicName string) (*Topic, error) {
return getRepoTopicByName(x, repoID, topicName)
}
func getRepoTopicByName(e Engine, repoID int64, topicName string) (*Topic, error) {
var cond = builder.NewCond() var cond = builder.NewCond()
var topic Topic var topic Topic
cond = cond.And(builder.Eq{"repo_topic.repo_id": repoID}).And(builder.Eq{"topic.name": topicName}) cond = cond.And(builder.Eq{"repo_topic.repo_id": repoID}).And(builder.Eq{"topic.name": topicName})
sess := e.Table("topic").Where(cond) sess := x.Table("topic").Where(cond)
sess.Join("INNER", "repo_topic", "repo_topic.topic_id = topic.id") sess.Join("INNER", "repo_topic", "repo_topic.topic_id = topic.id")
has, err := sess.Get(&topic) has, err := sess.Get(&topic)
if has { if has {
@@ -214,13 +211,7 @@ func getRepoTopicByName(e Engine, repoID int64, topicName string) (*Topic, error
// AddTopic adds a topic name to a repository (if it does not already have it) // AddTopic adds a topic name to a repository (if it does not already have it)
func AddTopic(repoID int64, topicName string) (*Topic, error) { func AddTopic(repoID int64, topicName string) (*Topic, error) {
sess := x.NewSession() topic, err := GetRepoTopicByName(repoID, topicName)
defer sess.Close()
if err := sess.Begin(); err != nil {
return nil, err
}
topic, err := getRepoTopicByName(sess, repoID, topicName)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -229,25 +220,7 @@ func AddTopic(repoID int64, topicName string) (*Topic, error) {
return topic, nil return topic, nil
} }
topic, err = addTopicByNameToRepo(sess, repoID, topicName) return addTopicByNameToRepo(x, repoID, topicName)
if err != nil {
return nil, err
}
topicNames := make([]string, 0, 25)
if err := sess.Select("name").Table("topic").
Join("INNER", "repo_topic", "repo_topic.topic_id = topic.id").
Where("repo_topic.repo_id = ?", repoID).Desc("topic.repo_count").Find(&topicNames); err != nil {
return nil, err
}
if _, err := sess.ID(repoID).Cols("topics").Update(&Repository{
Topics: topicNames,
}); err != nil {
return nil, err
}
return topic, sess.Commit()
} }
// DeleteTopic removes a topic name from a repository (if it has it) // DeleteTopic removes a topic name from a repository (if it has it)


@@ -40,6 +40,7 @@ import (
"golang.org/x/crypto/scrypt" "golang.org/x/crypto/scrypt"
"golang.org/x/crypto/ssh" "golang.org/x/crypto/ssh"
"xorm.io/builder" "xorm.io/builder"
"xorm.io/xorm"
) )
// UserType defines the user type // UserType defines the user type
@@ -190,6 +191,9 @@ func (u *User) BeforeUpdate() {
if len(u.AvatarEmail) == 0 { if len(u.AvatarEmail) == 0 {
u.AvatarEmail = u.Email u.AvatarEmail = u.Email
} }
if len(u.AvatarEmail) > 0 && u.Avatar == "" {
u.Avatar = base.HashEmail(u.AvatarEmail)
}
} }
u.LowerName = strings.ToLower(u.Name) u.LowerName = strings.ToLower(u.Name)
@@ -550,7 +554,6 @@ func (u *User) GetOwnedOrganizations() (err error) {
} }
// GetOrganizations returns paginated organizations that user belongs to. // GetOrganizations returns paginated organizations that user belongs to.
// TODO: does not respect All and show orgs you privately participate
func (u *User) GetOrganizations(opts *SearchOrganizationsOptions) error { func (u *User) GetOrganizations(opts *SearchOrganizationsOptions) error {
sess := x.NewSession() sess := x.NewSession()
defer sess.Close() defer sess.Close()
@@ -821,10 +824,6 @@ func CreateUser(u *User) (err error) {
return ErrEmailAlreadyUsed{u.Email} return ErrEmailAlreadyUsed{u.Email}
} }
if err = ValidateEmail(u.Email); err != nil {
return err
}
isExist, err = isEmailUsed(sess, u.Email) isExist, err = isEmailUsed(sess, u.Email)
if err != nil { if err != nil {
return err return err
@@ -836,6 +835,7 @@ func CreateUser(u *User) (err error) {
u.LowerName = strings.ToLower(u.Name) u.LowerName = strings.ToLower(u.Name)
u.AvatarEmail = u.Email u.AvatarEmail = u.Email
u.Avatar = base.HashEmail(u.AvatarEmail)
if u.Rands, err = GetUserSalt(); err != nil { if u.Rands, err = GetUserSalt(); err != nil {
return err return err
} }
@@ -922,7 +922,6 @@ func VerifyActiveEmailCode(code, email string) *EmailAddress {
// ChangeUserName changes all corresponding setting from old user name to new one. // ChangeUserName changes all corresponding setting from old user name to new one.
func ChangeUserName(u *User, newUserName string) (err error) { func ChangeUserName(u *User, newUserName string) (err error) {
oldUserName := u.Name
if err = IsUsableUsername(newUserName); err != nil { if err = IsUsableUsername(newUserName); err != nil {
return err return err
} }
@@ -940,24 +939,16 @@ func ChangeUserName(u *User, newUserName string) (err error) {
return err return err
} }
if _, err = sess.Exec("UPDATE `repository` SET owner_name=? WHERE owner_name=?", newUserName, oldUserName); err != nil { if _, err = sess.Exec("UPDATE `repository` SET owner_name=? WHERE owner_name=?", newUserName, u.Name); err != nil {
return fmt.Errorf("Change repo owner name: %v", err) return fmt.Errorf("Change repo owner name: %v", err)
} }
// Do not fail if directory does not exist // Do not fail if directory does not exist
if err = os.Rename(UserPath(oldUserName), UserPath(newUserName)); err != nil && !os.IsNotExist(err) { if err = os.Rename(UserPath(u.Name), UserPath(newUserName)); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("Rename user directory: %v", err) return fmt.Errorf("Rename user directory: %v", err)
} }
if err = sess.Commit(); err != nil { return sess.Commit()
if err2 := os.Rename(UserPath(newUserName), UserPath(oldUserName)); err2 != nil && !os.IsNotExist(err2) {
log.Critical("Unable to rollback directory change during failed username change from: %s to: %s. DB Error: %v. Filesystem Error: %v", oldUserName, newUserName, err, err2)
return fmt.Errorf("failed to rollback directory change during failed username change from: %s to: %s. DB Error: %w. Filesystem Error: %v", oldUserName, newUserName, err, err2)
}
return err
}
return nil
} }
// checkDupEmail checks whether there are the same email with the user // checkDupEmail checks whether there are the same email with the user
@@ -976,12 +967,8 @@ func checkDupEmail(e Engine, u *User) error {
return nil return nil
} }
func updateUser(e Engine, u *User) (err error) { func updateUser(e Engine, u *User) error {
u.Email = strings.ToLower(u.Email) _, err := e.ID(u.ID).AllCols().Update(u)
if err = ValidateEmail(u.Email); err != nil {
return err
}
_, err = e.ID(u.ID).AllCols().Update(u)
return err return err
} }
@@ -1001,21 +988,13 @@ func updateUserCols(e Engine, u *User, cols ...string) error {
} }
// UpdateUserSetting updates user's settings. // UpdateUserSetting updates user's settings.
func UpdateUserSetting(u *User) (err error) { func UpdateUserSetting(u *User) error {
sess := x.NewSession()
defer sess.Close()
if err = sess.Begin(); err != nil {
return err
}
if !u.IsOrganization() { if !u.IsOrganization() {
if err = checkDupEmail(sess, u); err != nil { if err := checkDupEmail(x, u); err != nil {
return err return err
} }
} }
if err = updateUser(sess, u); err != nil { return updateUser(x, u)
return err
}
return sess.Commit()
} }
// deleteBeans deletes all given beans, beans should contain delete conditions. // deleteBeans deletes all given beans, beans should contain delete conditions.
@@ -1028,7 +1007,8 @@ func deleteBeans(e Engine, beans ...interface{}) (err error) {
return nil return nil
} }
func deleteUser(e Engine, u *User) error { // FIXME: need some kind of mechanism to record failure. HINT: system notice
func deleteUser(e *xorm.Session, u *User) error {
// Note: A user owns any repository or belongs to any organization // Note: A user owns any repository or belongs to any organization
// cannot perform delete operation. // cannot perform delete operation.
@@ -1142,21 +1122,18 @@ func deleteUser(e Engine, u *User) error {
return fmt.Errorf("Delete: %v", err) return fmt.Errorf("Delete: %v", err)
} }
// FIXME: system notice
// Note: There are something just cannot be roll back, // Note: There are something just cannot be roll back,
// so just keep error logs of those operations. // so just keep error logs of those operations.
path := UserPath(u.Name) path := UserPath(u.Name)
if err = util.RemoveAll(path); err != nil { if err := util.RemoveAll(path); err != nil {
err = fmt.Errorf("Failed to RemoveAll %s: %v", path, err) return fmt.Errorf("Failed to RemoveAll %s: %v", path, err)
_ = createNotice(e, NoticeTask, fmt.Sprintf("delete user '%s': %v", u.Name, err))
return err
} }
if len(u.Avatar) > 0 { if len(u.Avatar) > 0 {
avatarPath := u.CustomAvatarRelativePath() avatarPath := u.CustomAvatarRelativePath()
if err = storage.Avatars.Delete(avatarPath); err != nil { if err := storage.Avatars.Delete(avatarPath); err != nil {
err = fmt.Errorf("Failed to remove %s: %v", avatarPath, err) return fmt.Errorf("Failed to remove %s: %v", avatarPath, err)
_ = createNotice(e, NoticeTask, fmt.Sprintf("delete user '%s': %v", u.Name, err))
return err
} }
} }


@@ -39,10 +39,12 @@ func (u *User) generateRandomAvatar(e Engine) error {
if err != nil { if err != nil {
return fmt.Errorf("RandomImage: %v", err) return fmt.Errorf("RandomImage: %v", err)
} }
// NOTICE for random avatar, it still uses id as avatar name, but custom avatar use md5
// since random image is not a user's photo, there is no security for enumable
if u.Avatar == "" {
u.Avatar = fmt.Sprintf("%d", u.ID)
}
u.Avatar = base.HashEmail(seed)
// Don't share the images so that we can delete them easily
if err := storage.SaveFrom(storage.Avatars, u.CustomAvatarRelativePath(), func(w io.Writer) error { if err := storage.SaveFrom(storage.Avatars, u.CustomAvatarRelativePath(), func(w io.Writer) error {
if err := png.Encode(w, img); err != nil { if err := png.Encode(w, img); err != nil {
log.Error("Encode: %v", err) log.Error("Encode: %v", err)
@@ -132,7 +134,7 @@ func (u *User) UploadAvatar(data []byte) error {
// Otherwise, if any of the users delete his avatar // Otherwise, if any of the users delete his avatar
// Other users will lose their avatars too. // Other users will lose their avatars too.
u.Avatar = fmt.Sprintf("%x", md5.Sum([]byte(fmt.Sprintf("%d-%x", u.ID, md5.Sum(data))))) u.Avatar = fmt.Sprintf("%x", md5.Sum([]byte(fmt.Sprintf("%d-%x", u.ID, md5.Sum(data)))))
if err = updateUserCols(sess, u, "use_custom_avatar", "avatar"); err != nil { if err = updateUser(sess, u); err != nil {
return fmt.Errorf("updateUser: %v", err) return fmt.Errorf("updateUser: %v", err)
} }


@@ -17,7 +17,7 @@ func TestGetUserHeatmapDataByUser(t *testing.T) {
CountResult int CountResult int
JSONResult string JSONResult string
}{ }{
{2, 1, `[{"timestamp":1603152000,"contributions":1}]`}, {2, 1, `[{"timestamp":1571616000,"contributions":1}]`},
{3, 0, `[]`}, {3, 0, `[]`},
} }
// Prepare // Prepare


@@ -8,7 +8,6 @@ package models
import ( import (
"errors" "errors"
"fmt" "fmt"
"net/mail"
"strings" "strings"
"code.gitea.io/gitea/modules/log" "code.gitea.io/gitea/modules/log"
@@ -33,19 +32,6 @@ type EmailAddress struct {
IsPrimary bool `xorm:"-"` IsPrimary bool `xorm:"-"`
} }
// ValidateEmail check if email is a allowed address
func ValidateEmail(email string) error {
if len(email) == 0 {
return nil
}
if _, err := mail.ParseAddress(email); err != nil {
return ErrEmailInvalid{email}
}
return nil
}
// GetEmailAddresses returns all email addresses belongs to given user. // GetEmailAddresses returns all email addresses belongs to given user.
func GetEmailAddresses(uid int64) ([]*EmailAddress, error) { func GetEmailAddresses(uid int64) ([]*EmailAddress, error) {
emails := make([]*EmailAddress, 0, 5) emails := make([]*EmailAddress, 0, 5)
@@ -157,10 +143,6 @@ func addEmailAddress(e Engine, email *EmailAddress) error {
return ErrEmailAlreadyUsed{email.Email} return ErrEmailAlreadyUsed{email.Email}
} }
if err = ValidateEmail(email.Email); err != nil {
return err
}
_, err = e.Insert(email) _, err = e.Insert(email)
return err return err
} }
@@ -185,9 +167,6 @@ func AddEmailAddresses(emails []*EmailAddress) error {
} else if used { } else if used {
return ErrEmailAlreadyUsed{emails[i].Email} return ErrEmailAlreadyUsed{emails[i].Email}
} }
if err = ValidateEmail(emails[i].Email); err != nil {
return err
}
} }
if _, err := x.Insert(emails); err != nil { if _, err := x.Insert(emails); err != nil {


@@ -346,21 +346,6 @@ func TestCreateUser(t *testing.T) {
assert.NoError(t, DeleteUser(user)) assert.NoError(t, DeleteUser(user))
} }
func TestCreateUserInvalidEmail(t *testing.T) {
user := &User{
Name: "GiteaBot",
Email: "GiteaBot@gitea.io\r\n",
Passwd: ";p['////..-++']",
IsAdmin: false,
Theme: setting.UI.DefaultTheme,
MustChangePassword: false,
}
err := CreateUser(user)
assert.Error(t, err)
assert.True(t, IsErrEmailInvalid(err))
}
func TestCreateUser_Issue5882(t *testing.T) { func TestCreateUser_Issue5882(t *testing.T) {
// Init settings // Init settings


@@ -118,11 +118,6 @@ func RemoveProvider(providerName string) {
delete(goth.GetProviders(), providerName) delete(goth.GetProviders(), providerName)
} }
// ClearProviders clears all OAuth2 providers from the goth lib
func ClearProviders() {
goth.ClearProviders()
}
// used to create different types of goth providers // used to create different types of goth providers
func createProvider(providerName, providerType, clientID, clientSecret, openIDConnectAutoDiscoveryURL string, customURLMapping *CustomURLMapping) (goth.Provider, error) { func createProvider(providerName, providerType, clientID, clientSecret, openIDConnectAutoDiscoveryURL string, customURLMapping *CustomURLMapping) (goth.Provider, error) {
callbackURL := setting.AppURL + "user/oauth2/" + url.PathEscape(providerName) + "/callback" callbackURL := setting.AppURL + "user/oauth2/" + url.PathEscape(providerName) + "/callback"


@@ -12,9 +12,6 @@ import (
"github.com/msteinert/pam" "github.com/msteinert/pam"
) )
// Supported is true when built with PAM
var Supported = true
// Auth pam auth service // Auth pam auth service
func Auth(serviceName, userName, passwd string) (string, error) { func Auth(serviceName, userName, passwd string) (string, error) {
t, err := pam.StartFunc(serviceName, userName, func(s pam.Style, msg string) (string, error) { t, err := pam.StartFunc(serviceName, userName, func(s pam.Style, msg string) (string, error) {


@@ -10,9 +10,6 @@ import (
"errors" "errors"
) )
// Supported is false when built without PAM
var Supported = false
// Auth not supported lack of pam tag // Auth not supported lack of pam tag
func Auth(serviceName, userName, passwd string) (string, error) { func Auth(serviceName, userName, passwd string) (string, error) {
return "", errors.New("PAM not supported") return "", errors.New("PAM not supported")


@@ -102,9 +102,6 @@ func ParseRemoteAddr(remoteAddr, authUsername, authPassword string, user *models
u.User = url.UserPassword(authUsername, authPassword) u.User = url.UserPassword(authUsername, authPassword)
} }
remoteAddr = u.String() remoteAddr = u.String()
if u.Scheme == "git" && u.Port() != "" && (strings.Contains(remoteAddr, "%0d") || strings.Contains(remoteAddr, "%0a")) {
return "", models.ErrInvalidCloneAddr{IsURLError: true}
}
} else if !user.CanImportLocal() { } else if !user.CanImportLocal() {
return "", models.ErrInvalidCloneAddr{IsPermissionDenied: true} return "", models.ErrInvalidCloneAddr{IsPermissionDenied: true}
} else if !com.IsDir(remoteAddr) { } else if !com.IsDir(remoteAddr) {


@@ -199,6 +199,7 @@ func (f *AccessTokenForm) Validate(ctx *macaron.Context, errs binding.Errors) bi
type UpdateProfileForm struct { type UpdateProfileForm struct {
Name string `binding:"AlphaDashDot;MaxSize(40)"` Name string `binding:"AlphaDashDot;MaxSize(40)"`
FullName string `binding:"MaxSize(100)"` FullName string `binding:"MaxSize(100)"`
Email string `binding:"Required;Email;MaxSize(254)"`
KeepEmailPrivate bool KeepEmailPrivate bool
Website string `binding:"ValidUrl;MaxSize(255)"` Website string `binding:"ValidUrl;MaxSize(255)"`
Location string `binding:"MaxSize(50)"` Location string `binding:"MaxSize(50)"`


@@ -10,7 +10,6 @@ import (
"crypto/sha256" "crypto/sha256"
"encoding/base64" "encoding/base64"
"encoding/hex" "encoding/hex"
"errors"
"fmt" "fmt"
"net/http" "net/http"
"net/url" "net/url"
@@ -66,11 +65,6 @@ func BasicAuthDecode(encoded string) (string, string, error) {
} }
auth := strings.SplitN(string(s), ":", 2) auth := strings.SplitN(string(s), ":", 2)
if len(auth) != 2 {
return "", "", errors.New("invalid basic authentication")
}
return auth[0], auth[1], nil return auth[0], auth[1], nil
} }


@@ -46,12 +46,6 @@ func TestBasicAuthDecode(t *testing.T) {
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, "foo", user) assert.Equal(t, "foo", user)
assert.Equal(t, "bar", pass) assert.Equal(t, "bar", pass)
_, _, err = BasicAuthDecode("aW52YWxpZA==")
assert.Error(t, err)
_, _, err = BasicAuthDecode("invalid")
assert.Error(t, err)
} }
func TestBasicAuthEncode(t *testing.T) { func TestBasicAuthEncode(t *testing.T) {


@@ -255,61 +255,3 @@ func (ctx *APIContext) NotFound(objs ...interface{}) {
"errors": errors, "errors": errors,
}) })
} }
// RepoRefForAPI handles repository reference names when the ref name is not explicitly given
func RepoRefForAPI() macaron.Handler {
return func(ctx *APIContext) {
// Empty repository does not have reference information.
if ctx.Repo.Repository.IsEmpty {
return
}
var err error
if ctx.Repo.GitRepo == nil {
repoPath := models.RepoPath(ctx.Repo.Owner.Name, ctx.Repo.Repository.Name)
ctx.Repo.GitRepo, err = git.OpenRepository(repoPath)
if err != nil {
ctx.InternalServerError(err)
return
}
// We opened it, we should close it
defer func() {
// If it's been set to nil then assume someone else has closed it.
if ctx.Repo.GitRepo != nil {
ctx.Repo.GitRepo.Close()
}
}()
}
refName := getRefName(ctx.Context, RepoRefAny)
if ctx.Repo.GitRepo.IsBranchExist(refName) {
ctx.Repo.Commit, err = ctx.Repo.GitRepo.GetBranchCommit(refName)
if err != nil {
ctx.InternalServerError(err)
return
}
ctx.Repo.CommitID = ctx.Repo.Commit.ID.String()
} else if ctx.Repo.GitRepo.IsTagExist(refName) {
ctx.Repo.Commit, err = ctx.Repo.GitRepo.GetTagCommit(refName)
if err != nil {
ctx.InternalServerError(err)
return
}
ctx.Repo.CommitID = ctx.Repo.Commit.ID.String()
} else if len(refName) == 40 {
ctx.Repo.CommitID = refName
ctx.Repo.Commit, err = ctx.Repo.GitRepo.GetCommit(refName)
if err != nil {
ctx.NotFound("GetCommit", err)
return
}
} else {
ctx.NotFound(fmt.Errorf("not exist: '%s'", ctx.Params("*")))
return
}
ctx.Next()
}
}


@@ -704,6 +704,7 @@ func RepoRefByType(refType RepoRefType) macaron.Handler {
err error err error
) )
// For API calls.
if ctx.Repo.GitRepo == nil { if ctx.Repo.GitRepo == nil {
repoPath := models.RepoPath(ctx.Repo.Owner.Name, ctx.Repo.Repository.Name) repoPath := models.RepoPath(ctx.Repo.Owner.Name, ctx.Repo.Repository.Name)
ctx.Repo.GitRepo, err = git.OpenRepository(repoPath) ctx.Repo.GitRepo, err = git.OpenRepository(repoPath)
@@ -772,7 +773,7 @@ func RepoRefByType(refType RepoRefType) macaron.Handler {
ctx.Repo.Commit, err = ctx.Repo.GitRepo.GetCommit(refName) ctx.Repo.Commit, err = ctx.Repo.GitRepo.GetCommit(refName)
if err != nil { if err != nil {
ctx.NotFound("GetCommit", err) ctx.NotFound("GetCommit", nil)
return return
} }
} else { } else {


@@ -27,7 +27,7 @@ type BlameReader struct {
cmd *exec.Cmd cmd *exec.Cmd
pid int64 pid int64
output io.ReadCloser output io.ReadCloser
reader *bufio.Reader scanner *bufio.Scanner
lastSha *string lastSha *string
cancel context.CancelFunc cancel context.CancelFunc
} }
@@ -38,30 +38,23 @@ var shaLineRegex = regexp.MustCompile("^([a-z0-9]{40})")
func (r *BlameReader) NextPart() (*BlamePart, error) { func (r *BlameReader) NextPart() (*BlamePart, error) {
var blamePart *BlamePart var blamePart *BlamePart
reader := r.reader scanner := r.scanner
if r.lastSha != nil { if r.lastSha != nil {
blamePart = &BlamePart{*r.lastSha, make([]string, 0)} blamePart = &BlamePart{*r.lastSha, make([]string, 0)}
} }
var line []byte for scanner.Scan() {
var isPrefix bool line := scanner.Text()
var err error
for err != io.EOF {
line, isPrefix, err = reader.ReadLine()
if err != nil && err != io.EOF {
return blamePart, err
}
// Skip empty lines
if len(line) == 0 { if len(line) == 0 {
// isPrefix will be false
continue continue
} }
lines := shaLineRegex.FindSubmatch(line) lines := shaLineRegex.FindStringSubmatch(line)
if lines != nil { if lines != nil {
sha1 := string(lines[1]) sha1 := lines[1]
if blamePart == nil { if blamePart == nil {
blamePart = &BlamePart{sha1, make([]string, 0)} blamePart = &BlamePart{sha1, make([]string, 0)}
@@ -69,27 +62,12 @@ func (r *BlameReader) NextPart() (*BlamePart, error) {
if blamePart.Sha != sha1 { if blamePart.Sha != sha1 {
r.lastSha = &sha1 r.lastSha = &sha1
// need to munch to end of line...
for isPrefix {
_, isPrefix, err = reader.ReadLine()
if err != nil && err != io.EOF {
return blamePart, err
}
}
return blamePart, nil return blamePart, nil
} }
} else if line[0] == '\t' { } else if line[0] == '\t' {
code := line[1:] code := line[1:]
blamePart.Lines = append(blamePart.Lines, string(code)) blamePart.Lines = append(blamePart.Lines, code)
}
// need to munch to end of line...
for isPrefix {
_, isPrefix, err = reader.ReadLine()
if err != nil && err != io.EOF {
return blamePart, err
}
} }
} }
@@ -143,13 +121,13 @@ func createBlameReader(ctx context.Context, dir string, command ...string) (*Bla
pid := process.GetManager().Add(fmt.Sprintf("GetBlame [repo_path: %s]", dir), cancel) pid := process.GetManager().Add(fmt.Sprintf("GetBlame [repo_path: %s]", dir), cancel)
reader := bufio.NewReader(stdout) scanner := bufio.NewScanner(stdout)
return &BlameReader{ return &BlameReader{
cmd, cmd,
pid, pid,
stdout, stdout,
reader, scanner,
nil, nil,
cancel, cancel,
}, nil }, nil


@@ -153,7 +153,6 @@ func (c *Command) RunInDirTimeoutEnvFullPipelineFunc(env []string, timeout time.
err := fn(ctx, cancel) err := fn(ctx, cancel)
if err != nil { if err != nil {
cancel() cancel()
_ = cmd.Wait()
return err return err
} }
} }


@@ -32,7 +32,6 @@ var (
GitExecutable = "git" GitExecutable = "git"
// DefaultContext is the default context to run git commands in // DefaultContext is the default context to run git commands in
// will be overwritten by Init with HammerContext
DefaultContext = context.Background() DefaultContext = context.Background()
gitVersion *version.Version gitVersion *version.Version


@@ -8,7 +8,6 @@ package git
import ( import (
"bytes" "bytes"
"container/list" "container/list"
"context"
"errors" "errors"
"fmt" "fmt"
"os" "os"
@@ -167,24 +166,19 @@ type CloneRepoOptions struct {
// Clone clones original repository to target path. // Clone clones original repository to target path.
func Clone(from, to string, opts CloneRepoOptions) (err error) { func Clone(from, to string, opts CloneRepoOptions) (err error) {
return CloneWithContext(DefaultContext, from, to, opts)
}
// CloneWithContext clones original repository to target path.
func CloneWithContext(ctx context.Context, from, to string, opts CloneRepoOptions) (err error) {
cargs := make([]string, len(GlobalCommandArgs)) cargs := make([]string, len(GlobalCommandArgs))
copy(cargs, GlobalCommandArgs) copy(cargs, GlobalCommandArgs)
return CloneWithArgs(ctx, from, to, cargs, opts) return CloneWithArgs(from, to, cargs, opts)
} }
// CloneWithArgs original repository to target path. // CloneWithArgs original repository to target path.
func CloneWithArgs(ctx context.Context, from, to string, args []string, opts CloneRepoOptions) (err error) { func CloneWithArgs(from, to string, args []string, opts CloneRepoOptions) (err error) {
toDir := path.Dir(to) toDir := path.Dir(to)
if err = os.MkdirAll(toDir, os.ModePerm); err != nil { if err = os.MkdirAll(toDir, os.ModePerm); err != nil {
return err return err
} }
cmd := NewCommandContextNoGlobals(ctx, args...).AddArguments("clone") cmd := NewCommandNoGlobals(args...).AddArguments("clone")
if opts.Mirror { if opts.Mirror {
cmd.AddArguments("--mirror") cmd.AddArguments("--mirror")
} }


@@ -13,7 +13,6 @@ import (
"strings" "strings"
"sync" "sync"
"code.gitea.io/gitea/modules/analyze"
"code.gitea.io/gitea/modules/log" "code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/setting" "code.gitea.io/gitea/modules/setting"
"github.com/alecthomas/chroma/formatters/html" "github.com/alecthomas/chroma/formatters/html"
@@ -118,11 +117,9 @@ func File(numLines int, fileName string, code []byte) map[int]string {
fileName = "test." + val fileName = "test." + val
} }
language := analyze.GetCodeLanguage(fileName, code) lexer := lexers.Match(fileName)
lexer := lexers.Get(language)
if lexer == nil { if lexer == nil {
lexer = lexers.Match(fileName) lexer = lexers.Analyse(string(code))
if lexer == nil { if lexer == nil {
lexer = lexers.Fallback lexer = lexers.Fallback
} }


@@ -8,7 +8,6 @@ import (
"crypto/sha256" "crypto/sha256"
"encoding/hex" "encoding/hex"
"errors" "errors"
"fmt"
"io" "io"
"os" "os"
@@ -22,21 +21,6 @@ var (
errSizeMismatch = errors.New("Content size does not match") errSizeMismatch = errors.New("Content size does not match")
) )
// ErrRangeNotSatisfiable represents an error which request range is not satisfiable.
type ErrRangeNotSatisfiable struct {
FromByte int64
}
func (err ErrRangeNotSatisfiable) Error() string {
return fmt.Sprintf("Requested range %d is not satisfiable", err.FromByte)
}
// IsErrRangeNotSatisfiable returns true if the error is an ErrRangeNotSatisfiable
func IsErrRangeNotSatisfiable(err error) bool {
_, ok := err.(ErrRangeNotSatisfiable)
return ok
}
// ContentStore provides a simple file system based storage. // ContentStore provides a simple file system based storage.
type ContentStore struct { type ContentStore struct {
storage.ObjectStorage storage.ObjectStorage
@@ -51,12 +35,7 @@ func (s *ContentStore) Get(meta *models.LFSMetaObject, fromByte int64) (io.ReadC
return nil, err return nil, err
} }
if fromByte > 0 { if fromByte > 0 {
if fromByte >= meta.Size { _, err = f.Seek(fromByte, os.SEEK_CUR)
return nil, ErrRangeNotSatisfiable{
FromByte: fromByte,
}
}
_, err = f.Seek(fromByte, io.SeekStart)
if err != nil { if err != nil {
log.Error("Whilst trying to read LFS OID[%s]: Unable to seek to %d Error: %v", meta.Oid, fromByte, err) log.Error("Whilst trying to read LFS OID[%s]: Unable to seek to %d Error: %v", meta.Oid, fromByte, err)
} }


@@ -191,12 +191,8 @@ func getContentHandler(ctx *context.Context) {
contentStore := &ContentStore{ObjectStorage: storage.LFS} contentStore := &ContentStore{ObjectStorage: storage.LFS}
content, err := contentStore.Get(meta, fromByte) content, err := contentStore.Get(meta, fromByte)
if err != nil { if err != nil {
if IsErrRangeNotSatisfiable(err) { // Errors are logged in contentStore.Get
writeStatus(ctx, http.StatusRequestedRangeNotSatisfiable) writeStatus(ctx, 404)
} else {
// Errors are logged in contentStore.Get
writeStatus(ctx, 404)
}
return return
} }
defer content.Close() defer content.Close()


@@ -43,7 +43,7 @@ var (
// sha1CurrentPattern matches string that represents a commit SHA, e.g. d8a994ef243349f321568f9e36d5c3f444b99cae // sha1CurrentPattern matches string that represents a commit SHA, e.g. d8a994ef243349f321568f9e36d5c3f444b99cae
// Although SHA1 hashes are 40 chars long, the regex matches the hash from 7 to 40 chars in length // Although SHA1 hashes are 40 chars long, the regex matches the hash from 7 to 40 chars in length
// so that abbreviated hash links can be used as well. This matches git and github useability. // so that abbreviated hash links can be used as well. This matches git and github useability.
sha1CurrentPattern = regexp.MustCompile(`(?:\s|^|\(|\[)([0-9a-f]{7,40})(?:\s|$|\)|\]|[.,](\s|$))`) sha1CurrentPattern = regexp.MustCompile(`(?:\s|^|\(|\[)([0-9a-f]{7,40})(?:\s|$|\)|\]|\.(\s|$))`)
// shortLinkPattern matches short but difficult to parse [[name|link|arg=test]] syntax // shortLinkPattern matches short but difficult to parse [[name|link|arg=test]] syntax
shortLinkPattern = regexp.MustCompile(`\[\[(.*?)\]\](\w*)`) shortLinkPattern = regexp.MustCompile(`\[\[(.*?)\]\](\w*)`)
@@ -298,6 +298,9 @@ func RenderEmoji(
return ctx.postProcess(rawHTML) return ctx.postProcess(rawHTML)
} }
var byteBodyTag = []byte("<body>")
var byteBodyTagClosing = []byte("</body>")
func (ctx *postProcessCtx) postProcess(rawHTML []byte) ([]byte, error) { func (ctx *postProcessCtx) postProcess(rawHTML []byte) ([]byte, error) {
if ctx.procs == nil { if ctx.procs == nil {
ctx.procs = defaultProcessors ctx.procs = defaultProcessors
@@ -305,9 +308,9 @@ func (ctx *postProcessCtx) postProcess(rawHTML []byte) ([]byte, error) {
// give a generous extra 50 bytes // give a generous extra 50 bytes
res := make([]byte, 0, len(rawHTML)+50) res := make([]byte, 0, len(rawHTML)+50)
res = append(res, "<html><body>"...) res = append(res, byteBodyTag...)
res = append(res, rawHTML...) res = append(res, rawHTML...)
res = append(res, "</body></html>"...) res = append(res, byteBodyTagClosing...)
// parse the HTML // parse the HTML
nodes, err := html.ParseFragment(bytes.NewReader(res), nil) nodes, err := html.ParseFragment(bytes.NewReader(res), nil)
@@ -319,31 +322,6 @@ func (ctx *postProcessCtx) postProcess(rawHTML []byte) ([]byte, error) {
ctx.visitNode(node, true) ctx.visitNode(node, true)
} }
newNodes := make([]*html.Node, 0, len(nodes))
for _, node := range nodes {
if node.Data == "html" {
node = node.FirstChild
for node != nil && node.Data != "body" {
node = node.NextSibling
}
}
if node == nil {
continue
}
if node.Data == "body" {
child := node.FirstChild
for child != nil {
newNodes = append(newNodes, child)
child = child.NextSibling
}
} else {
newNodes = append(newNodes, node)
}
}
nodes = newNodes
// Create buffer in which the data will be placed again. We know that the // Create buffer in which the data will be placed again. We know that the
// length will be at least that of res; to spare a few alloc+copy, we // length will be at least that of res; to spare a few alloc+copy, we
// reuse res, resetting its length to 0. // reuse res, resetting its length to 0.
@@ -356,8 +334,12 @@ func (ctx *postProcessCtx) postProcess(rawHTML []byte) ([]byte, error) {
} }
} }
// remove initial parts - because Render creates a whole HTML page.
res = buf.Bytes()
res = res[bytes.Index(res, byteBodyTag)+len(byteBodyTag) : bytes.LastIndex(res, byteBodyTagClosing)]
// Everything done successfully, return parsed data. // Everything done successfully, return parsed data.
return buf.Bytes(), nil return res, nil
} }
func (ctx *postProcessCtx) visitNode(node *html.Node, visitText bool) { func (ctx *postProcessCtx) visitNode(node *html.Node, visitText bool) {
@@ -650,18 +632,16 @@ func shortLinkProcessorFull(ctx *postProcessCtx, node *html.Node, noLink bool) {
// When parsing HTML, x/net/html will change all quotes which are // When parsing HTML, x/net/html will change all quotes which are
// not used for syntax into UTF-8 quotes. So checking val[0] won't // not used for syntax into UTF-8 quotes. So checking val[0] won't
// be enough, since that only checks a single byte. // be enough, since that only checks a single byte.
if len(val) > 1 { if (strings.HasPrefix(val, "“") && strings.HasSuffix(val, "”")) ||
if (strings.HasPrefix(val, "“") && strings.HasSuffix(val, "”")) || (strings.HasPrefix(val, "‘") && strings.HasSuffix(val, "’")) {
(strings.HasPrefix(val, "‘") && strings.HasSuffix(val, "’")) { const lenQuote = len("‘")
const lenQuote = len("‘") val = val[lenQuote : len(val)-lenQuote]
val = val[lenQuote : len(val)-lenQuote] } else if (strings.HasPrefix(val, "\"") && strings.HasSuffix(val, "\"")) ||
} else if (strings.HasPrefix(val, "\"") && strings.HasSuffix(val, "\"")) || (strings.HasPrefix(val, "'") && strings.HasSuffix(val, "'")) {
(strings.HasPrefix(val, "'") && strings.HasSuffix(val, "'")) { val = val[1 : len(val)-1]
val = val[1 : len(val)-1] } else if strings.HasPrefix(val, "'") && strings.HasSuffix(val, "’") {
} else if strings.HasPrefix(val, "'") && strings.HasSuffix(val, "’") { const lenQuote = len("‘")
const lenQuote = len("‘") val = val[1 : len(val)-lenQuote]
val = val[1 : len(val)-lenQuote]
}
} }
props[key] = val props[key] = val
} }


@@ -46,12 +46,6 @@ func TestRender_Commits(t *testing.T) {
test("/home/gitea/"+sha, "<p>/home/gitea/"+sha+"</p>") test("/home/gitea/"+sha, "<p>/home/gitea/"+sha+"</p>")
test("deadbeef", `<p>deadbeef</p>`) test("deadbeef", `<p>deadbeef</p>`)
test("d27ace93", `<p>d27ace93</p>`) test("d27ace93", `<p>d27ace93</p>`)
test(sha[:14]+".x", `<p>`+sha[:14]+`.x</p>`)
expected14 := `<a href="` + commit[:len(commit)-(40-14)] + `" rel="nofollow"><code>` + sha[:10] + `</code></a>`
test(sha[:14]+".", `<p>`+expected14+`.</p>`)
test(sha[:14]+",", `<p>`+expected14+`,</p>`)
test("["+sha[:14]+"]", `<p>[`+expected14+`]</p>`)
} }
func TestRender_CrossReferences(t *testing.T) { func TestRender_CrossReferences(t *testing.T) {
@@ -148,7 +142,7 @@ func TestRender_links(t *testing.T) {
`<p><a href="ftp://gitea.com/file.txt" rel="nofollow">ftp://gitea.com/file.txt</a></p>`) `<p><a href="ftp://gitea.com/file.txt" rel="nofollow">ftp://gitea.com/file.txt</a></p>`)
test( test(
"magnet:?xt=urn:btih:5dee65101db281ac9c46344cd6b175cdcadabcde&dn=download", "magnet:?xt=urn:btih:5dee65101db281ac9c46344cd6b175cdcadabcde&dn=download",
`<p><a href="magnet:?xt=urn%3Abtih%3A5dee65101db281ac9c46344cd6b175cdcadabcde&dn=download" rel="nofollow">magnet:?xt=urn:btih:5dee65101db281ac9c46344cd6b175cdcadabcde&amp;dn=download</a></p>`) `<p><a href="magnet:?dn=download&xt=urn%3Abtih%3A5dee65101db281ac9c46344cd6b175cdcadabcde" rel="nofollow">magnet:?xt=urn:btih:5dee65101db281ac9c46344cd6b175cdcadabcde&amp;dn=download</a></p>`)
// Test that should *not* be turned into URL // Test that should *not* be turned into URL
test( test(
@@ -383,28 +377,3 @@ func TestRender_ShortLinks(t *testing.T) {
`<p><a href="https://example.org" rel="nofollow">[[foobar]]</a></p>`, `<p><a href="https://example.org" rel="nofollow">[[foobar]]</a></p>`,
`<p><a href="https://example.org" rel="nofollow">[[foobar]]</a></p>`) `<p><a href="https://example.org" rel="nofollow">[[foobar]]</a></p>`)
} }
func Test_ParseClusterFuzz(t *testing.T) {
setting.AppURL = AppURL
setting.AppSubURL = AppSubURL
var localMetas = map[string]string{
"user": "go-gitea",
"repo": "gitea",
}
data := "<A><maTH><tr><MN><bodY ÿ><temPlate></template><tH><tr></A><tH><d<bodY "
val, err := PostProcess([]byte(data), "https://example.com", localMetas, false)
assert.NoError(t, err)
assert.NotContains(t, string(val), "<html")
data = "<!DOCTYPE html>\n<A><maTH><tr><MN><bodY ÿ><temPlate></template><tH><tr></A><tH><d<bodY "
val, err = PostProcess([]byte(data), "https://example.com", localMetas, false)
assert.NoError(t, err)
assert.NotContains(t, string(val), "<html")
}

View File

@@ -1,46 +0,0 @@
// Copyright 2019 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package matchlist
import (
"strings"
"github.com/gobwas/glob"
)
// Matchlist represents a block or allow list
type Matchlist struct {
ruleGlobs []glob.Glob
}
// NewMatchlist creates a new block or allow list
func NewMatchlist(rules ...string) (*Matchlist, error) {
for i := range rules {
rules[i] = strings.ToLower(rules[i])
}
list := Matchlist{
ruleGlobs: make([]glob.Glob, 0, len(rules)),
}
for _, rule := range rules {
rg, err := glob.Compile(rule)
if err != nil {
return nil, err
}
list.ruleGlobs = append(list.ruleGlobs, rg)
}
return &list, nil
}
// Match will matches
func (b *Matchlist) Match(u string) bool {
for _, r := range b.ruleGlobs {
if r.Match(u) {
return true
}
}
return false
}
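The matchlist module above is a thin wrapper around github.com/gobwas/glob. A rough standalone sketch of the same idea, with the rule lower-casing and the first-match semantics inlined (the local names are illustrative, not part of the module):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/gobwas/glob"
)

func main() {
	// Roughly what NewMatchlist does for the migration allow/block lists:
	// lower-case each rule and compile it to a glob.
	rules := []string{"github.com", "*.example.com"}
	globs := make([]glob.Glob, 0, len(rules))
	for _, rule := range rules {
		g, err := glob.Compile(strings.ToLower(rule))
		if err != nil {
			panic(err)
		}
		globs = append(globs, g)
	}

	// Match is a first-hit check over the compiled globs.
	match := func(host string) bool {
		for _, g := range globs {
			if g.Match(host) {
				return true
			}
		}
		return false
	}

	fmt.Println(match("github.com"))      // true
	fmt.Println(match("git.example.com")) // true
	fmt.Println(match("gitlab.com"))      // false
}
```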

View File

@@ -14,7 +14,6 @@ import (
"strings" "strings"
"time" "time"
"code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/log" "code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/migrations/base" "code.gitea.io/gitea/modules/migrations/base"
"code.gitea.io/gitea/modules/structs" "code.gitea.io/gitea/modules/structs"
@@ -48,7 +47,7 @@ func (f *GiteaDownloaderFactory) New(ctx context.Context, opts base.MigrateOptio
path := strings.Split(repoNameSpace, "/") path := strings.Split(repoNameSpace, "/")
if len(path) < 2 { if len(path) < 2 {
return nil, fmt.Errorf("invalid path: %s", repoNameSpace) return nil, fmt.Errorf("invalid path")
} }
repoPath := strings.Join(path[len(path)-2:], "/") repoPath := strings.Join(path[len(path)-2:], "/")
@@ -88,7 +87,7 @@ func NewGiteaDownloader(ctx context.Context, baseURL, repoPath, username, passwo
gitea_sdk.SetContext(ctx), gitea_sdk.SetContext(ctx),
) )
if err != nil { if err != nil {
log.Error(fmt.Sprintf("Failed to create NewGiteaDownloader for: %s. Error: %v", baseURL, err)) log.Error(fmt.Sprintf("NewGiteaDownloader: %s", err.Error()))
return nil, err return nil, err
} }
@@ -102,13 +101,12 @@ func NewGiteaDownloader(ctx context.Context, baseURL, repoPath, username, passwo
// set small maxPerPage since we can only guess // set small maxPerPage since we can only guess
// (default would be 50 but this can differ) // (default would be 50 but this can differ)
maxPerPage := 10 maxPerPage := 10
// gitea instances >=1.13 can tell us what maximum they have // new gitea instances can tell us what maximum they have
apiConf, _, err := giteaClient.GetGlobalAPISettings() if giteaClient.CheckServerVersionConstraint(">=1.13.0") == nil {
if err != nil { apiConf, _, err := giteaClient.GetGlobalAPISettings()
log.Info("Unable to get global API settings. Ignoring these.") if err != nil {
log.Debug("giteaClient.GetGlobalAPISettings. Error: %v", err) return nil, err
} }
if apiConf != nil {
maxPerPage = apiConf.MaxResponseItems maxPerPage = apiConf.MaxResponseItems
} }
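The newer column drops the version-constraint check and probes the global API settings unconditionally, keeping the guessed page size of 10 when the probe fails instead of aborting the migration. A hedged sketch of that fallback pattern; probeMaxResponseItems is a stand-in for the SDK call, not a real API:

```go
package main

import (
	"errors"
	"fmt"
)

// probeMaxResponseItems stands in for giteaClient.GetGlobalAPISettings in
// this sketch; an old remote may not expose the endpoint at all.
func probeMaxResponseItems() (int, error) {
	return 0, errors.New("404 Not Found")
}

func main() {
	// Start from a conservative guess and only raise it when the remote
	// actually reports its limit; a failed probe is logged, not fatal.
	maxPerPage := 10
	if n, err := probeMaxResponseItems(); err != nil {
		fmt.Println("unable to get global API settings, keeping default:", err)
	} else if n > 0 {
		maxPerPage = n
	}
	fmt.Println("using page size:", maxPerPage)
}
```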
@@ -326,44 +324,45 @@ func (g *GiteaDownloader) GetAsset(_ string, relID, id int64) (io.ReadCloser, er
} }
func (g *GiteaDownloader) getIssueReactions(index int64) ([]*base.Reaction, error) { func (g *GiteaDownloader) getIssueReactions(index int64) ([]*base.Reaction, error) {
var reactions []*base.Reaction
if err := g.client.CheckServerVersionConstraint(">=1.11"); err != nil { if err := g.client.CheckServerVersionConstraint(">=1.11"); err != nil {
log.Info("GiteaDownloader: instance to old, skip getIssueReactions") log.Info("GiteaDownloader: instance to old, skip getIssueReactions")
return []*base.Reaction{}, nil return reactions, nil
} }
rl, _, err := g.client.GetIssueReactions(g.repoOwner, g.repoName, index) rl, _, err := g.client.GetIssueReactions(g.repoOwner, g.repoName, index)
if err != nil { if err != nil {
return nil, err return nil, err
} }
return g.convertReactions(rl), nil for _, reaction := range rl {
reactions = append(reactions, &base.Reaction{
UserID: reaction.User.ID,
UserName: reaction.User.UserName,
Content: reaction.Reaction,
})
}
return reactions, nil
} }
func (g *GiteaDownloader) getCommentReactions(commentID int64) ([]*base.Reaction, error) { func (g *GiteaDownloader) getCommentReactions(commentID int64) ([]*base.Reaction, error) {
var reactions []*base.Reaction
if err := g.client.CheckServerVersionConstraint(">=1.11"); err != nil { if err := g.client.CheckServerVersionConstraint(">=1.11"); err != nil {
log.Info("GiteaDownloader: instance to old, skip getCommentReactions") log.Info("GiteaDownloader: instance to old, skip getCommentReactions")
return []*base.Reaction{}, nil return reactions, nil
} }
rl, _, err := g.client.GetIssueCommentReactions(g.repoOwner, g.repoName, commentID) rl, _, err := g.client.GetIssueCommentReactions(g.repoOwner, g.repoName, commentID)
if err != nil { if err != nil {
return nil, err return nil, err
} }
return g.convertReactions(rl), nil
}
func (g *GiteaDownloader) convertReactions(rl []*gitea_sdk.Reaction) []*base.Reaction {
var reactions []*base.Reaction
for i := range rl { for i := range rl {
if rl[i].User.ID <= 0 {
continue
}
reactions = append(reactions, &base.Reaction{ reactions = append(reactions, &base.Reaction{
UserID: rl[i].User.ID, UserID: rl[i].User.ID,
UserName: rl[i].User.UserName, UserName: rl[i].User.UserName,
Content: rl[i].Reaction, Content: rl[i].Reaction,
}) })
} }
return reactions return reactions, nil
} }
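convertReactions in the newer column also skips reactions whose user ID is not positive, so reactions left by deleted ("ghost") users are not migrated as dangling references. A standalone sketch with local stand-in types, since the SDK and migration types are not defined in this hunk:

```go
package main

import "fmt"

// Local stand-ins for the SDK reaction and the migration base.Reaction types.
type sdkUser struct {
	ID       int64
	UserName string
}

type sdkReaction struct {
	User     *sdkUser
	Reaction string
}

type baseReaction struct {
	UserID   int64
	UserName string
	Content  string
}

// convertReactions mirrors the helper above: non-positive user IDs are
// skipped rather than copied over.
func convertReactions(rl []*sdkReaction) []*baseReaction {
	var reactions []*baseReaction
	for i := range rl {
		if rl[i].User.ID <= 0 {
			continue
		}
		reactions = append(reactions, &baseReaction{
			UserID:   rl[i].User.ID,
			UserName: rl[i].User.UserName,
			Content:  rl[i].Reaction,
		})
	}
	return reactions
}

func main() {
	rl := []*sdkReaction{
		{User: &sdkUser{ID: 1, UserName: "alice"}, Reaction: "+1"},
		{User: &sdkUser{ID: -1, UserName: "Ghost"}, Reaction: "laugh"},
	}
	fmt.Println(len(convertReactions(rl))) // 1
}
```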
// GetIssues returns issues according start and limit // GetIssues returns issues according start and limit
@@ -395,11 +394,7 @@ func (g *GiteaDownloader) GetIssues(page, perPage int) ([]*base.Issue, bool, err
reactions, err := g.getIssueReactions(issue.Index) reactions, err := g.getIssueReactions(issue.Index)
if err != nil { if err != nil {
log.Warn("Unable to load reactions during migrating issue #%d to %s/%s. Error: %v", issue.Index, g.repoOwner, g.repoName, err) return nil, false, fmt.Errorf("error while loading reactions: %v", err)
if err2 := models.CreateRepositoryNotice(
fmt.Sprintf("Unable to load reactions during migrating issue #%d to %s/%s. Error: %v", issue.Index, g.repoOwner, g.repoName, err)); err2 != nil {
log.Error("create repository notice failed: ", err2)
}
} }
var assignees []string var assignees []string
@@ -450,17 +445,13 @@ func (g *GiteaDownloader) GetComments(index int64) ([]*base.Comment, error) {
// Page: i, // Page: i,
}}) }})
if err != nil { if err != nil {
return nil, fmt.Errorf("error while listing comments for issue #%d. Error: %v", index, err) return nil, fmt.Errorf("error while listing comments: %v", err)
} }
for _, comment := range comments { for _, comment := range comments {
reactions, err := g.getCommentReactions(comment.ID) reactions, err := g.getCommentReactions(comment.ID)
if err != nil { if err != nil {
log.Warn("Unable to load comment reactions during migrating issue #%d for comment %d to %s/%s. Error: %v", index, comment.ID, g.repoOwner, g.repoName, err) return nil, fmt.Errorf("error while listing comment creactions: %v", err)
if err2 := models.CreateRepositoryNotice(
fmt.Sprintf("Unable to load reactions during migrating issue #%d for comment %d to %s/%s. Error: %v", index, comment.ID, g.repoOwner, g.repoName, err)); err2 != nil {
log.Error("create repository notice failed: ", err2)
}
} }
allComments = append(allComments, &base.Comment{ allComments = append(allComments, &base.Comment{
@@ -498,7 +489,7 @@ func (g *GiteaDownloader) GetPullRequests(page, perPage int) ([]*base.PullReques
State: gitea_sdk.StateAll, State: gitea_sdk.StateAll,
}) })
if err != nil { if err != nil {
return nil, false, fmt.Errorf("error while listing pull requests (page: %d, pagesize: %d). Error: %v", page, perPage, err) return nil, false, fmt.Errorf("error while listing repos: %v", err)
} }
for _, pr := range prs { for _, pr := range prs {
var milestone string var milestone string
@@ -529,7 +520,7 @@ func (g *GiteaDownloader) GetPullRequests(page, perPage int) ([]*base.PullReques
if headSHA == "" { if headSHA == "" {
headCommit, _, err := g.client.GetSingleCommit(g.repoOwner, g.repoName, url.PathEscape(pr.Head.Ref)) headCommit, _, err := g.client.GetSingleCommit(g.repoOwner, g.repoName, url.PathEscape(pr.Head.Ref))
if err != nil { if err != nil {
return nil, false, fmt.Errorf("error while resolving head git ref: %s for pull #%d. Error: %v", pr.Head.Ref, pr.Index, err) return nil, false, fmt.Errorf("error while resolving git ref: %v", err)
} }
headSHA = headCommit.SHA headSHA = headCommit.SHA
} }
@@ -542,11 +533,7 @@ func (g *GiteaDownloader) GetPullRequests(page, perPage int) ([]*base.PullReques
reactions, err := g.getIssueReactions(pr.Index) reactions, err := g.getIssueReactions(pr.Index)
if err != nil { if err != nil {
log.Warn("Unable to load reactions during migrating pull #%d to %s/%s. Error: %v", pr.Index, g.repoOwner, g.repoName, err) return nil, false, fmt.Errorf("error while loading reactions: %v", err)
if err2 := models.CreateRepositoryNotice(
fmt.Sprintf("Unable to load reactions during migrating pull #%d to %s/%s. Error: %v", pr.Index, g.repoOwner, g.repoName, err)); err2 != nil {
log.Error("create repository notice failed: ", err2)
}
} }
var assignees []string var assignees []string

View File

@@ -28,7 +28,6 @@ import (
"code.gitea.io/gitea/modules/storage" "code.gitea.io/gitea/modules/storage"
"code.gitea.io/gitea/modules/structs" "code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/timeutil" "code.gitea.io/gitea/modules/timeutil"
"code.gitea.io/gitea/services/pull"
gouuid "github.com/google/uuid" gouuid "github.com/google/uuid"
) )
@@ -125,7 +124,7 @@ func (g *GiteaLocalUploader) CreateRepo(repo *base.Repository, opts base.Migrate
} }
r.DefaultBranch = repo.DefaultBranch r.DefaultBranch = repo.DefaultBranch
r, err = repository.MigrateRepositoryGitData(g.ctx, owner, r, base.MigrateOptions{ r, err = repository.MigrateRepositoryGitData(g.doer, owner, r, base.MigrateOptions{
RepoName: g.repoName, RepoName: g.repoName,
Description: repo.Description, Description: repo.Description,
OriginalURL: repo.OriginalURL, OriginalURL: repo.OriginalURL,
@@ -154,15 +153,6 @@ func (g *GiteaLocalUploader) Close() {
// CreateTopics creates topics // CreateTopics creates topics
func (g *GiteaLocalUploader) CreateTopics(topics ...string) error { func (g *GiteaLocalUploader) CreateTopics(topics ...string) error {
// ignore topics to long for the db
c := 0
for i := range topics {
if len(topics[i]) <= 25 {
topics[c] = topics[i]
c++
}
}
topics = topics[:c]
return models.SaveTopics(g.repo.ID, topics...) return models.SaveTopics(g.repo.ID, topics...)
} }
@@ -534,7 +524,6 @@ func (g *GiteaLocalUploader) CreatePullRequests(prs ...*base.PullRequest) error
} }
for _, pr := range gprs { for _, pr := range gprs {
g.issues.Store(pr.Issue.Index, pr.Issue.ID) g.issues.Store(pr.Issue.Index, pr.Issue.ID)
pull.AddToTaskQueue(pr)
} }
return nil return nil
} }

View File

@@ -65,25 +65,23 @@ func (f *GithubDownloaderV3Factory) GitServiceType() structs.GitServiceType {
// GithubDownloaderV3 implements a Downloader interface to get repository informations // GithubDownloaderV3 implements a Downloader interface to get repository informations
// from github via APIv3 // from github via APIv3
type GithubDownloaderV3 struct { type GithubDownloaderV3 struct {
ctx context.Context ctx context.Context
client *github.Client client *github.Client
repoOwner string repoOwner string
repoName string repoName string
userName string userName string
password string password string
rate *github.Rate rate *github.Rate
maxPerPage int
} }
// NewGithubDownloaderV3 creates a github Downloader via github v3 API // NewGithubDownloaderV3 creates a github Downloader via github v3 API
func NewGithubDownloaderV3(ctx context.Context, baseURL, userName, password, token, repoOwner, repoName string) *GithubDownloaderV3 { func NewGithubDownloaderV3(ctx context.Context, baseURL, userName, password, token, repoOwner, repoName string) *GithubDownloaderV3 {
var downloader = GithubDownloaderV3{ var downloader = GithubDownloaderV3{
userName: userName, userName: userName,
password: password, password: password,
ctx: ctx, ctx: ctx,
repoOwner: repoOwner, repoOwner: repoOwner,
repoName: repoName, repoName: repoName,
maxPerPage: 100,
} }
client := &http.Client{ client := &http.Client{
@@ -179,7 +177,7 @@ func (g *GithubDownloaderV3) GetTopics() ([]string, error) {
// GetMilestones returns milestones // GetMilestones returns milestones
func (g *GithubDownloaderV3) GetMilestones() ([]*base.Milestone, error) { func (g *GithubDownloaderV3) GetMilestones() ([]*base.Milestone, error) {
var perPage = g.maxPerPage var perPage = 100
var milestones = make([]*base.Milestone, 0, perPage) var milestones = make([]*base.Milestone, 0, perPage)
for i := 1; ; i++ { for i := 1; ; i++ {
g.sleep() g.sleep()
@@ -235,7 +233,7 @@ func convertGithubLabel(label *github.Label) *base.Label {
// GetLabels returns labels // GetLabels returns labels
func (g *GithubDownloaderV3) GetLabels() ([]*base.Label, error) { func (g *GithubDownloaderV3) GetLabels() ([]*base.Label, error) {
var perPage = g.maxPerPage var perPage = 100
var labels = make([]*base.Label, 0, perPage) var labels = make([]*base.Label, 0, perPage)
for i := 1; ; i++ { for i := 1; ; i++ {
g.sleep() g.sleep()
@@ -306,7 +304,7 @@ func (g *GithubDownloaderV3) convertGithubRelease(rel *github.RepositoryRelease)
// GetReleases returns releases // GetReleases returns releases
func (g *GithubDownloaderV3) GetReleases() ([]*base.Release, error) { func (g *GithubDownloaderV3) GetReleases() ([]*base.Release, error) {
var perPage = g.maxPerPage var perPage = 100
var releases = make([]*base.Release, 0, perPage) var releases = make([]*base.Release, 0, perPage)
for i := 1; ; i++ { for i := 1; ; i++ {
g.sleep() g.sleep()
@@ -344,9 +342,6 @@ func (g *GithubDownloaderV3) GetAsset(_ string, _, id int64) (io.ReadCloser, err
// GetIssues returns issues according start and limit // GetIssues returns issues according start and limit
func (g *GithubDownloaderV3) GetIssues(page, perPage int) ([]*base.Issue, bool, error) { func (g *GithubDownloaderV3) GetIssues(page, perPage int) ([]*base.Issue, bool, error) {
if perPage > g.maxPerPage {
perPage = g.maxPerPage
}
opt := &github.IssueListByRepoOptions{ opt := &github.IssueListByRepoOptions{
Sort: "created", Sort: "created",
Direction: "asc", Direction: "asc",
@@ -434,7 +429,7 @@ func (g *GithubDownloaderV3) GetIssues(page, perPage int) ([]*base.Issue, bool,
// GetComments returns comments according issueNumber // GetComments returns comments according issueNumber
func (g *GithubDownloaderV3) GetComments(issueNumber int64) ([]*base.Comment, error) { func (g *GithubDownloaderV3) GetComments(issueNumber int64) ([]*base.Comment, error) {
var ( var (
allComments = make([]*base.Comment, 0, g.maxPerPage) allComments = make([]*base.Comment, 0, 100)
created = "created" created = "created"
asc = "asc" asc = "asc"
) )
@@ -442,7 +437,7 @@ func (g *GithubDownloaderV3) GetComments(issueNumber int64) ([]*base.Comment, er
Sort: &created, Sort: &created,
Direction: &asc, Direction: &asc,
ListOptions: github.ListOptions{ ListOptions: github.ListOptions{
PerPage: g.maxPerPage, PerPage: 100,
}, },
} }
for { for {
@@ -464,7 +459,7 @@ func (g *GithubDownloaderV3) GetComments(issueNumber int64) ([]*base.Comment, er
g.sleep() g.sleep()
res, resp, err := g.client.Reactions.ListIssueCommentReactions(g.ctx, g.repoOwner, g.repoName, comment.GetID(), &github.ListOptions{ res, resp, err := g.client.Reactions.ListIssueCommentReactions(g.ctx, g.repoOwner, g.repoName, comment.GetID(), &github.ListOptions{
Page: i, Page: i,
PerPage: g.maxPerPage, PerPage: 100,
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -502,9 +497,6 @@ func (g *GithubDownloaderV3) GetComments(issueNumber int64) ([]*base.Comment, er
// GetPullRequests returns pull requests according page and perPage // GetPullRequests returns pull requests according page and perPage
func (g *GithubDownloaderV3) GetPullRequests(page, perPage int) ([]*base.PullRequest, bool, error) { func (g *GithubDownloaderV3) GetPullRequests(page, perPage int) ([]*base.PullRequest, bool, error) {
if perPage > g.maxPerPage {
perPage = g.maxPerPage
}
opt := &github.PullRequestListOptions{ opt := &github.PullRequestListOptions{
Sort: "created", Sort: "created",
Direction: "asc", Direction: "asc",
@@ -658,7 +650,7 @@ func (g *GithubDownloaderV3) convertGithubReviewComments(cs []*github.PullReques
g.sleep() g.sleep()
res, resp, err := g.client.Reactions.ListPullRequestCommentReactions(g.ctx, g.repoOwner, g.repoName, c.GetID(), &github.ListOptions{ res, resp, err := g.client.Reactions.ListPullRequestCommentReactions(g.ctx, g.repoOwner, g.repoName, c.GetID(), &github.ListOptions{
Page: i, Page: i,
PerPage: g.maxPerPage, PerPage: 100,
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -695,9 +687,9 @@ func (g *GithubDownloaderV3) convertGithubReviewComments(cs []*github.PullReques
// GetReviews returns pull requests review // GetReviews returns pull requests review
func (g *GithubDownloaderV3) GetReviews(pullRequestNumber int64) ([]*base.Review, error) { func (g *GithubDownloaderV3) GetReviews(pullRequestNumber int64) ([]*base.Review, error) {
var allReviews = make([]*base.Review, 0, g.maxPerPage) var allReviews = make([]*base.Review, 0, 100)
opt := &github.ListOptions{ opt := &github.ListOptions{
PerPage: g.maxPerPage, PerPage: 100,
} }
for { for {
g.sleep() g.sleep()
@@ -711,7 +703,7 @@ func (g *GithubDownloaderV3) GetReviews(pullRequestNumber int64) ([]*base.Review
r.IssueIndex = pullRequestNumber r.IssueIndex = pullRequestNumber
// retrieve all review comments // retrieve all review comments
opt2 := &github.ListOptions{ opt2 := &github.ListOptions{
PerPage: g.maxPerPage, PerPage: 100,
} }
for { for {
g.sleep() g.sleep()
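The maxPerPage field added on the newer side of this file caps the caller-supplied page size before every paged GitHub API call (100 for github.com). The clamp itself is tiny; a sketch for reference:

```go
package main

import "fmt"

// clampPerPage never asks the API for more items per page than the
// downloader's configured maximum.
func clampPerPage(perPage, maxPerPage int) int {
	if perPage > maxPerPage {
		return maxPerPage
	}
	return perPage
}

func main() {
	fmt.Println(clampPerPage(250, 100)) // 100
	fmt.Println(clampPerPage(30, 100))  // 30
}
```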

View File

@@ -11,7 +11,6 @@ import (
"io" "io"
"net/http" "net/http"
"net/url" "net/url"
"path"
"strings" "strings"
"time" "time"
@@ -69,7 +68,6 @@ type GitlabDownloader struct {
repoName string repoName string
issueCount int64 issueCount int64
fetchPRcomments bool fetchPRcomments bool
maxPerPage int
} }
// NewGitlabDownloader creates a gitlab Downloader via gitlab API // NewGitlabDownloader creates a gitlab Downloader via gitlab API
@@ -88,30 +86,6 @@ func NewGitlabDownloader(ctx context.Context, baseURL, repoPath, username, passw
return nil, err return nil, err
} }
// split namespace and subdirectory
pathParts := strings.Split(strings.Trim(repoPath, "/"), "/")
var resp *gitlab.Response
u, _ := url.Parse(baseURL)
for len(pathParts) >= 2 {
_, resp, err = gitlabClient.Version.GetVersion()
if err == nil || resp != nil && resp.StatusCode == 401 {
err = nil // if no authentication given, this still should work
break
}
u.Path = path.Join(u.Path, pathParts[0])
baseURL = u.String()
pathParts = pathParts[1:]
_ = gitlab.WithBaseURL(baseURL)(gitlabClient)
repoPath = strings.Join(pathParts, "/")
}
if err != nil {
log.Trace("Error could not get gitlab version: %v", err)
return nil, err
}
log.Trace("gitlab downloader: use BaseURL: '%s' and RepoPath: '%s'", baseURL, repoPath)
// Grab and store project/repo ID here, due to issues using the URL escaped path // Grab and store project/repo ID here, due to issues using the URL escaped path
gr, _, err := gitlabClient.Projects.GetProject(repoPath, nil, nil, gitlab.WithContext(ctx)) gr, _, err := gitlabClient.Projects.GetProject(repoPath, nil, nil, gitlab.WithContext(ctx))
if err != nil { if err != nil {
@@ -125,11 +99,10 @@ func NewGitlabDownloader(ctx context.Context, baseURL, repoPath, username, passw
} }
return &GitlabDownloader{ return &GitlabDownloader{
ctx: ctx, ctx: ctx,
client: gitlabClient, client: gitlabClient,
repoID: gr.ID, repoID: gr.ID,
repoName: gr.Name, repoName: gr.Name,
maxPerPage: 100,
}, nil }, nil
} }
@@ -186,7 +159,7 @@ func (g *GitlabDownloader) GetTopics() ([]string, error) {
// GetMilestones returns milestones // GetMilestones returns milestones
func (g *GitlabDownloader) GetMilestones() ([]*base.Milestone, error) { func (g *GitlabDownloader) GetMilestones() ([]*base.Milestone, error) {
var perPage = g.maxPerPage var perPage = 100
var state = "all" var state = "all"
var milestones = make([]*base.Milestone, 0, perPage) var milestones = make([]*base.Milestone, 0, perPage)
for i := 1; ; i++ { for i := 1; ; i++ {
@@ -257,7 +230,7 @@ func (g *GitlabDownloader) normalizeColor(val string) string {
// GetLabels returns labels // GetLabels returns labels
func (g *GitlabDownloader) GetLabels() ([]*base.Label, error) { func (g *GitlabDownloader) GetLabels() ([]*base.Label, error) {
var perPage = g.maxPerPage var perPage = 100
var labels = make([]*base.Label, 0, perPage) var labels = make([]*base.Label, 0, perPage)
for i := 1; ; i++ { for i := 1; ; i++ {
ls, _, err := g.client.Labels.ListLabels(g.repoID, &gitlab.ListLabelsOptions{ListOptions: gitlab.ListOptions{ ls, _, err := g.client.Labels.ListLabels(g.repoID, &gitlab.ListLabelsOptions{ListOptions: gitlab.ListOptions{
@@ -308,7 +281,7 @@ func (g *GitlabDownloader) convertGitlabRelease(rel *gitlab.Release) *base.Relea
// GetReleases returns releases // GetReleases returns releases
func (g *GitlabDownloader) GetReleases() ([]*base.Release, error) { func (g *GitlabDownloader) GetReleases() ([]*base.Release, error) {
var perPage = g.maxPerPage var perPage = 100
var releases = make([]*base.Release, 0, perPage) var releases = make([]*base.Release, 0, perPage)
for i := 1; ; i++ { for i := 1; ; i++ {
ls, _, err := g.client.Releases.ListReleases(g.repoID, &gitlab.ListReleasesOptions{ ls, _, err := g.client.Releases.ListReleases(g.repoID, &gitlab.ListReleasesOptions{
@@ -357,10 +330,6 @@ func (g *GitlabDownloader) GetIssues(page, perPage int) ([]*base.Issue, bool, er
state := "all" state := "all"
sort := "asc" sort := "asc"
if perPage > g.maxPerPage {
perPage = g.maxPerPage
}
opt := &gitlab.ListProjectIssuesOptions{ opt := &gitlab.ListProjectIssuesOptions{
State: &state, State: &state,
Sort: &sort, Sort: &sort,
@@ -432,7 +401,7 @@ func (g *GitlabDownloader) GetIssues(page, perPage int) ([]*base.Issue, bool, er
// GetComments returns comments according issueNumber // GetComments returns comments according issueNumber
// TODO: figure out how to transfer comment reactions // TODO: figure out how to transfer comment reactions
func (g *GitlabDownloader) GetComments(issueNumber int64) ([]*base.Comment, error) { func (g *GitlabDownloader) GetComments(issueNumber int64) ([]*base.Comment, error) {
var allComments = make([]*base.Comment, 0, g.maxPerPage) var allComments = make([]*base.Comment, 0, 100)
var page = 1 var page = 1
var realIssueNumber int64 var realIssueNumber int64
@@ -446,14 +415,14 @@ func (g *GitlabDownloader) GetComments(issueNumber int64) ([]*base.Comment, erro
realIssueNumber = issueNumber realIssueNumber = issueNumber
comments, resp, err = g.client.Discussions.ListIssueDiscussions(g.repoID, int(realIssueNumber), &gitlab.ListIssueDiscussionsOptions{ comments, resp, err = g.client.Discussions.ListIssueDiscussions(g.repoID, int(realIssueNumber), &gitlab.ListIssueDiscussionsOptions{
Page: page, Page: page,
PerPage: g.maxPerPage, PerPage: 100,
}, nil, gitlab.WithContext(g.ctx)) }, nil, gitlab.WithContext(g.ctx))
} else { } else {
// If this is a PR, we need to figure out the Gitlab/original PR ID to be passed below // If this is a PR, we need to figure out the Gitlab/original PR ID to be passed below
realIssueNumber = issueNumber - g.issueCount realIssueNumber = issueNumber - g.issueCount
comments, resp, err = g.client.Discussions.ListMergeRequestDiscussions(g.repoID, int(realIssueNumber), &gitlab.ListMergeRequestDiscussionsOptions{ comments, resp, err = g.client.Discussions.ListMergeRequestDiscussions(g.repoID, int(realIssueNumber), &gitlab.ListMergeRequestDiscussionsOptions{
Page: page, Page: page,
PerPage: g.maxPerPage, PerPage: 100,
}, nil, gitlab.WithContext(g.ctx)) }, nil, gitlab.WithContext(g.ctx))
} }
@@ -496,10 +465,6 @@ func (g *GitlabDownloader) GetComments(issueNumber int64) ([]*base.Comment, erro
// GetPullRequests returns pull requests according page and perPage // GetPullRequests returns pull requests according page and perPage
func (g *GitlabDownloader) GetPullRequests(page, perPage int) ([]*base.PullRequest, bool, error) { func (g *GitlabDownloader) GetPullRequests(page, perPage int) ([]*base.PullRequest, bool, error) {
if perPage > g.maxPerPage {
perPage = g.maxPerPage
}
opt := &gitlab.ListProjectMergeRequestsOptions{ opt := &gitlab.ListProjectMergeRequestsOptions{
ListOptions: gitlab.ListOptions{ ListOptions: gitlab.ListOptions{
PerPage: perPage, PerPage: perPage,
@@ -609,12 +574,8 @@ func (g *GitlabDownloader) GetPullRequests(page, perPage int) ([]*base.PullReque
// GetReviews returns pull requests review // GetReviews returns pull requests review
func (g *GitlabDownloader) GetReviews(pullRequestNumber int64) ([]*base.Review, error) { func (g *GitlabDownloader) GetReviews(pullRequestNumber int64) ([]*base.Review, error) {
state, resp, err := g.client.MergeRequestApprovals.GetApprovalState(g.repoID, int(pullRequestNumber), gitlab.WithContext(g.ctx)) state, _, err := g.client.MergeRequestApprovals.GetApprovalState(g.repoID, int(pullRequestNumber), gitlab.WithContext(g.ctx))
if err != nil { if err != nil {
if resp != nil && resp.StatusCode == 404 {
log.Error(fmt.Sprintf("GitlabDownloader: while migrating a error occurred: '%s'", err.Error()))
return []*base.Review{}, nil
}
return nil, err return nil, err
} }
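The removed GitLab constructor code walks leading segments of the repository path onto the base URL until an API probe succeeds, so installations served under a sub-path (for example /gitlab/) still resolve. A simplified sketch of that loop; probe stands in for gitlabClient.Version.GetVersion and the 401 tolerance of the original is omitted:

```go
package main

import (
	"fmt"
	"net/url"
	"path"
	"strings"
)

// splitBaseAndRepo moves path segments from the repo path onto the base URL
// until probe reports that the base URL answers like a GitLab API root.
func splitBaseAndRepo(baseURL, repoPath string, probe func(base string) bool) (string, string) {
	pathParts := strings.Split(strings.Trim(repoPath, "/"), "/")
	u, _ := url.Parse(baseURL) // error ignored in this sketch
	for len(pathParts) >= 2 {
		if probe(u.String()) {
			break
		}
		u.Path = path.Join(u.Path, pathParts[0])
		pathParts = pathParts[1:]
	}
	return u.String(), strings.Join(pathParts, "/")
}

func main() {
	probe := func(base string) bool { return strings.HasSuffix(base, "/gitlab") }
	base, repo := splitBaseAndRepo("https://example.com", "gitlab/owner/repo", probe)
	fmt.Println(base, repo) // https://example.com/gitlab owner/repo
}
```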

View File

@@ -8,13 +8,9 @@ package migrations
import ( import (
"context" "context"
"fmt" "fmt"
"net"
"net/url"
"strings"
"code.gitea.io/gitea/models" "code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/log" "code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/matchlist"
"code.gitea.io/gitea/modules/migrations/base" "code.gitea.io/gitea/modules/migrations/base"
"code.gitea.io/gitea/modules/setting" "code.gitea.io/gitea/modules/setting"
) )
@@ -24,9 +20,6 @@ type MigrateOptions = base.MigrateOptions
var ( var (
factories []base.DownloaderFactory factories []base.DownloaderFactory
allowList *matchlist.Matchlist
blockList *matchlist.Matchlist
) )
// RegisterDownloaderFactory registers a downloader factory // RegisterDownloaderFactory registers a downloader factory
@@ -34,49 +27,12 @@ func RegisterDownloaderFactory(factory base.DownloaderFactory) {
factories = append(factories, factory) factories = append(factories, factory)
} }
func isMigrateURLAllowed(remoteURL string) error {
u, err := url.Parse(strings.ToLower(remoteURL))
if err != nil {
return err
}
if strings.EqualFold(u.Scheme, "http") || strings.EqualFold(u.Scheme, "https") {
if len(setting.Migrations.AllowedDomains) > 0 {
if !allowList.Match(u.Host) {
return &models.ErrMigrationNotAllowed{Host: u.Host}
}
} else {
if blockList.Match(u.Host) {
return &models.ErrMigrationNotAllowed{Host: u.Host}
}
}
}
if !setting.Migrations.AllowLocalNetworks {
addrList, err := net.LookupIP(strings.Split(u.Host, ":")[0])
if err != nil {
return &models.ErrMigrationNotAllowed{Host: u.Host, NotResolvedIP: true}
}
for _, addr := range addrList {
if isIPPrivate(addr) || !addr.IsGlobalUnicast() {
return &models.ErrMigrationNotAllowed{Host: u.Host, PrivateNet: addr.String()}
}
}
}
return nil
}
// MigrateRepository migrate repository according MigrateOptions // MigrateRepository migrate repository according MigrateOptions
func MigrateRepository(ctx context.Context, doer *models.User, ownerName string, opts base.MigrateOptions) (*models.Repository, error) { func MigrateRepository(ctx context.Context, doer *models.User, ownerName string, opts base.MigrateOptions) (*models.Repository, error) {
err := isMigrateURLAllowed(opts.CloneAddr)
if err != nil {
return nil, err
}
var ( var (
downloader base.Downloader downloader base.Downloader
uploader = NewGiteaLocalUploader(ctx, doer, ownerName, opts.RepoName) uploader = NewGiteaLocalUploader(ctx, doer, ownerName, opts.RepoName)
err error
) )
for _, factory := range factories { for _, factory := range factories {
@@ -113,7 +69,7 @@ func MigrateRepository(ctx context.Context, doer *models.User, ownerName string,
} }
if err2 := models.CreateRepositoryNotice(fmt.Sprintf("Migrate repository from %s failed: %v", opts.OriginalURL, err)); err2 != nil { if err2 := models.CreateRepositoryNotice(fmt.Sprintf("Migrate repository from %s failed: %v", opts.OriginalURL, err)); err2 != nil {
log.Error("create repository notice failed: ", err2) log.Error("create respotiry notice failed: ", err2)
} }
return nil, err return nil, err
} }
@@ -352,32 +308,3 @@ func migrateRepository(downloader base.Downloader, uploader base.Uploader, opts
return nil return nil
} }
// Init migrations service
func Init() error {
var err error
allowList, err = matchlist.NewMatchlist(setting.Migrations.AllowedDomains...)
if err != nil {
return fmt.Errorf("init migration allowList domains failed: %v", err)
}
blockList, err = matchlist.NewMatchlist(setting.Migrations.BlockedDomains...)
if err != nil {
return fmt.Errorf("init migration blockList domains failed: %v", err)
}
return nil
}
// isIPPrivate reports whether ip is a private address, according to
// RFC 1918 (IPv4 addresses) and RFC 4193 (IPv6 addresses).
// from https://github.com/golang/go/pull/42793
// TODO remove if https://github.com/golang/go/issues/29146 got resolved
func isIPPrivate(ip net.IP) bool {
if ip4 := ip.To4(); ip4 != nil {
return ip4[0] == 10 ||
(ip4[0] == 172 && ip4[1]&0xf0 == 16) ||
(ip4[0] == 192 && ip4[1] == 168)
}
return len(ip) == net.IPv6len && ip[0]&0xfe == 0xfc
}
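isIPPrivate above is self-contained, so a short usage example is enough to show what the migration URL check rejects; the helper is reproduced verbatim here only so the example compiles on its own:

```go
package main

import (
	"fmt"
	"net"
)

// Same check as the helper above: RFC 1918 IPv4 ranges and RFC 4193 IPv6 ULAs.
func isIPPrivate(ip net.IP) bool {
	if ip4 := ip.To4(); ip4 != nil {
		return ip4[0] == 10 ||
			(ip4[0] == 172 && ip4[1]&0xf0 == 16) ||
			(ip4[0] == 192 && ip4[1] == 168)
	}
	return len(ip) == net.IPv6len && ip[0]&0xfe == 0xfc
}

func main() {
	// 10.0.0.1 true, 172.20.1.1 true, 8.8.8.8 false, fd12::1 true
	for _, s := range []string{"10.0.0.1", "172.20.1.1", "8.8.8.8", "fd12::1"} {
		fmt.Println(s, isIPPrivate(net.ParseIP(s)))
	}
}
```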

View File

@@ -1,34 +0,0 @@
// Copyright 2019 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package migrations
import (
"testing"
"code.gitea.io/gitea/modules/setting"
"github.com/stretchr/testify/assert"
)
func TestMigrateWhiteBlocklist(t *testing.T) {
setting.Migrations.AllowedDomains = []string{"github.com"}
assert.NoError(t, Init())
err := isMigrateURLAllowed("https://gitlab.com/gitlab/gitlab.git")
assert.Error(t, err)
err = isMigrateURLAllowed("https://github.com/go-gitea/gitea.git")
assert.NoError(t, err)
setting.Migrations.AllowedDomains = []string{}
setting.Migrations.BlockedDomains = []string{"github.com"}
assert.NoError(t, Init())
err = isMigrateURLAllowed("https://gitlab.com/gitlab/gitlab.git")
assert.NoError(t, err)
err = isMigrateURLAllowed("https://github.com/go-gitea/gitea.git")
assert.Error(t, err)
}

View File

@@ -29,7 +29,7 @@ func NewNotifier() base.Notifier {
return &actionNotifier{} return &actionNotifier{}
} }
func (a *actionNotifier) NotifyNewIssue(issue *models.Issue, mentions []*models.User) { func (a *actionNotifier) NotifyNewIssue(issue *models.Issue) {
if err := issue.LoadPoster(); err != nil { if err := issue.LoadPoster(); err != nil {
log.Error("issue.LoadPoster: %v", err) log.Error("issue.LoadPoster: %v", err)
return return
@@ -88,7 +88,7 @@ func (a *actionNotifier) NotifyIssueChangeStatus(doer *models.User, issue *model
// NotifyCreateIssueComment notifies comment on an issue to notifiers // NotifyCreateIssueComment notifies comment on an issue to notifiers
func (a *actionNotifier) NotifyCreateIssueComment(doer *models.User, repo *models.Repository, func (a *actionNotifier) NotifyCreateIssueComment(doer *models.User, repo *models.Repository,
issue *models.Issue, comment *models.Comment, mentions []*models.User) { issue *models.Issue, comment *models.Comment) {
act := &models.Action{ act := &models.Action{
ActUserID: doer.ID, ActUserID: doer.ID,
ActUser: doer, ActUser: doer,
@@ -120,7 +120,7 @@ func (a *actionNotifier) NotifyCreateIssueComment(doer *models.User, repo *model
} }
} }
func (a *actionNotifier) NotifyNewPullRequest(pull *models.PullRequest, mentions []*models.User) { func (a *actionNotifier) NotifyNewPullRequest(pull *models.PullRequest) {
if err := pull.LoadIssue(); err != nil { if err := pull.LoadIssue(); err != nil {
log.Error("pull.LoadIssue: %v", err) log.Error("pull.LoadIssue: %v", err)
return return
@@ -203,7 +203,7 @@ func (a *actionNotifier) NotifyForkRepository(doer *models.User, oldRepo, repo *
} }
} }
func (a *actionNotifier) NotifyPullRequestReview(pr *models.PullRequest, review *models.Review, comment *models.Comment, mentions []*models.User) { func (a *actionNotifier) NotifyPullRequestReview(pr *models.PullRequest, review *models.Review, comment *models.Comment) {
if err := review.LoadReviewer(); err != nil { if err := review.LoadReviewer(); err != nil {
log.Error("LoadReviewer '%d/%d': %v", review.ID, review.ReviewerID, err) log.Error("LoadReviewer '%d/%d': %v", review.ID, review.ReviewerID, err)
return return
@@ -314,7 +314,7 @@ func (a *actionNotifier) NotifySyncDeleteRef(doer *models.User, repo *models.Rep
if err := models.NotifyWatchers(&models.Action{ if err := models.NotifyWatchers(&models.Action{
ActUserID: repo.OwnerID, ActUserID: repo.OwnerID,
ActUser: repo.MustOwner(), ActUser: repo.MustOwner(),
OpType: models.ActionMirrorSyncDelete, OpType: models.ActionMirrorSyncCreate,
RepoID: repo.ID, RepoID: repo.ID,
Repo: repo, Repo: repo,
IsPrivate: repo.IsPrivate, IsPrivate: repo.IsPrivate,

View File

@@ -20,7 +20,7 @@ type Notifier interface {
NotifyRenameRepository(doer *models.User, repo *models.Repository, oldRepoName string) NotifyRenameRepository(doer *models.User, repo *models.Repository, oldRepoName string)
NotifyTransferRepository(doer *models.User, repo *models.Repository, oldOwnerName string) NotifyTransferRepository(doer *models.User, repo *models.Repository, oldOwnerName string)
NotifyNewIssue(issue *models.Issue, mentions []*models.User) NotifyNewIssue(*models.Issue)
NotifyIssueChangeStatus(*models.User, *models.Issue, *models.Comment, bool) NotifyIssueChangeStatus(*models.User, *models.Issue, *models.Comment, bool)
NotifyIssueChangeMilestone(doer *models.User, issue *models.Issue, oldMilestoneID int64) NotifyIssueChangeMilestone(doer *models.User, issue *models.Issue, oldMilestoneID int64)
NotifyIssueChangeAssignee(doer *models.User, issue *models.Issue, assignee *models.User, removed bool, comment *models.Comment) NotifyIssueChangeAssignee(doer *models.User, issue *models.Issue, assignee *models.User, removed bool, comment *models.Comment)
@@ -32,16 +32,15 @@ type Notifier interface {
NotifyIssueChangeLabels(doer *models.User, issue *models.Issue, NotifyIssueChangeLabels(doer *models.User, issue *models.Issue,
addedLabels []*models.Label, removedLabels []*models.Label) addedLabels []*models.Label, removedLabels []*models.Label)
NotifyNewPullRequest(pr *models.PullRequest, mentions []*models.User) NotifyNewPullRequest(*models.PullRequest)
NotifyMergePullRequest(*models.PullRequest, *models.User) NotifyMergePullRequest(*models.PullRequest, *models.User)
NotifyPullRequestSynchronized(doer *models.User, pr *models.PullRequest) NotifyPullRequestSynchronized(doer *models.User, pr *models.PullRequest)
NotifyPullRequestReview(pr *models.PullRequest, review *models.Review, comment *models.Comment, mentions []*models.User) NotifyPullRequestReview(*models.PullRequest, *models.Review, *models.Comment)
NotifyPullRequestCodeComment(pr *models.PullRequest, comment *models.Comment, mentions []*models.User)
NotifyPullRequestChangeTargetBranch(doer *models.User, pr *models.PullRequest, oldBranch string) NotifyPullRequestChangeTargetBranch(doer *models.User, pr *models.PullRequest, oldBranch string)
NotifyPullRequestPushCommits(doer *models.User, pr *models.PullRequest, comment *models.Comment) NotifyPullRequestPushCommits(doer *models.User, pr *models.PullRequest, comment *models.Comment)
NotifyCreateIssueComment(doer *models.User, repo *models.Repository, NotifyCreateIssueComment(*models.User, *models.Repository,
issue *models.Issue, comment *models.Comment, mentions []*models.User) *models.Issue, *models.Comment)
NotifyUpdateComment(*models.User, *models.Comment, string) NotifyUpdateComment(*models.User, *models.Comment, string)
NotifyDeleteComment(*models.User, *models.Comment) NotifyDeleteComment(*models.User, *models.Comment)

View File

@@ -23,11 +23,11 @@ func (*NullNotifier) Run() {
// NotifyCreateIssueComment places a place holder function // NotifyCreateIssueComment places a place holder function
func (*NullNotifier) NotifyCreateIssueComment(doer *models.User, repo *models.Repository, func (*NullNotifier) NotifyCreateIssueComment(doer *models.User, repo *models.Repository,
issue *models.Issue, comment *models.Comment, mentions []*models.User) { issue *models.Issue, comment *models.Comment) {
} }
// NotifyNewIssue places a place holder function // NotifyNewIssue places a place holder function
func (*NullNotifier) NotifyNewIssue(issue *models.Issue, mentions []*models.User) { func (*NullNotifier) NotifyNewIssue(issue *models.Issue) {
} }
// NotifyIssueChangeStatus places a place holder function // NotifyIssueChangeStatus places a place holder function
@@ -35,15 +35,11 @@ func (*NullNotifier) NotifyIssueChangeStatus(doer *models.User, issue *models.Is
} }
// NotifyNewPullRequest places a place holder function // NotifyNewPullRequest places a place holder function
func (*NullNotifier) NotifyNewPullRequest(pr *models.PullRequest, mentions []*models.User) { func (*NullNotifier) NotifyNewPullRequest(pr *models.PullRequest) {
} }
// NotifyPullRequestReview places a place holder function // NotifyPullRequestReview places a place holder function
func (*NullNotifier) NotifyPullRequestReview(pr *models.PullRequest, r *models.Review, comment *models.Comment, mentions []*models.User) { func (*NullNotifier) NotifyPullRequestReview(pr *models.PullRequest, r *models.Review, comment *models.Comment) {
}
// NotifyPullRequestCodeComment places a place holder function
func (*NullNotifier) NotifyPullRequestCodeComment(pr *models.PullRequest, comment *models.Comment, mentions []*models.User) {
} }
// NotifyMergePullRequest places a place holder function // NotifyMergePullRequest places a place holder function

View File

@@ -30,7 +30,7 @@ func NewNotifier() base.Notifier {
} }
func (r *indexerNotifier) NotifyCreateIssueComment(doer *models.User, repo *models.Repository, func (r *indexerNotifier) NotifyCreateIssueComment(doer *models.User, repo *models.Repository,
issue *models.Issue, comment *models.Comment, mentions []*models.User) { issue *models.Issue, comment *models.Comment) {
if comment.Type == models.CommentTypeComment { if comment.Type == models.CommentTypeComment {
if issue.Comments == nil { if issue.Comments == nil {
if err := issue.LoadDiscussComments(); err != nil { if err := issue.LoadDiscussComments(); err != nil {
@@ -45,11 +45,11 @@ func (r *indexerNotifier) NotifyCreateIssueComment(doer *models.User, repo *mode
} }
} }
func (r *indexerNotifier) NotifyNewIssue(issue *models.Issue, mentions []*models.User) { func (r *indexerNotifier) NotifyNewIssue(issue *models.Issue) {
issue_indexer.UpdateIssueIndexer(issue) issue_indexer.UpdateIssueIndexer(issue)
} }
func (r *indexerNotifier) NotifyNewPullRequest(pr *models.PullRequest, mentions []*models.User) { func (r *indexerNotifier) NotifyNewPullRequest(pr *models.PullRequest) {
issue_indexer.UpdateIssueIndexer(pr.Issue) issue_indexer.UpdateIssueIndexer(pr.Issue)
} }

View File

@@ -27,7 +27,7 @@ func NewNotifier() base.Notifier {
} }
func (m *mailNotifier) NotifyCreateIssueComment(doer *models.User, repo *models.Repository, func (m *mailNotifier) NotifyCreateIssueComment(doer *models.User, repo *models.Repository,
issue *models.Issue, comment *models.Comment, mentions []*models.User) { issue *models.Issue, comment *models.Comment) {
var act models.ActionType var act models.ActionType
if comment.Type == models.CommentTypeClose { if comment.Type == models.CommentTypeClose {
act = models.ActionCloseIssue act = models.ActionCloseIssue
@@ -41,13 +41,13 @@ func (m *mailNotifier) NotifyCreateIssueComment(doer *models.User, repo *models.
act = 0 act = 0
} }
if err := mailer.MailParticipantsComment(comment, act, issue, mentions); err != nil { if err := mailer.MailParticipantsComment(comment, act, issue); err != nil {
log.Error("MailParticipantsComment: %v", err) log.Error("MailParticipantsComment: %v", err)
} }
} }
func (m *mailNotifier) NotifyNewIssue(issue *models.Issue, mentions []*models.User) { func (m *mailNotifier) NotifyNewIssue(issue *models.Issue) {
if err := mailer.MailParticipants(issue, issue.Poster, models.ActionCreateIssue, mentions); err != nil { if err := mailer.MailParticipants(issue, issue.Poster, models.ActionCreateIssue); err != nil {
log.Error("MailParticipants: %v", err) log.Error("MailParticipants: %v", err)
} }
} }
@@ -69,18 +69,18 @@ func (m *mailNotifier) NotifyIssueChangeStatus(doer *models.User, issue *models.
} }
} }
if err := mailer.MailParticipants(issue, doer, actionType, nil); err != nil { if err := mailer.MailParticipants(issue, doer, actionType); err != nil {
log.Error("MailParticipants: %v", err) log.Error("MailParticipants: %v", err)
} }
} }
func (m *mailNotifier) NotifyNewPullRequest(pr *models.PullRequest, mentions []*models.User) { func (m *mailNotifier) NotifyNewPullRequest(pr *models.PullRequest) {
if err := mailer.MailParticipants(pr.Issue, pr.Issue.Poster, models.ActionCreatePullRequest, mentions); err != nil { if err := mailer.MailParticipants(pr.Issue, pr.Issue.Poster, models.ActionCreatePullRequest); err != nil {
log.Error("MailParticipants: %v", err) log.Error("MailParticipants: %v", err)
} }
} }
func (m *mailNotifier) NotifyPullRequestReview(pr *models.PullRequest, r *models.Review, comment *models.Comment, mentions []*models.User) { func (m *mailNotifier) NotifyPullRequestReview(pr *models.PullRequest, r *models.Review, comment *models.Comment) {
var act models.ActionType var act models.ActionType
if comment.Type == models.CommentTypeClose { if comment.Type == models.CommentTypeClose {
act = models.ActionCloseIssue act = models.ActionCloseIssue
@@ -89,17 +89,11 @@ func (m *mailNotifier) NotifyPullRequestReview(pr *models.PullRequest, r *models
} else if comment.Type == models.CommentTypeComment { } else if comment.Type == models.CommentTypeComment {
act = models.ActionCommentPull act = models.ActionCommentPull
} }
if err := mailer.MailParticipantsComment(comment, act, pr.Issue, mentions); err != nil { if err := mailer.MailParticipantsComment(comment, act, pr.Issue); err != nil {
log.Error("MailParticipantsComment: %v", err) log.Error("MailParticipantsComment: %v", err)
} }
} }
func (m *mailNotifier) NotifyPullRequestCodeComment(pr *models.PullRequest, comment *models.Comment, mentions []*models.User) {
if err := mailer.MailMentionsComment(pr, comment, mentions); err != nil {
log.Error("MailMentionsComment: %v", err)
}
}
func (m *mailNotifier) NotifyIssueChangeAssignee(doer *models.User, issue *models.Issue, assignee *models.User, removed bool, comment *models.Comment) { func (m *mailNotifier) NotifyIssueChangeAssignee(doer *models.User, issue *models.Issue, assignee *models.User, removed bool, comment *models.Comment) {
// mail only sent to added assignees and not self-assignee // mail only sent to added assignees and not self-assignee
if !removed && doer.ID != assignee.ID && assignee.EmailNotifications() == models.EmailNotificationsEnabled { if !removed && doer.ID != assignee.ID && assignee.EmailNotifications() == models.EmailNotificationsEnabled {
@@ -121,7 +115,7 @@ func (m *mailNotifier) NotifyMergePullRequest(pr *models.PullRequest, doer *mode
return return
} }
pr.Issue.Content = "" pr.Issue.Content = ""
if err := mailer.MailParticipants(pr.Issue, doer, models.ActionMergePullRequest, nil); err != nil { if err := mailer.MailParticipants(pr.Issue, doer, models.ActionMergePullRequest); err != nil {
log.Error("MailParticipants: %v", err) log.Error("MailParticipants: %v", err)
} }
} }
@@ -149,7 +143,7 @@ func (m *mailNotifier) NotifyPullRequestPushCommits(doer *models.User, pr *model
} }
comment.Content = "" comment.Content = ""
m.NotifyCreateIssueComment(doer, comment.Issue.Repo, comment.Issue, comment, nil) m.NotifyCreateIssueComment(doer, comment.Issue.Repo, comment.Issue, comment)
} }
func (m *mailNotifier) NotifyNewRelease(rel *models.Release) { func (m *mailNotifier) NotifyNewRelease(rel *models.Release) {

View File

@@ -39,16 +39,16 @@ func NewContext() {
// NotifyCreateIssueComment notifies issue comment related message to notifiers // NotifyCreateIssueComment notifies issue comment related message to notifiers
func NotifyCreateIssueComment(doer *models.User, repo *models.Repository, func NotifyCreateIssueComment(doer *models.User, repo *models.Repository,
issue *models.Issue, comment *models.Comment, mentions []*models.User) { issue *models.Issue, comment *models.Comment) {
for _, notifier := range notifiers { for _, notifier := range notifiers {
notifier.NotifyCreateIssueComment(doer, repo, issue, comment, mentions) notifier.NotifyCreateIssueComment(doer, repo, issue, comment)
} }
} }
// NotifyNewIssue notifies new issue to notifiers // NotifyNewIssue notifies new issue to notifiers
func NotifyNewIssue(issue *models.Issue, mentions []*models.User) { func NotifyNewIssue(issue *models.Issue) {
for _, notifier := range notifiers { for _, notifier := range notifiers {
notifier.NotifyNewIssue(issue, mentions) notifier.NotifyNewIssue(issue)
} }
} }
@@ -67,9 +67,9 @@ func NotifyMergePullRequest(pr *models.PullRequest, doer *models.User) {
} }
// NotifyNewPullRequest notifies new pull request to notifiers // NotifyNewPullRequest notifies new pull request to notifiers
func NotifyNewPullRequest(pr *models.PullRequest, mentions []*models.User) { func NotifyNewPullRequest(pr *models.PullRequest) {
for _, notifier := range notifiers { for _, notifier := range notifiers {
notifier.NotifyNewPullRequest(pr, mentions) notifier.NotifyNewPullRequest(pr)
} }
} }
@@ -81,16 +81,9 @@ func NotifyPullRequestSynchronized(doer *models.User, pr *models.PullRequest) {
} }
// NotifyPullRequestReview notifies new pull request review // NotifyPullRequestReview notifies new pull request review
func NotifyPullRequestReview(pr *models.PullRequest, review *models.Review, comment *models.Comment, mentions []*models.User) { func NotifyPullRequestReview(pr *models.PullRequest, review *models.Review, comment *models.Comment) {
for _, notifier := range notifiers { for _, notifier := range notifiers {
notifier.NotifyPullRequestReview(pr, review, comment, mentions) notifier.NotifyPullRequestReview(pr, review, comment)
}
}
// NotifyPullRequestCodeComment notifies new pull request code comment
func NotifyPullRequestCodeComment(pr *models.PullRequest, comment *models.Comment, mentions []*models.User) {
for _, notifier := range notifiers {
notifier.NotifyPullRequestCodeComment(pr, comment, mentions)
} }
} }

View File

@@ -51,7 +51,7 @@ func (ns *notificationService) Run() {
} }
func (ns *notificationService) NotifyCreateIssueComment(doer *models.User, repo *models.Repository, func (ns *notificationService) NotifyCreateIssueComment(doer *models.User, repo *models.Repository,
issue *models.Issue, comment *models.Comment, mentions []*models.User) { issue *models.Issue, comment *models.Comment) {
var opts = issueNotificationOpts{ var opts = issueNotificationOpts{
IssueID: issue.ID, IssueID: issue.ID,
NotificationAuthorID: doer.ID, NotificationAuthorID: doer.ID,
@@ -60,31 +60,13 @@ func (ns *notificationService) NotifyCreateIssueComment(doer *models.User, repo
opts.CommentID = comment.ID opts.CommentID = comment.ID
} }
_ = ns.issueQueue.Push(opts) _ = ns.issueQueue.Push(opts)
for _, mention := range mentions {
var opts = issueNotificationOpts{
IssueID: issue.ID,
NotificationAuthorID: doer.ID,
ReceiverID: mention.ID,
}
if comment != nil {
opts.CommentID = comment.ID
}
_ = ns.issueQueue.Push(opts)
}
} }
func (ns *notificationService) NotifyNewIssue(issue *models.Issue, mentions []*models.User) { func (ns *notificationService) NotifyNewIssue(issue *models.Issue) {
_ = ns.issueQueue.Push(issueNotificationOpts{ _ = ns.issueQueue.Push(issueNotificationOpts{
IssueID: issue.ID, IssueID: issue.ID,
NotificationAuthorID: issue.Poster.ID, NotificationAuthorID: issue.Poster.ID,
}) })
for _, mention := range mentions {
_ = ns.issueQueue.Push(issueNotificationOpts{
IssueID: issue.ID,
NotificationAuthorID: issue.Poster.ID,
ReceiverID: mention.ID,
})
}
} }
func (ns *notificationService) NotifyIssueChangeStatus(doer *models.User, issue *models.Issue, actionComment *models.Comment, isClosed bool) { func (ns *notificationService) NotifyIssueChangeStatus(doer *models.User, issue *models.Issue, actionComment *models.Comment, isClosed bool) {
@@ -101,7 +83,7 @@ func (ns *notificationService) NotifyMergePullRequest(pr *models.PullRequest, do
}) })
} }
func (ns *notificationService) NotifyNewPullRequest(pr *models.PullRequest, mentions []*models.User) { func (ns *notificationService) NotifyNewPullRequest(pr *models.PullRequest) {
if err := pr.LoadIssue(); err != nil { if err := pr.LoadIssue(); err != nil {
log.Error("Unable to load issue: %d for pr: %d: Error: %v", pr.IssueID, pr.ID, err) log.Error("Unable to load issue: %d for pr: %d: Error: %v", pr.IssueID, pr.ID, err)
return return
@@ -110,16 +92,9 @@ func (ns *notificationService) NotifyNewPullRequest(pr *models.PullRequest, ment
IssueID: pr.Issue.ID, IssueID: pr.Issue.ID,
NotificationAuthorID: pr.Issue.PosterID, NotificationAuthorID: pr.Issue.PosterID,
}) })
for _, mention := range mentions {
_ = ns.issueQueue.Push(issueNotificationOpts{
IssueID: pr.Issue.ID,
NotificationAuthorID: pr.Issue.PosterID,
ReceiverID: mention.ID,
})
}
} }
func (ns *notificationService) NotifyPullRequestReview(pr *models.PullRequest, r *models.Review, c *models.Comment, mentions []*models.User) { func (ns *notificationService) NotifyPullRequestReview(pr *models.PullRequest, r *models.Review, c *models.Comment) {
var opts = issueNotificationOpts{ var opts = issueNotificationOpts{
IssueID: pr.Issue.ID, IssueID: pr.Issue.ID,
NotificationAuthorID: r.Reviewer.ID, NotificationAuthorID: r.Reviewer.ID,
@@ -128,28 +103,6 @@ func (ns *notificationService) NotifyPullRequestReview(pr *models.PullRequest, r
opts.CommentID = c.ID opts.CommentID = c.ID
} }
_ = ns.issueQueue.Push(opts) _ = ns.issueQueue.Push(opts)
for _, mention := range mentions {
var opts = issueNotificationOpts{
IssueID: pr.Issue.ID,
NotificationAuthorID: r.Reviewer.ID,
ReceiverID: mention.ID,
}
if c != nil {
opts.CommentID = c.ID
}
_ = ns.issueQueue.Push(opts)
}
}
func (ns *notificationService) NotifyPullRequestCodeComment(pr *models.PullRequest, c *models.Comment, mentions []*models.User) {
for _, mention := range mentions {
_ = ns.issueQueue.Push(issueNotificationOpts{
IssueID: pr.Issue.ID,
NotificationAuthorID: c.Poster.ID,
CommentID: c.ID,
ReceiverID: mention.ID,
})
}
} }
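The removed loops above push one extra queue entry per mentioned user, so mentions produce notifications even for users who do not watch the issue. A standalone sketch of how those per-receiver options are built, using a local copy of the options struct from this file:

```go
package main

import "fmt"

// Stand-in for the queue item used by the notification service above.
type issueNotificationOpts struct {
	IssueID              int64
	NotificationAuthorID int64
	CommentID            int64
	ReceiverID           int64
}

// mentionOpts builds one targeted entry per mentioned user, in addition to
// the usual "notify all watchers" entry pushed elsewhere.
func mentionOpts(issueID, authorID, commentID int64, mentionIDs []int64) []issueNotificationOpts {
	opts := make([]issueNotificationOpts, 0, len(mentionIDs))
	for _, id := range mentionIDs {
		opts = append(opts, issueNotificationOpts{
			IssueID:              issueID,
			NotificationAuthorID: authorID,
			CommentID:            commentID,
			ReceiverID:           id,
		})
	}
	return opts
}

func main() {
	fmt.Println(mentionOpts(42, 1, 99, []int64{7, 8}))
}
```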
func (ns *notificationService) NotifyPullRequestPushCommits(doer *models.User, pr *models.PullRequest, comment *models.Comment) { func (ns *notificationService) NotifyPullRequestPushCommits(doer *models.User, pr *models.PullRequest, comment *models.Comment) {

View File

@@ -249,7 +249,7 @@ func (m *webhookNotifier) NotifyIssueChangeStatus(doer *models.User, issue *mode
} }
} }
func (m *webhookNotifier) NotifyNewIssue(issue *models.Issue, mentions []*models.User) { func (m *webhookNotifier) NotifyNewIssue(issue *models.Issue) {
if err := issue.LoadRepo(); err != nil { if err := issue.LoadRepo(); err != nil {
log.Error("issue.LoadRepo: %v", err) log.Error("issue.LoadRepo: %v", err)
return return
@@ -271,7 +271,7 @@ func (m *webhookNotifier) NotifyNewIssue(issue *models.Issue, mentions []*models
} }
} }
func (m *webhookNotifier) NotifyNewPullRequest(pull *models.PullRequest, mentions []*models.User) { func (m *webhookNotifier) NotifyNewPullRequest(pull *models.PullRequest) {
if err := pull.LoadIssue(); err != nil { if err := pull.LoadIssue(); err != nil {
log.Error("pull.LoadIssue: %v", err) log.Error("pull.LoadIssue: %v", err)
return return
@@ -387,7 +387,7 @@ func (m *webhookNotifier) NotifyUpdateComment(doer *models.User, c *models.Comme
} }
func (m *webhookNotifier) NotifyCreateIssueComment(doer *models.User, repo *models.Repository, func (m *webhookNotifier) NotifyCreateIssueComment(doer *models.User, repo *models.Repository,
issue *models.Issue, comment *models.Comment, mentions []*models.User) { issue *models.Issue, comment *models.Comment) {
mode, _ := models.AccessLevel(doer, repo) mode, _ := models.AccessLevel(doer, repo)
var err error var err error
@@ -639,7 +639,7 @@ func (m *webhookNotifier) NotifyPullRequestChangeTargetBranch(doer *models.User,
} }
} }
func (m *webhookNotifier) NotifyPullRequestReview(pr *models.PullRequest, review *models.Review, comment *models.Comment, mentions []*models.User) { func (m *webhookNotifier) NotifyPullRequestReview(pr *models.PullRequest, review *models.Review, comment *models.Comment) {
var reviewHookType models.HookEventType var reviewHookType models.HookEventType
switch review.Type { switch review.Type {
@@ -797,11 +797,3 @@ func (m *webhookNotifier) NotifySyncPushCommits(pusher *models.User, repo *model
log.Error("PrepareWebhooks: %v", err) log.Error("PrepareWebhooks: %v", err)
} }
} }
func (m *webhookNotifier) NotifySyncCreateRef(pusher *models.User, repo *models.Repository, refType, refFullName string) {
m.NotifyCreateRef(pusher, repo, refType, refFullName)
}
func (m *webhookNotifier) NotifySyncDeleteRef(pusher *models.User, repo *models.Repository, refType, refFullName string) {
m.NotifyDeleteRef(pusher, repo, refType, refFullName)
}

View File

@@ -38,7 +38,6 @@ var KnownPublicEntries = []string{
"js", "js",
"serviceworker.js", "serviceworker.js",
"vendor", "vendor",
"favicon.ico",
} }
// Custom implements the macaron static handler for serving custom assets. // Custom implements the macaron static handler for serving custom assets.

View File

@@ -235,78 +235,40 @@ func findAllIssueReferencesMarkdown(content string) []*rawReference {
return findAllIssueReferencesBytes(bcontent, links) return findAllIssueReferencesBytes(bcontent, links)
} }
func convertFullHTMLReferencesToShortRefs(re *regexp.Regexp, contentBytes *[]byte) {
// We will iterate through the content, rewrite and simplify full references.
//
// We want to transform something like:
//
// this is a https://ourgitea.com/git/owner/repo/issues/123456789, foo
// https://ourgitea.com/git/owner/repo/pulls/123456789
//
// Into something like:
//
// this is a #123456789, foo
// !123456789
pos := 0
for {
// re looks for something like: (\s|^|\(|\[)https://ourgitea.com/git/(owner/repo)/(issues)/(123456789)(?:\s|$|\)|\]|[:;,.?!]\s|[:;,.?!]$)
match := re.FindSubmatchIndex((*contentBytes)[pos:])
if match == nil {
break
}
// match is a bunch of indices into the content from pos onwards so
// to simplify things let's just add pos to all of the indices in match
for i := range match {
match[i] += pos
}
// match[0]-match[1] is whole string
// match[2]-match[3] is preamble
// move the position to the end of the preamble
pos = match[3]
// match[4]-match[5] is owner/repo
// now copy the owner/repo to end of the preamble
endPos := pos + match[5] - match[4]
copy((*contentBytes)[pos:endPos], (*contentBytes)[match[4]:match[5]])
// move the current position to the end of the newly copied owner/repo
pos = endPos
// Now set the issue/pull marker:
//
// match[6]-match[7] == 'issues'
(*contentBytes)[pos] = '#'
if string((*contentBytes)[match[6]:match[7]]) == "pulls" {
(*contentBytes)[pos] = '!'
}
pos++
// Then add the issue/pull number
//
// match[8]-match[9] is the number
endPos = pos + match[9] - match[8]
copy((*contentBytes)[pos:endPos], (*contentBytes)[match[8]:match[9]])
// Now copy what's left at the end of the string to the new end position
copy((*contentBytes)[endPos:], (*contentBytes)[match[9]:])
// now we reset the length
// our new section has length endPos - match[3]
// our old section has length match[9] - match[3]
(*contentBytes) = (*contentBytes)[:len((*contentBytes))-match[9]+endPos]
pos = endPos
}
}
// FindAllIssueReferences returns a list of unvalidated references found in a string. // FindAllIssueReferences returns a list of unvalidated references found in a string.
func FindAllIssueReferences(content string) []IssueReference { func FindAllIssueReferences(content string) []IssueReference {
// Need to convert fully qualified html references to local system to #/! short codes // Need to convert fully qualified html references to local system to #/! short codes
contentBytes := []byte(content) contentBytes := []byte(content)
if re := getGiteaIssuePullPattern(); re != nil { if re := getGiteaIssuePullPattern(); re != nil {
convertFullHTMLReferencesToShortRefs(re, &contentBytes) pos := 0
for {
match := re.FindSubmatchIndex(contentBytes[pos:])
if match == nil {
break
}
// match[0]-match[1] is whole string
// match[2]-match[3] is preamble
pos += match[3]
// match[4]-match[5] is owner/repo
endPos := pos + match[5] - match[4]
copy(contentBytes[pos:endPos], contentBytes[match[4]:match[5]])
pos = endPos
// match[6]-match[7] == 'issues'
contentBytes[pos] = '#'
if string(contentBytes[match[6]:match[7]]) == "pulls" {
contentBytes[pos] = '!'
}
pos++
// match[8]-match[9] is the number
endPos = pos + match[9] - match[8]
copy(contentBytes[pos:endPos], contentBytes[match[8]:match[9]])
copy(contentBytes[endPos:], contentBytes[match[9]:])
// now we reset the length
// our new section has length endPos - match[3]
// our old section has length match[9] - match[3]
contentBytes = contentBytes[:len(contentBytes)-match[9]+endPos]
pos = endPos
}
} else { } else {
log.Debug("No GiteaIssuePullPattern pattern") log.Debug("No GiteaIssuePullPattern pattern")
} }
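The convertFullHTMLReferencesToShortRefs helper shown above rewrites fully qualified issue and pull URLs into owner/repo#N / owner/repo!N short references, working in place on a byte slice. A minimal standalone sketch of the same transformation, assuming a fixed host and using regexp.ReplaceAllStringFunc instead of the in-place copy logic:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Simplified illustration only: collapse fully qualified issue/pull URLs on a
    // known Gitea host into owner/repo#N or owner/repo!N short references.
    func main() {
        re := regexp.MustCompile(`https://ourgitea\.com/git/([\w.-]+/[\w.-]+)/(issues|pulls)/([0-9]+)`)
        in := "this is a https://ourgitea.com/git/owner/repo/issues/123456789, foo"
        out := re.ReplaceAllStringFunc(in, func(m string) string {
            parts := re.FindStringSubmatch(m)
            marker := "#"
            if parts[2] == "pulls" {
                marker = "!"
            }
            return parts[1] + marker + parts[3]
        })
        fmt.Println(out) // this is a owner/repo#123456789, foo
    }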

View File

@@ -5,7 +5,6 @@
package references package references
import ( import (
"regexp"
"testing" "testing"
"code.gitea.io/gitea/modules/setting" "code.gitea.io/gitea/modules/setting"
@@ -30,26 +29,6 @@ type testResult struct {
TimeLog string TimeLog string
} }
func TestConvertFullHTMLReferencesToShortRefs(t *testing.T) {
re := regexp.MustCompile(`(\s|^|\(|\[)` +
regexp.QuoteMeta("https://ourgitea.com/git/") +
`([0-9a-zA-Z-_\.]+/[0-9a-zA-Z-_\.]+)/` +
`((?:issues)|(?:pulls))/([0-9]+)(?:\s|$|\)|\]|[:;,.?!]\s|[:;,.?!]$)`)
test := `this is a https://ourgitea.com/git/owner/repo/issues/123456789, foo
https://ourgitea.com/git/owner/repo/pulls/123456789
And https://ourgitea.com/git/owner/repo/pulls/123
`
expect := `this is a owner/repo#123456789, foo
owner/repo!123456789
And owner/repo!123
`
contentBytes := []byte(test)
convertFullHTMLReferencesToShortRefs(re, &contentBytes)
result := string(contentBytes)
assert.EqualValues(t, expect, result)
}
func TestFindAllIssueReferences(t *testing.T) { func TestFindAllIssueReferences(t *testing.T) {
fixtures := []testFixture{ fixtures := []testFixture{
@@ -127,13 +106,6 @@ func TestFindAllIssueReferences(t *testing.T) {
{202, "user4", "repo5", "202", true, XRefActionNone, nil, nil, ""}, {202, "user4", "repo5", "202", true, XRefActionNone, nil, nil, ""},
}, },
}, },
{
"This http://gitea.com:3000/user4/repo5/pulls/202 yes. http://gitea.com:3000/user4/repo5/pulls/203 no",
[]testResult{
{202, "user4", "repo5", "202", true, XRefActionNone, nil, nil, ""},
{203, "user4", "repo5", "203", true, XRefActionNone, nil, nil, ""},
},
},
{ {
"This http://GiTeA.COM:3000/user4/repo6/pulls/205 yes.", "This http://GiTeA.COM:3000/user4/repo6/pulls/205 yes.",
[]testResult{ []testResult{

View File

@@ -29,13 +29,6 @@ func CreateRepository(doer, u *models.User, opts models.CreateRepoOptions) (*mod
opts.DefaultBranch = setting.Repository.DefaultBranch opts.DefaultBranch = setting.Repository.DefaultBranch
} }
// Check if label template exist
if len(opts.IssueLabels) > 0 {
if _, err := models.GetLabelTemplateFile(opts.IssueLabels); err != nil {
return nil, err
}
}
repo := &models.Repository{ repo := &models.Repository{
OwnerID: u.ID, OwnerID: u.ID,
Owner: u, Owner: u,
@@ -54,8 +47,6 @@ func CreateRepository(doer, u *models.User, opts models.CreateRepoOptions) (*mod
TrustModel: opts.TrustModel, TrustModel: opts.TrustModel,
} }
var rollbackRepo *models.Repository
if err := models.WithTx(func(ctx models.DBContext) error { if err := models.WithTx(func(ctx models.DBContext) error {
if err := models.CreateRepository(ctx, doer, u, repo, false); err != nil { if err := models.CreateRepository(ctx, doer, u, repo, false); err != nil {
return err return err
@@ -94,8 +85,9 @@ func CreateRepository(doer, u *models.User, opts models.CreateRepoOptions) (*mod
// Initialize Issue Labels if selected // Initialize Issue Labels if selected
if len(opts.IssueLabels) > 0 { if len(opts.IssueLabels) > 0 {
if err := models.InitializeLabels(ctx, repo.ID, opts.IssueLabels, false); err != nil { if err := models.InitializeLabels(ctx, repo.ID, opts.IssueLabels, false); err != nil {
rollbackRepo = repo if errDelete := models.DeleteRepository(doer, u.ID, repo.ID); errDelete != nil {
rollbackRepo.OwnerID = u.ID log.Error("Rollback deleteRepository: %v", errDelete)
}
return fmt.Errorf("InitializeLabels: %v", err) return fmt.Errorf("InitializeLabels: %v", err)
} }
} }
@@ -104,18 +96,13 @@ func CreateRepository(doer, u *models.User, opts models.CreateRepoOptions) (*mod
SetDescription(fmt.Sprintf("CreateRepository(git update-server-info): %s", repoPath)). SetDescription(fmt.Sprintf("CreateRepository(git update-server-info): %s", repoPath)).
RunInDir(repoPath); err != nil { RunInDir(repoPath); err != nil {
log.Error("CreateRepository(git update-server-info) in %v: Stdout: %s\nError: %v", repo, stdout, err) log.Error("CreateRepository(git update-server-info) in %v: Stdout: %s\nError: %v", repo, stdout, err)
rollbackRepo = repo if errDelete := models.DeleteRepository(doer, u.ID, repo.ID); errDelete != nil {
rollbackRepo.OwnerID = u.ID log.Error("Rollback deleteRepository: %v", errDelete)
}
return fmt.Errorf("CreateRepository(git update-server-info): %v", err) return fmt.Errorf("CreateRepository(git update-server-info): %v", err)
} }
return nil return nil
}); err != nil { }); err != nil {
if rollbackRepo != nil {
if errDelete := models.DeleteRepository(doer, rollbackRepo.OwnerID, rollbackRepo.ID); errDelete != nil {
log.Error("Rollback deleteRepository: %v", errDelete)
}
}
return nil, err return nil, err
} }
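One side of this hunk defers the cleanup: a failure inside models.WithTx only records which repository to roll back, and DeleteRepository runs after the transaction closure has returned, while the other side deletes inside the closure. A minimal sketch of the deferred-rollback flow, with hypothetical stand-in names:

    package main

    import (
        "errors"
        "fmt"
    )

    // withTx is a stand-in for models.WithTx in this sketch.
    func withTx(fn func() error) error { return fn() }

    func main() {
        var rollbackRepo string
        err := withTx(func() error {
            rollbackRepo = "owner/repo"                  // created successfully
            return errors.New("InitializeLabels failed") // a later step fails
        })
        if err != nil {
            if rollbackRepo != "" {
                fmt.Println("rolling back", rollbackRepo) // delete outside the tx
            }
            fmt.Println("create failed:", err)
        }
    }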

View File

@@ -162,10 +162,10 @@ func initRepoCommit(tmpPath string, repo *models.Repository, u *models.User, def
defaultBranch = setting.Repository.DefaultBranch defaultBranch = setting.Repository.DefaultBranch
} }
if stdout, err := git.NewCommand("push", "origin", "HEAD:"+defaultBranch). if stdout, err := git.NewCommand("push", "origin", "master:"+defaultBranch).
SetDescription(fmt.Sprintf("initRepoCommit (git push): %s", tmpPath)). SetDescription(fmt.Sprintf("initRepoCommit (git push): %s", tmpPath)).
RunInDirWithEnv(tmpPath, models.InternalPushingEnvironment(u, repo)); err != nil { RunInDirWithEnv(tmpPath, models.InternalPushingEnvironment(u, repo)); err != nil {
log.Error("Failed to push back to HEAD: Stdout: %s\nError: %v", stdout, err) log.Error("Failed to push back to master: Stdout: %s\nError: %v", stdout, err)
return fmt.Errorf("git push: %v", err) return fmt.Errorf("git push: %v", err)
} }

View File

@@ -5,7 +5,6 @@
package repository package repository
import ( import (
"context"
"fmt" "fmt"
"path" "path"
"strings" "strings"
@@ -42,7 +41,7 @@ func WikiRemoteURL(remote string) string {
} }
// MigrateRepositoryGitData starts migrating git related data after created migrating repository // MigrateRepositoryGitData starts migrating git related data after created migrating repository
func MigrateRepositoryGitData(ctx context.Context, u *models.User, repo *models.Repository, opts migration.MigrateOptions) (*models.Repository, error) { func MigrateRepositoryGitData(doer, u *models.User, repo *models.Repository, opts migration.MigrateOptions) (*models.Repository, error) {
repoPath := models.RepoPath(u.Name, opts.RepoName) repoPath := models.RepoPath(u.Name, opts.RepoName)
if u.IsOrganization() { if u.IsOrganization() {
@@ -62,7 +61,7 @@ func MigrateRepositoryGitData(ctx context.Context, u *models.User, repo *models.
return repo, fmt.Errorf("Failed to remove %s: %v", repoPath, err) return repo, fmt.Errorf("Failed to remove %s: %v", repoPath, err)
} }
if err = git.CloneWithContext(ctx, opts.CloneAddr, repoPath, git.CloneRepoOptions{ if err = git.Clone(opts.CloneAddr, repoPath, git.CloneRepoOptions{
Mirror: true, Mirror: true,
Quiet: true, Quiet: true,
Timeout: migrateTimeout, Timeout: migrateTimeout,
@@ -78,7 +77,7 @@ func MigrateRepositoryGitData(ctx context.Context, u *models.User, repo *models.
return repo, fmt.Errorf("Failed to remove %s: %v", wikiPath, err) return repo, fmt.Errorf("Failed to remove %s: %v", wikiPath, err)
} }
if err = git.CloneWithContext(ctx, wikiRemotePath, wikiPath, git.CloneRepoOptions{ if err = git.Clone(wikiRemotePath, wikiPath, git.CloneRepoOptions{
Mirror: true, Mirror: true,
Quiet: true, Quiet: true,
Timeout: migrateTimeout, Timeout: migrateTimeout,
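The context-aware variant of MigrateRepositoryGitData threads a context.Context down to git.CloneWithContext, so cancelling the context stops the underlying clone rather than leaving it running. A rough standalone sketch of the same effect using os/exec directly; the URL and destination path are placeholders:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        // Cancelling ctx kills the git process started below.
        cmd := exec.CommandContext(ctx, "git", "clone", "--mirror", "--quiet",
            "https://example.com/owner/repo.git", "/tmp/repo.git")
        if err := cmd.Run(); err != nil {
            fmt.Println("clone failed or was cancelled:", err)
        }
    }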

View File

@@ -62,11 +62,6 @@ func InitDBConfig() {
sec := Cfg.Section("database") sec := Cfg.Section("database")
Database.Type = sec.Key("DB_TYPE").String() Database.Type = sec.Key("DB_TYPE").String()
defaultCharset := "utf8" defaultCharset := "utf8"
Database.UseMySQL = false
Database.UseSQLite3 = false
Database.UsePostgreSQL = false
Database.UseMSSQL = false
switch Database.Type { switch Database.Type {
case "sqlite3": case "sqlite3":
Database.UseSQLite3 = true Database.UseSQLite3 = true

View File

@@ -4,18 +4,11 @@
package setting package setting
import (
"strings"
)
var ( var (
// Migrations settings // Migrations settings
Migrations = struct { Migrations = struct {
MaxAttempts int MaxAttempts int
RetryBackoff int RetryBackoff int
AllowedDomains []string
BlockedDomains []string
AllowLocalNetworks bool
}{ }{
MaxAttempts: 3, MaxAttempts: 3,
RetryBackoff: 3, RetryBackoff: 3,
@@ -26,15 +19,4 @@ func newMigrationsService() {
sec := Cfg.Section("migrations") sec := Cfg.Section("migrations")
Migrations.MaxAttempts = sec.Key("MAX_ATTEMPTS").MustInt(Migrations.MaxAttempts) Migrations.MaxAttempts = sec.Key("MAX_ATTEMPTS").MustInt(Migrations.MaxAttempts)
Migrations.RetryBackoff = sec.Key("RETRY_BACKOFF").MustInt(Migrations.RetryBackoff) Migrations.RetryBackoff = sec.Key("RETRY_BACKOFF").MustInt(Migrations.RetryBackoff)
Migrations.AllowedDomains = sec.Key("ALLOWED_DOMAINS").Strings(",")
for i := range Migrations.AllowedDomains {
Migrations.AllowedDomains[i] = strings.ToLower(Migrations.AllowedDomains[i])
}
Migrations.BlockedDomains = sec.Key("BLOCKED_DOMAINS").Strings(",")
for i := range Migrations.BlockedDomains {
Migrations.BlockedDomains[i] = strings.ToLower(Migrations.BlockedDomains[i])
}
Migrations.AllowLocalNetworks = sec.Key("ALLOW_LOCALNETWORKS").MustBool(false)
} }
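The ALLOWED_DOMAINS / BLOCKED_DOMAINS migration settings shown above are parsed as comma-separated lists and lower-cased. A small sketch of how such a list could be consulted afterwards; the isBlocked helper is illustrative only and not the actual Gitea check:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        blocked := strings.Split("Evil.example.com, internal.test", ",")
        for i := range blocked {
            blocked[i] = strings.ToLower(strings.TrimSpace(blocked[i]))
        }
        isBlocked := func(host string) bool {
            host = strings.ToLower(host)
            for _, d := range blocked {
                if host == d {
                    return true
                }
            }
            return false
        }
        fmt.Println(isBlocked("evil.example.com")) // true
    }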

View File

@@ -143,7 +143,7 @@ var (
MaxCreationLimit: -1, MaxCreationLimit: -1,
MirrorQueueLength: 1000, MirrorQueueLength: 1000,
PullRequestQueueLength: 1000, PullRequestQueueLength: 1000,
PreferredLicenses: []string{"Apache License 2.0", "MIT License"}, PreferredLicenses: []string{"Apache License 2.0,MIT License"},
DisableHTTPGit: false, DisableHTTPGit: false,
AccessControlAllowOrigin: "", AccessControlAllowOrigin: "",
UseCompatSSHURI: false, UseCompatSSHURI: false,

View File

@@ -21,7 +21,7 @@ type Storage struct {
// MapTo implements the Mappable interface // MapTo implements the Mappable interface
func (s *Storage) MapTo(v interface{}) error { func (s *Storage) MapTo(v interface{}) error {
pathValue := reflect.ValueOf(v).Elem().FieldByName("Path") pathValue := reflect.ValueOf(v).FieldByName("Path")
if pathValue.IsValid() && pathValue.Kind() == reflect.String { if pathValue.IsValid() && pathValue.Kind() == reflect.String {
pathValue.SetString(s.Path) pathValue.SetString(s.Path)
} }
@@ -31,10 +31,24 @@ func (s *Storage) MapTo(v interface{}) error {
return nil return nil
} }
func getStorage(name, typ string, targetSec *ini.Section) Storage { func getStorage(name, typ string, overrides ...*ini.Section) Storage {
const sectionName = "storage" sectionName := "storage"
if len(name) > 0 {
sectionName = sectionName + "." + typ
}
sec := Cfg.Section(sectionName) sec := Cfg.Section(sectionName)
if len(overrides) == 0 {
overrides = []*ini.Section{
Cfg.Section(sectionName + "." + name),
}
}
var storage Storage
storage.Type = sec.Key("STORAGE_TYPE").MustString("")
storage.ServeDirect = sec.Key("SERVE_DIRECT").MustBool(false)
// Global Defaults // Global Defaults
sec.Key("MINIO_ENDPOINT").MustString("localhost:9000") sec.Key("MINIO_ENDPOINT").MustString("localhost:9000")
sec.Key("MINIO_ACCESS_KEY_ID").MustString("") sec.Key("MINIO_ACCESS_KEY_ID").MustString("")
@@ -43,37 +57,17 @@ func getStorage(name, typ string, targetSec *ini.Section) Storage {
sec.Key("MINIO_LOCATION").MustString("us-east-1") sec.Key("MINIO_LOCATION").MustString("us-east-1")
sec.Key("MINIO_USE_SSL").MustBool(false) sec.Key("MINIO_USE_SSL").MustBool(false)
var storage Storage storage.Section = sec
storage.Section = targetSec
storage.Type = typ
overrides := make([]*ini.Section, 0, 3)
nameSec, err := Cfg.GetSection(sectionName + "." + name)
if err == nil {
overrides = append(overrides, nameSec)
}
typeSec, err := Cfg.GetSection(sectionName + "." + typ)
if err == nil {
overrides = append(overrides, typeSec)
nextType := typeSec.Key("STORAGE_TYPE").String()
if len(nextType) > 0 {
storage.Type = nextType // Support custom STORAGE_TYPE
}
}
overrides = append(overrides, sec)
for _, override := range overrides { for _, override := range overrides {
for _, key := range override.Keys() { for _, key := range storage.Section.Keys() {
if !targetSec.HasKey(key.Name()) { if !override.HasKey(key.Name()) {
_, _ = targetSec.NewKey(key.Name(), key.Value()) _, _ = override.NewKey(key.Name(), key.Value())
} }
} }
if len(storage.Type) == 0 { storage.ServeDirect = override.Key("SERVE_DIRECT").MustBool(false)
storage.Type = override.Key("STORAGE_TYPE").String() storage.Section = override
}
} }
storage.ServeDirect = storage.Section.Key("SERVE_DIRECT").MustBool(false)
// Specific defaults // Specific defaults
storage.Path = storage.Section.Key("PATH").MustString(filepath.Join(AppDataPath, name)) storage.Path = storage.Section.Key("PATH").MustString(filepath.Join(AppDataPath, name))
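The getStorage variant that receives a target section resolves keys through a chain of ini sections: the target section wins, then [storage.&lt;name&gt;], then [storage.&lt;type&gt;], and finally the global [storage] defaults. A minimal sketch of that layering with gopkg.in/ini.v1, using the same section names as the tests in the following file:

    package main

    import (
        "fmt"

        ini "gopkg.in/ini.v1"
    )

    func main() {
        cfg, _ := ini.Load([]byte(`
    [attachment]
    STORAGE_TYPE = minio
    [storage.minio]
    MINIO_BUCKET = gitea-minio
    [storage]
    MINIO_BUCKET = gitea
    `))
        target := cfg.Section("attachment")
        // Copy keys from lower-priority sections only when the target lacks them.
        for _, name := range []string{"storage.attachments", "storage.minio", "storage"} {
            if sec, err := cfg.GetSection(name); err == nil {
                for _, key := range sec.Keys() {
                    if !target.HasKey(key.Name()) {
                        _, _ = target.NewKey(key.Name(), key.Value())
                    }
                }
            }
        }
        fmt.Println(target.Key("MINIO_BUCKET").String()) // gitea-minio
    }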

View File

@@ -1,197 +0,0 @@
// Copyright 2020 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package setting
import (
"testing"
"github.com/stretchr/testify/assert"
ini "gopkg.in/ini.v1"
)
func Test_getStorageCustomType(t *testing.T) {
iniStr := `
[attachment]
STORAGE_TYPE = my_minio
MINIO_BUCKET = gitea-attachment
[storage.my_minio]
STORAGE_TYPE = minio
MINIO_ENDPOINT = my_minio:9000
`
Cfg, _ = ini.Load([]byte(iniStr))
sec := Cfg.Section("attachment")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("attachments", storageType, sec)
assert.EqualValues(t, "minio", storage.Type)
assert.EqualValues(t, "my_minio:9000", storage.Section.Key("MINIO_ENDPOINT").String())
assert.EqualValues(t, "gitea-attachment", storage.Section.Key("MINIO_BUCKET").String())
}
func Test_getStorageNameSectionOverridesTypeSection(t *testing.T) {
iniStr := `
[attachment]
STORAGE_TYPE = minio
[storage.attachments]
MINIO_BUCKET = gitea-attachment
[storage.minio]
MINIO_BUCKET = gitea
`
Cfg, _ = ini.Load([]byte(iniStr))
sec := Cfg.Section("attachment")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("attachments", storageType, sec)
assert.EqualValues(t, "minio", storage.Type)
assert.EqualValues(t, "gitea-attachment", storage.Section.Key("MINIO_BUCKET").String())
}
func Test_getStorageTypeSectionOverridesStorageSection(t *testing.T) {
iniStr := `
[attachment]
STORAGE_TYPE = minio
[storage.minio]
MINIO_BUCKET = gitea-minio
[storage]
MINIO_BUCKET = gitea
`
Cfg, _ = ini.Load([]byte(iniStr))
sec := Cfg.Section("attachment")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("attachments", storageType, sec)
assert.EqualValues(t, "minio", storage.Type)
assert.EqualValues(t, "gitea-minio", storage.Section.Key("MINIO_BUCKET").String())
}
func Test_getStorageSpecificOverridesStorage(t *testing.T) {
iniStr := `
[attachment]
STORAGE_TYPE = minio
MINIO_BUCKET = gitea-attachment
[storage.attachments]
MINIO_BUCKET = gitea
[storage]
STORAGE_TYPE = local
`
Cfg, _ = ini.Load([]byte(iniStr))
sec := Cfg.Section("attachment")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("attachments", storageType, sec)
assert.EqualValues(t, "minio", storage.Type)
assert.EqualValues(t, "gitea-attachment", storage.Section.Key("MINIO_BUCKET").String())
}
func Test_getStorageGetDefaults(t *testing.T) {
Cfg, _ = ini.Load([]byte(""))
sec := Cfg.Section("attachment")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("attachments", storageType, sec)
assert.EqualValues(t, "gitea", storage.Section.Key("MINIO_BUCKET").String())
}
func Test_getStorageMultipleName(t *testing.T) {
iniStr := `
[lfs]
MINIO_BUCKET = gitea-lfs
[attachment]
MINIO_BUCKET = gitea-attachment
[storage]
MINIO_BUCKET = gitea-storage
`
Cfg, _ = ini.Load([]byte(iniStr))
{
sec := Cfg.Section("attachment")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("attachments", storageType, sec)
assert.EqualValues(t, "gitea-attachment", storage.Section.Key("MINIO_BUCKET").String())
}
{
sec := Cfg.Section("lfs")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("lfs", storageType, sec)
assert.EqualValues(t, "gitea-lfs", storage.Section.Key("MINIO_BUCKET").String())
}
{
sec := Cfg.Section("avatar")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("avatars", storageType, sec)
assert.EqualValues(t, "gitea-storage", storage.Section.Key("MINIO_BUCKET").String())
}
}
func Test_getStorageUseOtherNameAsType(t *testing.T) {
iniStr := `
[attachment]
STORAGE_TYPE = lfs
[storage.lfs]
MINIO_BUCKET = gitea-storage
`
Cfg, _ = ini.Load([]byte(iniStr))
{
sec := Cfg.Section("attachment")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("attachments", storageType, sec)
assert.EqualValues(t, "gitea-storage", storage.Section.Key("MINIO_BUCKET").String())
}
{
sec := Cfg.Section("lfs")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("lfs", storageType, sec)
assert.EqualValues(t, "gitea-storage", storage.Section.Key("MINIO_BUCKET").String())
}
}
func Test_getStorageInheritStorageType(t *testing.T) {
iniStr := `
[storage]
STORAGE_TYPE = minio
`
Cfg, _ = ini.Load([]byte(iniStr))
sec := Cfg.Section("attachment")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("attachments", storageType, sec)
assert.EqualValues(t, "minio", storage.Type)
}
func Test_getStorageInheritNameSectionType(t *testing.T) {
iniStr := `
[storage.attachments]
STORAGE_TYPE = minio
`
Cfg, _ = ini.Load([]byte(iniStr))
sec := Cfg.Section("attachment")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("attachments", storageType, sec)
assert.EqualValues(t, "minio", storage.Type)
}

View File

@@ -196,17 +196,13 @@ func publicKeyHandler(ctx ssh.Context, key ssh.PublicKey) bool {
// Listen starts a SSH server listens on given port. // Listen starts a SSH server listens on given port.
func Listen(host string, port int, ciphers []string, keyExchanges []string, macs []string) { func Listen(host string, port int, ciphers []string, keyExchanges []string, macs []string) {
// TODO: Handle ciphers, keyExchanges, and macs
srv := ssh.Server{ srv := ssh.Server{
Addr: fmt.Sprintf("%s:%d", host, port), Addr: fmt.Sprintf("%s:%d", host, port),
PublicKeyHandler: publicKeyHandler, PublicKeyHandler: publicKeyHandler,
Handler: sessionHandler, Handler: sessionHandler,
ServerConfigCallback: func(ctx ssh.Context) *gossh.ServerConfig {
config := &gossh.ServerConfig{}
config.KeyExchanges = keyExchanges
config.MACs = macs
config.Ciphers = ciphers
return config
},
// We need to explicitly disable the PtyCallback so text displays // We need to explicitly disable the PtyCallback so text displays
// properly. // properly.
PtyCallback: func(ctx ssh.Context, pty ssh.Pty) bool { PtyCallback: func(ctx ssh.Context, pty ssh.Pty) bool {
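Where the CIPHERS / KEY_EXCHANGES / MACS settings are honoured, the gliderlabs/ssh server exposes the underlying x/crypto ServerConfig through ServerConfigCallback. A short sketch of wiring those options; the server is only constructed here, not started:

    package main

    import (
        "fmt"

        "github.com/gliderlabs/ssh"
        gossh "golang.org/x/crypto/ssh"
    )

    func main() {
        ciphers := []string{"aes128-ctr", "aes256-ctr"}
        keyExchanges := []string{"curve25519-sha256"}
        macs := []string{"hmac-sha2-256"}

        srv := ssh.Server{
            Addr: "127.0.0.1:2222",
            ServerConfigCallback: func(ctx ssh.Context) *gossh.ServerConfig {
                // The x/crypto ServerConfig is where the algorithm lists live.
                config := &gossh.ServerConfig{}
                config.Ciphers = ciphers
                config.KeyExchanges = keyExchanges
                config.MACs = macs
                return config
            },
        }
        fmt.Println("configured internal SSH server on", srv.Addr)
        // srv.ListenAndServe() would start serving; omitted in this sketch.
    }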

View File

@@ -11,7 +11,6 @@ import (
"os" "os"
"path/filepath" "path/filepath"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/util" "code.gitea.io/gitea/modules/util"
) )
@@ -40,7 +39,7 @@ func NewLocalStorage(ctx context.Context, cfg interface{}) (ObjectStorage, error
return nil, err return nil, err
} }
config := configInterface.(LocalStorageConfig) config := configInterface.(LocalStorageConfig)
log.Info("Creating new Local Storage at %s", config.Path)
if err := os.MkdirAll(config.Path, os.ModePerm); err != nil { if err := os.MkdirAll(config.Path, os.ModePerm); err != nil {
return nil, err return nil, err
} }

View File

@@ -13,7 +13,6 @@ import (
"strings" "strings"
"time" "time"
"code.gitea.io/gitea/modules/log"
"github.com/minio/minio-go/v7" "github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials" "github.com/minio/minio-go/v7/pkg/credentials"
) )
@@ -31,7 +30,7 @@ type minioObject struct {
func (m *minioObject) Stat() (os.FileInfo, error) { func (m *minioObject) Stat() (os.FileInfo, error) {
oi, err := m.Object.Stat() oi, err := m.Object.Stat()
if err != nil { if err != nil {
return nil, convertMinioErr(err) return nil, err
} }
return &minioFileInfo{oi}, nil return &minioFileInfo{oi}, nil
@@ -59,41 +58,20 @@ type MinioStorage struct {
basePath string basePath string
} }
func convertMinioErr(err error) error {
if err == nil {
return nil
}
errResp, ok := err.(minio.ErrorResponse)
if !ok {
return err
}
// Convert two responses to standard analogues
switch errResp.Code {
case "NoSuchKey":
return os.ErrNotExist
case "AccessDenied":
return os.ErrPermission
}
return err
}
// NewMinioStorage returns a minio storage // NewMinioStorage returns a minio storage
func NewMinioStorage(ctx context.Context, cfg interface{}) (ObjectStorage, error) { func NewMinioStorage(ctx context.Context, cfg interface{}) (ObjectStorage, error) {
configInterface, err := toConfig(MinioStorageConfig{}, cfg) configInterface, err := toConfig(MinioStorageConfig{}, cfg)
if err != nil { if err != nil {
return nil, convertMinioErr(err) return nil, err
} }
config := configInterface.(MinioStorageConfig) config := configInterface.(MinioStorageConfig)
log.Info("Creating Minio storage at %s:%s with base path %s", config.Endpoint, config.Bucket, config.BasePath)
minioClient, err := minio.New(config.Endpoint, &minio.Options{ minioClient, err := minio.New(config.Endpoint, &minio.Options{
Creds: credentials.NewStaticV4(config.AccessKeyID, config.SecretAccessKey, ""), Creds: credentials.NewStaticV4(config.AccessKeyID, config.SecretAccessKey, ""),
Secure: config.UseSSL, Secure: config.UseSSL,
}) })
if err != nil { if err != nil {
return nil, convertMinioErr(err) return nil, err
} }
if err := minioClient.MakeBucket(ctx, config.Bucket, minio.MakeBucketOptions{ if err := minioClient.MakeBucket(ctx, config.Bucket, minio.MakeBucketOptions{
@@ -102,7 +80,7 @@ func NewMinioStorage(ctx context.Context, cfg interface{}) (ObjectStorage, error
// Check to see if we already own this bucket (which happens if you run this twice) // Check to see if we already own this bucket (which happens if you run this twice)
exists, errBucketExists := minioClient.BucketExists(ctx, config.Bucket) exists, errBucketExists := minioClient.BucketExists(ctx, config.Bucket)
if !exists || errBucketExists != nil { if !exists || errBucketExists != nil {
return nil, convertMinioErr(err) return nil, err
} }
} }
@@ -123,7 +101,7 @@ func (m *MinioStorage) Open(path string) (Object, error) {
var opts = minio.GetObjectOptions{} var opts = minio.GetObjectOptions{}
object, err := m.client.GetObject(m.ctx, m.bucket, m.buildMinioPath(path), opts) object, err := m.client.GetObject(m.ctx, m.bucket, m.buildMinioPath(path), opts)
if err != nil { if err != nil {
return nil, convertMinioErr(err) return nil, err
} }
return &minioObject{object}, nil return &minioObject{object}, nil
} }
@@ -139,7 +117,7 @@ func (m *MinioStorage) Save(path string, r io.Reader) (int64, error) {
minio.PutObjectOptions{ContentType: "application/octet-stream"}, minio.PutObjectOptions{ContentType: "application/octet-stream"},
) )
if err != nil { if err != nil {
return 0, convertMinioErr(err) return 0, err
} }
return uploadInfo.Size, nil return uploadInfo.Size, nil
} }
@@ -186,17 +164,14 @@ func (m *MinioStorage) Stat(path string) (os.FileInfo, error) {
return nil, os.ErrNotExist return nil, os.ErrNotExist
} }
} }
return nil, convertMinioErr(err) return nil, err
} }
return &minioFileInfo{info}, nil return &minioFileInfo{info}, nil
} }
// Delete delete a file // Delete delete a file
func (m *MinioStorage) Delete(path string) error { func (m *MinioStorage) Delete(path string) error {
if err := m.client.RemoveObject(m.ctx, m.bucket, m.buildMinioPath(path), minio.RemoveObjectOptions{}); err != nil { return m.client.RemoveObject(m.ctx, m.bucket, m.buildMinioPath(path), minio.RemoveObjectOptions{})
return convertMinioErr(err)
}
return nil
} }
// URL gets the redirect URL to a file. The presigned link is valid for 5 minutes. // URL gets the redirect URL to a file. The presigned link is valid for 5 minutes.
@@ -204,8 +179,7 @@ func (m *MinioStorage) URL(path, name string) (*url.URL, error) {
reqParams := make(url.Values) reqParams := make(url.Values)
// TODO it may be good to embed images with 'inline' like ServeData does, but we don't want to have to read the file, do we? // TODO it may be good to embed images with 'inline' like ServeData does, but we don't want to have to read the file, do we?
reqParams.Set("response-content-disposition", "attachment; filename=\""+quoteEscaper.Replace(name)+"\"") reqParams.Set("response-content-disposition", "attachment; filename=\""+quoteEscaper.Replace(name)+"\"")
u, err := m.client.PresignedGetObject(m.ctx, m.bucket, m.buildMinioPath(path), 5*time.Minute, reqParams) return m.client.PresignedGetObject(m.ctx, m.bucket, m.buildMinioPath(path), 5*time.Minute, reqParams)
return u, convertMinioErr(err)
} }
// IterateObjects iterates across the objects in the miniostorage // IterateObjects iterates across the objects in the miniostorage
@@ -219,13 +193,13 @@ func (m *MinioStorage) IterateObjects(fn func(path string, obj Object) error) er
}) { }) {
object, err := m.client.GetObject(lobjectCtx, m.bucket, mObjInfo.Key, opts) object, err := m.client.GetObject(lobjectCtx, m.bucket, mObjInfo.Key, opts)
if err != nil { if err != nil {
return convertMinioErr(err) return err
} }
if err := func(object *minio.Object, fn func(path string, obj Object) error) error { if err := func(object *minio.Object, fn func(path string, obj Object) error) error {
defer object.Close() defer object.Close()
return fn(strings.TrimPrefix(m.basePath, mObjInfo.Key), &minioObject{object}) return fn(strings.TrimPrefix(m.basePath, mObjInfo.Key), &minioObject{object})
}(object, fn); err != nil { }(object, fn); err != nil {
return convertMinioErr(err) return err
} }
} }
return nil return nil
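convertMinioErr maps MinIO API error codes onto the standard os sentinel errors so callers can keep using os.IsNotExist / os.IsPermission checks regardless of the storage backend. A small sketch of the same mapping, using errors.As instead of a direct type assertion:

    package main

    import (
        "errors"
        "fmt"
        "os"

        "github.com/minio/minio-go/v7"
    )

    // convertMinioErr translates well-known MinIO error codes to os sentinels.
    func convertMinioErr(err error) error {
        if err == nil {
            return nil
        }
        var errResp minio.ErrorResponse
        if !errors.As(err, &errResp) {
            return err
        }
        switch errResp.Code {
        case "NoSuchKey":
            return os.ErrNotExist
        case "AccessDenied":
            return os.ErrPermission
        }
        return err
    }

    func main() {
        err := minio.ErrorResponse{Code: "NoSuchKey", Message: "key missing"}
        fmt.Println(os.IsNotExist(convertMinioErr(err))) // true
    }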

View File

@@ -12,7 +12,6 @@ import (
"net/url" "net/url"
"os" "os"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/setting" "code.gitea.io/gitea/modules/setting"
) )
@@ -142,25 +141,21 @@ func NewStorage(typStr string, cfg interface{}) (ObjectStorage, error) {
} }
func initAvatars() (err error) { func initAvatars() (err error) {
log.Info("Initialising Avatar storage with type: %s", setting.Avatar.Storage.Type) Avatars, err = NewStorage(setting.Avatar.Storage.Type, setting.Avatar.Storage)
Avatars, err = NewStorage(setting.Avatar.Storage.Type, &setting.Avatar.Storage)
return return
} }
func initAttachments() (err error) { func initAttachments() (err error) {
log.Info("Initialising Attachment storage with type: %s", setting.Attachment.Storage.Type) Attachments, err = NewStorage(setting.Attachment.Storage.Type, setting.Attachment.Storage)
Attachments, err = NewStorage(setting.Attachment.Storage.Type, &setting.Attachment.Storage)
return return
} }
func initLFS() (err error) { func initLFS() (err error) {
log.Info("Initialising LFS storage with type: %s", setting.LFS.Storage.Type) LFS, err = NewStorage(setting.LFS.Storage.Type, setting.LFS.Storage)
LFS, err = NewStorage(setting.LFS.Storage.Type, &setting.LFS.Storage)
return return
} }
func initRepoAvatars() (err error) { func initRepoAvatars() (err error) {
log.Info("Initialising Repository Avatar storage with type: %s", setting.RepoAvatar.Storage.Type) RepoAvatars, err = NewStorage(setting.RepoAvatar.Storage.Type, setting.RepoAvatar.Storage)
RepoAvatars, err = NewStorage(setting.RepoAvatar.Storage.Type, &setting.RepoAvatar.Storage)
return return
} }
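Passing &setting.X.Storage rather than the plain value matters because Storage.MapTo sets the Path field through reflection: only a pointer unwrapped with Elem() yields an addressable field that SetString can modify. A minimal sketch of that detail, with hypothetical names:

    package main

    import (
        "fmt"
        "reflect"
    )

    type localConfig struct {
        Path string
    }

    // setPath writes into v's Path field; v must be a pointer to a struct,
    // otherwise the reflected field is not addressable.
    func setPath(v interface{}, path string) {
        pathValue := reflect.ValueOf(v).Elem().FieldByName("Path")
        if pathValue.IsValid() && pathValue.Kind() == reflect.String {
            pathValue.SetString(path)
        }
    }

    func main() {
        cfg := localConfig{}
        setPath(&cfg, "/data/attachments") // note the pointer
        fmt.Println(cfg.Path)
    }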

View File

@@ -105,7 +105,7 @@ type CreateRepoOption struct {
Description string `json:"description" binding:"MaxSize(255)"` Description string `json:"description" binding:"MaxSize(255)"`
// Whether the repository is private // Whether the repository is private
Private bool `json:"private"` Private bool `json:"private"`
// Label-Set to use // Issue Label set to use
IssueLabels string `json:"issue_labels"` IssueLabels string `json:"issue_labels"`
// Whether the repository should be auto-initialized? // Whether the repository should be auto-initialized?
AutoInit bool `json:"auto_init"` AutoInit bool `json:"auto_init"`

View File

@@ -5,7 +5,6 @@
package task package task
import ( import (
"context"
"errors" "errors"
"fmt" "fmt"
"strings" "strings"
@@ -16,13 +15,12 @@ import (
"code.gitea.io/gitea/modules/migrations" "code.gitea.io/gitea/modules/migrations"
migration "code.gitea.io/gitea/modules/migrations/base" migration "code.gitea.io/gitea/modules/migrations/base"
"code.gitea.io/gitea/modules/notification" "code.gitea.io/gitea/modules/notification"
"code.gitea.io/gitea/modules/process"
"code.gitea.io/gitea/modules/structs" "code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/timeutil" "code.gitea.io/gitea/modules/timeutil"
"code.gitea.io/gitea/modules/util" "code.gitea.io/gitea/modules/util"
) )
func handleCreateError(owner *models.User, err error) error { func handleCreateError(owner *models.User, err error, name string) error {
switch { switch {
case models.IsErrReachLimitOfRepo(err): case models.IsErrReachLimitOfRepo(err):
return fmt.Errorf("You have already reached your limit of %d repositories", owner.MaxCreationLimit()) return fmt.Errorf("You have already reached your limit of %d repositories", owner.MaxCreationLimit())
@@ -40,8 +38,8 @@ func handleCreateError(owner *models.User, err error) error {
func runMigrateTask(t *models.Task) (err error) { func runMigrateTask(t *models.Task) (err error) {
defer func() { defer func() {
if e := recover(); e != nil { if e := recover(); e != nil {
err = fmt.Errorf("PANIC whilst trying to do migrate task: %v", e) err = fmt.Errorf("PANIC whilst trying to do migrate task: %v\nStacktrace: %v", err, log.Stack(2))
log.Critical("PANIC during runMigrateTask[%d] by DoerID[%d] to RepoID[%d] for OwnerID[%d]: %v\nStacktrace: %v", t.ID, t.DoerID, t.RepoID, t.OwnerID, e, log.Stack(2)) log.Critical("PANIC during runMigrateTask[%d] by DoerID[%d] to RepoID[%d] for OwnerID[%d]: %v", t.ID, t.DoerID, t.RepoID, t.OwnerID, err)
} }
if err == nil { if err == nil {
@@ -57,8 +55,7 @@ func runMigrateTask(t *models.Task) (err error) {
t.EndTime = timeutil.TimeStampNow() t.EndTime = timeutil.TimeStampNow()
t.Status = structs.TaskStatusFailed t.Status = structs.TaskStatusFailed
t.Errors = err.Error() t.Errors = err.Error()
t.RepoID = 0 if err := t.UpdateCols("status", "errors", "end_time"); err != nil {
if err := t.UpdateCols("status", "errors", "repo_id", "end_time"); err != nil {
log.Error("Task UpdateCols failed: %v", err) log.Error("Task UpdateCols failed: %v", err)
} }
@@ -69,8 +66,8 @@ func runMigrateTask(t *models.Task) (err error) {
} }
}() }()
if err = t.LoadRepo(); err != nil { if err := t.LoadRepo(); err != nil {
return return err
} }
// if repository is ready, then just finish the task // if repository is ready, then just finish the task
@@ -78,43 +75,33 @@ func runMigrateTask(t *models.Task) (err error) {
return nil return nil
} }
if err = t.LoadDoer(); err != nil { if err := t.LoadDoer(); err != nil {
return return err
} }
if err = t.LoadOwner(); err != nil { if err := t.LoadOwner(); err != nil {
return return err
}
t.StartTime = timeutil.TimeStampNow()
t.Status = structs.TaskStatusRunning
if err := t.UpdateCols("start_time", "status"); err != nil {
return err
} }
var opts *migration.MigrateOptions var opts *migration.MigrateOptions
opts, err = t.MigrateConfig() opts, err = t.MigrateConfig()
if err != nil { if err != nil {
return return err
} }
opts.MigrateToRepoID = t.RepoID opts.MigrateToRepoID = t.RepoID
var repo *models.Repository repo, err := migrations.MigrateRepository(graceful.GetManager().HammerContext(), t.Doer, t.Owner.Name, *opts)
ctx, cancel := context.WithCancel(graceful.GetManager().ShutdownContext())
defer cancel()
pm := process.GetManager()
pid := pm.Add(fmt.Sprintf("MigrateTask: %s/%s", t.Owner.Name, opts.RepoName), cancel)
defer pm.Remove(pid)
t.StartTime = timeutil.TimeStampNow()
t.Status = structs.TaskStatusRunning
if err = t.UpdateCols("start_time", "status"); err != nil {
return
}
repo, err = migrations.MigrateRepository(ctx, t.Doer, t.Owner.Name, *opts)
if err == nil { if err == nil {
log.Trace("Repository migrated [%d]: %s/%s", repo.ID, t.Owner.Name, repo.Name) log.Trace("Repository migrated [%d]: %s/%s", repo.ID, t.Owner.Name, repo.Name)
return return nil
} }
if models.IsErrRepoAlreadyExist(err) { if models.IsErrRepoAlreadyExist(err) {
err = errors.New("The repository name is already used") return errors.New("The repository name is already used")
return
} }
// remoteAddr may contain credentials, so we sanitize it // remoteAddr may contain credentials, so we sanitize it
@@ -126,7 +113,5 @@ func runMigrateTask(t *models.Task) (err error) {
return fmt.Errorf("Migration failed: %v", err.Error()) return fmt.Errorf("Migration failed: %v", err.Error())
} }
// do not be tempted to coalesce this line with the return return handleCreateError(t.Owner, err, "MigratePost")
err = handleCreateError(t.Owner, err)
return
} }
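In the recover block above, one side wraps the recovered value e into the named return, while the other formats the stale err variable and so loses the panic message. A minimal sketch of folding a panic into a named error return:

    package main

    import "fmt"

    // runTask converts a panic into the named return err, so callers see the
    // panic message rather than a stale (usually nil) error.
    func runTask() (err error) {
        defer func() {
            if e := recover(); e != nil {
                err = fmt.Errorf("PANIC whilst trying to do migrate task: %v", e)
            }
        }()
        panic("migration exploded")
    }

    func main() {
        fmt.Println(runTask())
    }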

View File

@@ -17,24 +17,11 @@ import (
type ( type (
// FeishuPayload represents // FeishuPayload represents
FeishuPayload struct { FeishuPayload struct {
MsgType string `json:"msg_type"` // text / post / image / share_chat / interactive Title string `json:"title"`
Content struct { Text string `json:"text"`
Text string `json:"text"`
} `json:"content"`
} }
) )
func newFeishuTextPayload(text string) *FeishuPayload {
return &FeishuPayload{
MsgType: "text",
Content: struct {
Text string `json:"text"`
}{
Text: text,
},
}
}
// SetSecret sets the Feishu secret // SetSecret sets the Feishu secret
func (f *FeishuPayload) SetSecret(_ string) {} func (f *FeishuPayload) SetSecret(_ string) {}
@@ -55,25 +42,34 @@ var (
func (f *FeishuPayload) Create(p *api.CreatePayload) (api.Payloader, error) { func (f *FeishuPayload) Create(p *api.CreatePayload) (api.Payloader, error) {
// created tag/branch // created tag/branch
refName := git.RefEndName(p.Ref) refName := git.RefEndName(p.Ref)
text := fmt.Sprintf("[%s] %s %s created", p.Repo.FullName, p.RefType, refName) title := fmt.Sprintf("[%s] %s %s created", p.Repo.FullName, p.RefType, refName)
return newFeishuTextPayload(text), nil return &FeishuPayload{
Text: title,
Title: title,
}, nil
} }
// Delete implements PayloadConvertor Delete method // Delete implements PayloadConvertor Delete method
func (f *FeishuPayload) Delete(p *api.DeletePayload) (api.Payloader, error) { func (f *FeishuPayload) Delete(p *api.DeletePayload) (api.Payloader, error) {
// created tag/branch // created tag/branch
refName := git.RefEndName(p.Ref) refName := git.RefEndName(p.Ref)
text := fmt.Sprintf("[%s] %s %s deleted", p.Repo.FullName, p.RefType, refName) title := fmt.Sprintf("[%s] %s %s deleted", p.Repo.FullName, p.RefType, refName)
return newFeishuTextPayload(text), nil return &FeishuPayload{
Text: title,
Title: title,
}, nil
} }
// Fork implements PayloadConvertor Fork method // Fork implements PayloadConvertor Fork method
func (f *FeishuPayload) Fork(p *api.ForkPayload) (api.Payloader, error) { func (f *FeishuPayload) Fork(p *api.ForkPayload) (api.Payloader, error) {
text := fmt.Sprintf("%s is forked to %s", p.Forkee.FullName, p.Repo.FullName) title := fmt.Sprintf("%s is forked to %s", p.Forkee.FullName, p.Repo.FullName)
return newFeishuTextPayload(text), nil return &FeishuPayload{
Text: title,
Title: title,
}, nil
} }
// Push implements PayloadConvertor Push method // Push implements PayloadConvertor Push method
@@ -83,7 +79,9 @@ func (f *FeishuPayload) Push(p *api.PushPayload) (api.Payloader, error) {
commitDesc string commitDesc string
) )
var text = fmt.Sprintf("[%s:%s] %s\n", p.Repo.FullName, branchName, commitDesc) title := fmt.Sprintf("[%s:%s] %s", p.Repo.FullName, branchName, commitDesc)
var text string
// for each commit, generate attachment text // for each commit, generate attachment text
for i, commit := range p.Commits { for i, commit := range p.Commits {
var authorName string var authorName string
@@ -98,28 +96,40 @@ func (f *FeishuPayload) Push(p *api.PushPayload) (api.Payloader, error) {
} }
} }
return newFeishuTextPayload(text), nil return &FeishuPayload{
Text: text,
Title: title,
}, nil
} }
// Issue implements PayloadConvertor Issue method // Issue implements PayloadConvertor Issue method
func (f *FeishuPayload) Issue(p *api.IssuePayload) (api.Payloader, error) { func (f *FeishuPayload) Issue(p *api.IssuePayload) (api.Payloader, error) {
text, issueTitle, attachmentText, _ := getIssuesPayloadInfo(p, noneLinkFormatter, true) text, issueTitle, attachmentText, _ := getIssuesPayloadInfo(p, noneLinkFormatter, true)
return newFeishuTextPayload(issueTitle + "\r\n" + text + "\r\n\r\n" + attachmentText), nil return &FeishuPayload{
Text: text + "\r\n\r\n" + attachmentText,
Title: issueTitle,
}, nil
} }
// IssueComment implements PayloadConvertor IssueComment method // IssueComment implements PayloadConvertor IssueComment method
func (f *FeishuPayload) IssueComment(p *api.IssueCommentPayload) (api.Payloader, error) { func (f *FeishuPayload) IssueComment(p *api.IssueCommentPayload) (api.Payloader, error) {
text, issueTitle, _ := getIssueCommentPayloadInfo(p, noneLinkFormatter, true) text, issueTitle, _ := getIssueCommentPayloadInfo(p, noneLinkFormatter, true)
return newFeishuTextPayload(issueTitle + "\r\n" + text + "\r\n\r\n" + p.Comment.Body), nil return &FeishuPayload{
Text: text + "\r\n\r\n" + p.Comment.Body,
Title: issueTitle,
}, nil
} }
// PullRequest implements PayloadConvertor PullRequest method // PullRequest implements PayloadConvertor PullRequest method
func (f *FeishuPayload) PullRequest(p *api.PullRequestPayload) (api.Payloader, error) { func (f *FeishuPayload) PullRequest(p *api.PullRequestPayload) (api.Payloader, error) {
text, issueTitle, attachmentText, _ := getPullRequestPayloadInfo(p, noneLinkFormatter, true) text, issueTitle, attachmentText, _ := getPullRequestPayloadInfo(p, noneLinkFormatter, true)
return newFeishuTextPayload(issueTitle + "\r\n" + text + "\r\n\r\n" + attachmentText), nil return &FeishuPayload{
Text: text + "\r\n\r\n" + attachmentText,
Title: issueTitle,
}, nil
} }
// Review implements PayloadConvertor Review method // Review implements PayloadConvertor Review method
@@ -137,19 +147,28 @@ func (f *FeishuPayload) Review(p *api.PullRequestPayload, event models.HookEvent
} }
return newFeishuTextPayload(title + "\r\n\r\n" + text), nil return &FeishuPayload{
Text: title + "\r\n\r\n" + text,
Title: title,
}, nil
} }
// Repository implements PayloadConvertor Repository method // Repository implements PayloadConvertor Repository method
func (f *FeishuPayload) Repository(p *api.RepositoryPayload) (api.Payloader, error) { func (f *FeishuPayload) Repository(p *api.RepositoryPayload) (api.Payloader, error) {
var text string var title string
switch p.Action { switch p.Action {
case api.HookRepoCreated: case api.HookRepoCreated:
text = fmt.Sprintf("[%s] Repository created", p.Repository.FullName) title = fmt.Sprintf("[%s] Repository created", p.Repository.FullName)
return newFeishuTextPayload(text), nil return &FeishuPayload{
Text: title,
Title: title,
}, nil
case api.HookRepoDeleted: case api.HookRepoDeleted:
text = fmt.Sprintf("[%s] Repository deleted", p.Repository.FullName) title = fmt.Sprintf("[%s] Repository deleted", p.Repository.FullName)
return newFeishuTextPayload(text), nil return &FeishuPayload{
Title: title,
Text: title,
}, nil
} }
return nil, nil return nil, nil
@@ -159,7 +178,10 @@ func (f *FeishuPayload) Repository(p *api.RepositoryPayload) (api.Payloader, err
func (f *FeishuPayload) Release(p *api.ReleasePayload) (api.Payloader, error) { func (f *FeishuPayload) Release(p *api.ReleasePayload) (api.Payloader, error) {
text, _ := getReleasePayloadInfo(p, noneLinkFormatter, true) text, _ := getReleasePayloadInfo(p, noneLinkFormatter, true)
return newFeishuTextPayload(text), nil return &FeishuPayload{
Text: text,
Title: text,
}, nil
} }
// GetFeishuPayload converts a ding talk webhook into a FeishuPayload // GetFeishuPayload converts a ding talk webhook into a FeishuPayload
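The newFeishuTextPayload variant builds the msg_type/content shape used by Feishu bot webhooks rather than a flat title/text pair. A short sketch of the resulting JSON; the type name here is illustrative, the field tags mirror the struct in the hunk above:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type feishuTextMessage struct {
        MsgType string `json:"msg_type"`
        Content struct {
            Text string `json:"text"`
        } `json:"content"`
    }

    func newFeishuText(text string) *feishuTextMessage {
        msg := &feishuTextMessage{MsgType: "text"}
        msg.Content.Text = text
        return msg
    }

    func main() {
        b, _ := json.Marshal(newFeishuText("[repo] branch main created"))
        fmt.Println(string(b))
        // {"msg_type":"text","content":{"text":"[repo] branch main created"}}
    }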

View File

@@ -366,7 +366,6 @@ org_name_been_taken = The organization name is already taken.
team_name_been_taken = The team name is already taken. team_name_been_taken = The team name is already taken.
team_no_units_error = Allow access to at least one repository section. team_no_units_error = Allow access to at least one repository section.
email_been_used = The email address is already used. email_been_used = The email address is already used.
email_invalid = The email address is invalid.
openid_been_used = The OpenID address '%s' is already used. openid_been_used = The OpenID address '%s' is already used.
username_password_incorrect = Username or password is incorrect. username_password_incorrect = Username or password is incorrect.
password_complexity = Password does not pass complexity requirements: password_complexity = Password does not pass complexity requirements:
@@ -871,11 +870,9 @@ editor.file_already_exists = A file named '%s' already exists in this repository
editor.commit_empty_file_header = Commit an empty file editor.commit_empty_file_header = Commit an empty file
editor.commit_empty_file_text = The file you're about to commit is empty. Proceed? editor.commit_empty_file_text = The file you're about to commit is empty. Proceed?
editor.no_changes_to_show = There are no changes to show. editor.no_changes_to_show = There are no changes to show.
editor.fail_to_update_file = Failed to update/create file '%s'. editor.fail_to_update_file = Failed to update/create file '%s' with error: %v
editor.fail_to_update_file_summary = Error Message:
editor.push_rejected_no_message = The change was rejected by the server without a message. Please check githooks. editor.push_rejected_no_message = The change was rejected by the server without a message. Please check githooks.
editor.push_rejected = The change was rejected by the server. Please check githooks. editor.push_rejected = The change was rejected by the server with the following message:<br>%s<br> Please check githooks.
editor.push_rejected_summary = Full Rejection Message:
editor.add_subdir = Add a directory… editor.add_subdir = Add a directory…
editor.unable_to_upload_files = Failed to upload files to '%s' with error: %v editor.unable_to_upload_files = Failed to upload files to '%s' with error: %v
editor.upload_file_is_locked = File '%s' is locked by %s. editor.upload_file_is_locked = File '%s' is locked by %s.
@@ -1193,7 +1190,6 @@ issues.review.remove_review_request_self = "refused to review %s"
issues.review.pending = Pending issues.review.pending = Pending
issues.review.review = Review issues.review.review = Review
issues.review.reviewers = Reviewers issues.review.reviewers = Reviewers
issues.review.outdated = Outdated
issues.review.show_outdated = Show outdated issues.review.show_outdated = Show outdated
issues.review.hide_outdated = Hide outdated issues.review.hide_outdated = Hide outdated
issues.review.show_resolved = Show resolved issues.review.show_resolved = Show resolved
@@ -1262,15 +1258,11 @@ pulls.rebase_merge_commit_pull_request = Rebase and Merge (--no-ff)
pulls.squash_merge_pull_request = Squash and Merge pulls.squash_merge_pull_request = Squash and Merge
pulls.require_signed_wont_sign = The branch requires signed commits but this merge will not be signed pulls.require_signed_wont_sign = The branch requires signed commits but this merge will not be signed
pulls.invalid_merge_option = You cannot use this merge option for this pull request. pulls.invalid_merge_option = You cannot use this merge option for this pull request.
pulls.merge_conflict = Merge Failed: There was a conflict whilst merging. Hint: Try a different strategy pulls.merge_conflict = Merge Failed: There was a conflict whilst merging: %[1]s<br>%[2]s<br>Hint: Try a different strategy
pulls.merge_conflict_summary = Error Message pulls.rebase_conflict = Merge Failed: There was a conflict whilst rebasing commit: %[1]s<br>%[2]s<br>%[3]s<br>Hint:Try a different strategy
pulls.rebase_conflict = Merge Failed: There was a conflict whilst rebasing commit: %[1]s. Hint: Try a different strategy
pulls.rebase_conflict_summary = Error Message
; </summary><code>%[2]s<br>%[3]s</code></details>
pulls.unrelated_histories = Merge Failed: The merge head and base do not share a common history. Hint: Try a different strategy pulls.unrelated_histories = Merge Failed: The merge head and base do not share a common history. Hint: Try a different strategy
pulls.merge_out_of_date = Merge Failed: Whilst generating the merge, the base was updated. Hint: Try again. pulls.merge_out_of_date = Merge Failed: Whilst generating the merge, the base was updated. Hint: Try again.
pulls.push_rejected = Merge Failed: The push was rejected. Review the githooks for this repository. pulls.push_rejected = Merge Failed: The push was rejected with the following message:<br>%s<br>Review the githooks for this repository
pulls.push_rejected_summary = Full Rejection Message
pulls.push_rejected_no_message = Merge Failed: The push was rejected but there was no remote message.<br>Review the githooks for this repository pulls.push_rejected_no_message = Merge Failed: The push was rejected but there was no remote message.<br>Review the githooks for this repository
pulls.open_unmerged_pull_exists = `You cannot perform a reopen operation because there is a pending pull request (#%d) with identical properties.` pulls.open_unmerged_pull_exists = `You cannot perform a reopen operation because there is a pending pull request (#%d) with identical properties.`
pulls.status_checking = Some checks are pending pulls.status_checking = Some checks are pending

View File

@@ -1037,7 +1037,8 @@ issues.close_comment_issue=Commenta e Chiudi
issues.reopen_issue=Riapri issues.reopen_issue=Riapri
issues.reopen_comment_issue=Commenta e Riapri issues.reopen_comment_issue=Commenta e Riapri
issues.create_comment=Commento issues.create_comment=Commento
issues.closed_at=`chiuso questo probleam <a id="%[1]s" href="#%[1]s">%[2]s</a>` issues.closed_at="`chiuso questo probleam <a id=\"%[1]s\" href=\"#%[1]s\">%[2]s</a>`
Contextrequest"
issues.reopened_at=`riaperto questo problema <a id="%[1]s" href="#%[1]s">%[2]s</a>` issues.reopened_at=`riaperto questo problema <a id="%[1]s" href="#%[1]s">%[2]s</a>`
issues.commit_ref_at=`ha fatto riferimento a questa issue dal commit <a id="%[1]s" href="#%[1]s">%[2]s</a>` issues.commit_ref_at=`ha fatto riferimento a questa issue dal commit <a id="%[1]s" href="#%[1]s">%[2]s</a>`
issues.ref_issue_from=`<a href="%[3]s">ha fatto riferimento a questo problema %[4]s</a> <a id="%[1]s" href="#%[1]s">%[2]s</a>` issues.ref_issue_from=`<a href="%[3]s">ha fatto riferimento a questo problema %[4]s</a> <a id="%[1]s" href="#%[1]s">%[2]s</a>`

Some files were not shown because too many files have changed in this diff.