Compare commits

47 Commits

Author SHA1 Message Date
6543
8cfd6695da Changelog v1.14.3 (#16131)
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: zeripath <art27@cantab.net>
2021-06-18 17:25:20 +02:00
6543
f832e8eeea Fix some API bugs (#16184) (#16190)
* Fix some API bugs (#16184)

* Repository object only count releases as releases (fix #16144)

* EditOrg respect RepoAdminChangeTeamAccess option (fix #16013)

* adjust to v1.14
2021-06-18 19:47:34 +08:00
zeripath
544ef7d394 Encrypt migration credentials at rest (#15895) (#16187)
Backport #15895

Storing these credentials is a liability.

* Encrypt credentials with SECRET_KEY before persisting them to the task queue table (they need to be persisted due to the nature of the task queue)
  - security in depth: helps when an attacker has access to the DB only, but not app.ini
* Delete all credentials (even encrypted) from the task table once the migration is done, for safety (see the sketch after this list)
  - security in depth: minimizes leaked data if an attacker gains access to a snapshot of both the DB and app.ini
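
The decrypt-on-read half of this is visible in the models/task.go hunk further down, which calls secret.DecryptSecret with setting.SecretKey. A rough sketch of the whole flow, assuming the matching secret.EncryptSecret helper and using hypothetical wrapper names:

```go
package task

import (
	"code.gitea.io/gitea/modules/secret"
	"code.gitea.io/gitea/modules/setting"
)

// encryptedCreds is a hypothetical holder mirroring the *Encrypted fields
// that the migration options persist to the task queue table.
type encryptedCreds struct {
	CloneAddrEncrypted    string
	AuthPasswordEncrypted string
	AuthTokenEncrypted    string
}

// encryptForQueue encrypts the plaintext credentials with SECRET_KEY before
// the task row is written, so a DB-only attacker cannot read them.
func encryptForQueue(cloneAddr, password, token string) (*encryptedCreds, error) {
	var c encryptedCreds
	var err error
	if c.CloneAddrEncrypted, err = secret.EncryptSecret(setting.SecretKey, cloneAddr); err != nil {
		return nil, err
	}
	if c.AuthPasswordEncrypted, err = secret.EncryptSecret(setting.SecretKey, password); err != nil {
		return nil, err
	}
	if c.AuthTokenEncrypted, err = secret.EncryptSecret(setting.SecretKey, token); err != nil {
		return nil, err
	}
	return &c, nil
}

// scrubWhenDone wipes even the encrypted values once the migration finishes,
// so a later snapshot of DB plus app.ini leaks nothing.
func scrubWhenDone(c *encryptedCreds) {
	c.CloneAddrEncrypted = ""
	c.AuthPasswordEncrypted = ""
	c.AuthTokenEncrypted = ""
}
```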
2021-06-17 22:59:28 +02:00
zeripath
5ff807acde Run processors on whole of text (#16155) (#16185)
Backport #16155

There is an inefficiency in the design of our processors which means that Emoji
and other processors run in order n^2 time.

This PR forces the processors to process the entirety of the text node before passing
back up. The fundamental inefficiency remains but it should be significantly
ameliorated.
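
The shape of the fix, visible in the markup processor hunks near the end of this diff, is that each processor keeps scanning the remainder of the current text node instead of replacing one match and handing control back up. A minimal standalone sketch of that loop, with a regexp standing in for Gitea's matchers:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// a stand-in matcher; the real processors use Gitea's emoji/reference matchers
var pattern = regexp.MustCompile(`:[a-z_]+:`)

// processAll replaces every match in one pass over the text node,
// instead of replacing the first match and returning to the caller.
func processAll(data string, render func(string) string) string {
	var out strings.Builder
	start := 0
	for start < len(data) {
		loc := pattern.FindStringIndex(data[start:])
		if loc == nil {
			break
		}
		out.WriteString(data[start : start+loc[0]])
		out.WriteString(render(data[start+loc[0] : start+loc[1]]))
		start += loc[1] // keep scanning after the match
	}
	out.WriteString(data[start:])
	return out.String()
}

func main() {
	fmt.Println(processAll("hi :smile: and :wave:", func(m string) string {
		return "<emoji>" + m + "</emoji>"
	}))
}
```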

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-06-17 20:01:33 +02:00
zeripath
849d316d8d issue-keyword class is being incorrectly stripped off spans (#16163) (#16172)
Backport #16163

Bluemonday sanitizer regexp rules are not additive, so the addition of the icons,
emojis and chroma syntax policy has led to this being stripped.
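
To illustrate the non-additive behaviour, here is a standalone sketch against the bluemonday API (not Gitea's actual sanitizer setup): registering a second class pattern for the same element replaces the first, so the fix is to fold all allowed span classes into a single pattern.

```go
package main

import (
	"fmt"
	"regexp"

	"github.com/microcosm-cc/bluemonday"
)

func main() {
	p := bluemonday.UGCPolicy()
	// Two separate class rules for <span>: because the regexp rules are not
	// additive, the later pattern governs the attribute and the
	// issue-keyword class (and with it the span) is stripped.
	p.AllowAttrs("class").Matching(regexp.MustCompile(`^issue-keyword$`)).OnElements("span")
	p.AllowAttrs("class").Matching(regexp.MustCompile(`^emoji$`)).OnElements("span")
	fmt.Println(p.Sanitize(`<span class="issue-keyword">closes</span>`))

	// The fix: express all allowed span classes in one combined pattern.
	fixed := bluemonday.UGCPolicy()
	fixed.AllowAttrs("class").Matching(regexp.MustCompile(`^(issue-keyword|emoji)$`)).OnElements("span")
	fmt.Println(fixed.Sanitize(`<span class="issue-keyword">closes</span>`))
}
```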

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-06-16 06:35:54 -04:00
zeripath
946eb1321c Only check access tokens if they are likely to be tokens (#16164) (#16171)
Backport #16164

Gitea currently checks whether every password is an access token, even though
most passwords are not and cannot be access tokens.

By construction, access tokens are 40-byte hexadecimal strings, so only strings of
that form should be checked.
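
The shape of that check (the real change is in the models token hunk further down) in a standalone form:

```go
package main

import "fmt"

// looksLikeAccessToken reports whether a password could be a Gitea access
// token: a 40-character lowercase hexadecimal SHA1 string.
func looksLikeAccessToken(token string) bool {
	if len(token) != 40 {
		return false
	}
	for _, x := range []byte(token) {
		if x < '0' || (x > '9' && x < 'a') || x > 'f' {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(looksLikeAccessToken("hunter2"))                                  // false: not token-shaped, skip the DB lookup
	fmt.Println(looksLikeAccessToken("d41d8cd98f00b204e9800998ecf8427e0b32a1b2")) // true: shaped like a token, worth a lookup
}
```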

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-06-16 05:06:27 -04:00
Andrei Yankovich
bc82bb9cda Removable media support (#16138)
Add removable media support for the snap version of Gitea.
For more information about the removable-media interface, see the snapcraft [documentation](https://snapcraft.io/docs/removable-media-interface).
2021-06-12 21:27:16 +02:00
zeripath
f034804e5d Set self-adjusting deadline for connection writing (#16068) (#16123)
In #16055 it appears that the simple 5s deadline doesn't work for large
file writes. We can't, or at least shouldn't, simply set no deadline,
as Go will happily let these connections block indefinitely. However,
what seems reasonable is to set some minimum rate we expect for writing.

This PR suggests the following algorithm (a condensed sketch of the resulting
write wrapper follows the list):

* Every write has a minimum timeout of 5s (adjustable at compile time).
* If there has been a previous write, consider its previous deadline and add
half of the minimum timeout + 2s per KB about to be written.
* If that new deadline is after the minimum deadline, use it.
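
In the graceful server (the full hunk, along with the new PER_WRITE_TIMEOUT and PER_WRITE_PER_KB_TIMEOUT settings, appears further down in this diff) this becomes a connection wrapper that pushes the write deadline forward on every write. A condensed sketch, using a pointer receiver so the remembered deadline persists across calls:

```go
package example

import (
	"net"
	"time"
)

// timedConn wraps a net.Conn so that every write extends the write deadline
// in proportion to how much data is about to be written; a stalled peer still
// times out, but a slow, progressing transfer is not killed.
type timedConn struct {
	net.Conn
	deadline             time.Time
	perWriteTimeout      time.Duration // base allowance for any single write
	perWritePerKBTimeout time.Duration // extra allowance per KB written
}

func (c *timedConn) Write(p []byte) (int, error) {
	if c.perWriteTimeout > 0 {
		extra := time.Duration(len(p)/1024) * c.perWritePerKBTimeout
		// never allow less than "now + extra + base", but otherwise keep
		// growing the previously remembered deadline
		minDeadline := time.Now().Add(extra).Add(c.perWriteTimeout)
		c.deadline = c.deadline.Add(extra)
		if minDeadline.After(c.deadline) {
			c.deadline = minDeadline
		}
		_ = c.Conn.SetWriteDeadline(c.deadline)
	}
	return c.Conn.Write(p)
}
```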

Fix #16055

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: 6543 <6543@obermui.de>
2021-06-11 00:26:32 +03:00
a1012112796
c1887bfc9b Fix language switch for install page (#16043) (#16128)
Signed-off-by: a1012112796 <1012112796@qq.com>
2021-06-10 21:19:40 +08:00
Lunny Xiao
41a4047e79 Fix bug on getIssueIDsByRepoID (#16119) (#16124)
* Fix bug on getIssueIDsByRepoID

* Add test
2021-06-10 06:12:18 +01:00
6543
ac84bb7183 Fix data URI scramble (#16098) (#16118)
* Fix data URI scramble (#16098)

* Removed unused method.

* No prefix for data uris.

* Added test to prevent regressions.

Co-authored-by: KN4CK3R <admin@oldschoolhack.me>
2021-06-09 16:31:40 +02:00
6543
3be67e9a2b Fix http path bug (#16117) (#16120)
* Fix http path bug

* Add missed request

* add tests

Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-06-09 15:58:00 +02:00
Lunny Xiao
ce2ade05e6 Merge all deleteBranch into one function and also fix a bug where deleting a branch didn't close related PRs (#16067) (#16097)
* Fix bug where deleting a branch didn't close related PRs

* Merge all deleteBranch into one method

Co-authored-by: Lauris BH <lauris@nix.lv>
2021-06-07 18:27:41 +02:00
6543
1e76f7b5b7 api: fix overly strict edit pr permissions (#15900) (#16081)
Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Norwin <noerw@users.noreply.github.com>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-06-06 12:22:05 +02:00
6543
2265058c31 git migration: don't prompt interactively for clone credentials (#15902) (#16082)
* don't prompt interactively for clone credentials

* apply GIT_TERMINAL_PROMPT=0 to all git cmds

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>

Co-authored-by: Norwin <noerw@users.noreply.github.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-06-06 14:02:34 +08:00
zeripath
ba74fdbda9 Fix case change in ownernames (#16045) (#16050)
Backport #16045

If you change the case of a username the change needs to be propagated to their
repositories.

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-06-03 13:09:43 +08:00
zeripath
0600f7972a Add missing SameSite settings for the i_like_gitea cookie (#16037) (#16039)
Backport #16037

The i_like_gitea cookie appears to be missing the SameSite settings. I think they
were present at some point but may have been removed in a merge.

This PR ensures that they are set.

Fix #15972

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-05-31 21:33:22 -04:00
Jimmy Praet
8007602b40 Don't manipulate input params in email notification (#16011) (#16033)
Backport #16011
2021-05-31 02:17:34 -04:00
techknowlogick
3a79f1190f Fix setting of SameSite on cookies (#15989) (#15991)
Fix #15972

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-05-27 10:02:39 -04:00
techknowlogick
d95489b7ed follow redirect when fetching theme archive (#15986) (#15990) 2021-05-26 21:05:24 -04:00
fnetX (aka fralix)
a9e1a37b71 Remove branch URL before IssueRefURL (#15970)
Revert the change for the account / org dashboard, where IssueRefURLs do not
contain the full repo URL (the case where RepoLink is not true)

Co-authored-by: Norwin <noerw@users.noreply.github.com>

Co-authored-by: Norwin <noerw@users.noreply.github.com>
2021-05-25 16:02:19 -04:00
Tomás Warynyca
5a589ef9ec fix layout of milestone view (#15940) 2021-05-22 10:38:51 +08:00
zeripath
159bc8842a Restore PAM user autocreation functionality (#15825) (#15867)
Backport #15825

* Restore PAM user autocreation functionality

PAM auto-registration of users currently fails due to email invalidity.
This PR adds a new PAM setting that allows an email domain to be set;
otherwise the email is set to the noreply address, and if that also fails
it falls back to uuid@localhost.
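
The resulting fallback chain appears in the models login-source hunk near the end of this diff; here it is condensed into a standalone helper, with net/mail standing in for Gitea's ValidateEmail and the noreply address passed in explicitly (both are assumptions of this sketch):

```go
package main

import (
	"fmt"
	"net/mail"
	"strings"

	"github.com/google/uuid"
)

// pamEmail picks an email for an auto-registered PAM user: the raw PAM login
// if it is a valid address, else username@EMAIL_DOMAIN (the new setting),
// else username@<noreply address>, else uuid@localhost as a last resort.
func pamEmail(pamLogin, emailDomain, noReplyAddress string) string {
	username := pamLogin
	if idx := strings.Index(pamLogin, "@"); idx > -1 {
		username = pamLogin[:idx]
	}
	valid := func(s string) bool { _, err := mail.ParseAddress(s); return err == nil }

	email := pamLogin
	if !valid(email) {
		if emailDomain != "" {
			email = fmt.Sprintf("%s@%s", username, emailDomain)
		} else {
			email = fmt.Sprintf("%s@%s", username, noReplyAddress)
		}
		if !valid(email) {
			email = uuid.New().String() + "@localhost"
		}
	}
	return email
}

func main() {
	fmt.Println(pamEmail("alice", "", "noreply.localhost"))          // alice@noreply.localhost
	fmt.Println(pamEmail("bob", "example.com", "noreply.localhost")) // bob@example.com
}
```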

Fix #15702

Signed-off-by: Andrew Thornton <art27@cantab.net>

* As per KN4CKER

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: 6543 <6543@obermui.de>
2021-05-19 10:42:36 -04:00
Norwin
4b771d393e remove unimplemented searchbar from project view (#15905) 2021-05-17 13:22:08 +02:00
zeripath
0c2cbfcb3b Move sans-serif fallback font higher than emoji fonts (#15855) (#15892)
Backport #15855

The Tor browser does not use the system-ui font and no other fonts in the stack match
its default fonts. In fact it is possible that in future it will only
match generic fonts. This means that all rendering will first try the
emoji fonts before falling back to the sans-serif font for glyphs.

In this case the emoji fallback fonts for Tor contain empty glyphs
for numbers, in order to protect privacy, and this leads to numbers being
rendered as empty glyphs. This is clearly not ideal, and whilst we could
use the Arimo font, as stated above I suspect that Tor will eventually
ban detecting this; we should instead move the sans-serif font higher
in the stack so that it matches before the emoji fonts.

Partial fix of #15844

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-05-16 16:42:12 +03:00
6543
8c4bf4c3b4 GitHub: migrate draft releases too (#15884) (#15888)
* GitHub: migrate draft releases too

* refactor
2021-05-16 09:24:28 +02:00
6543
3bcf2e5c18 Close the gitrepo when deleting the repository (#15876) (#15887)
Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: zeripath <art27@cantab.net>
2021-05-16 00:45:17 +03:00
Lunny Xiao
ad54f008ac Upgrade xorm to v1.1.0 (#15869) (#15885) 2021-05-15 20:32:17 +02:00
zeripath
c21167e3a2 Fix bound address/port for caddy's certmagic library (see #15848) (#15859) (#15878)
Co-authored-by: Blake Miner <miner.blake@gmail.com>
Co-authored-by: 6543 <6543@obermui.de>
2021-05-15 18:28:14 +01:00
Norwin
aaa539dd2d Fix blame row height alignment (#15863) (#15883)
* fix blame row alignment on firefox
* fix blame row alignment in chrome
* fix blame row alignment in safari

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2021-05-15 18:12:07 +02:00
Naohisa Murakami
e38134f707 Fix error message when saving generated LOCAL_ROOT_URL config (#15880) (#15882)
Backport of #15880.
2021-05-15 15:06:39 +01:00
zeripath
fa96ddb327 Only write config in environment-to-ini if there are changes (#15861) (#15868)
Backport #15861

* Only write config in environment-to-ini if there are changes

Only write the new config in environment-to-ini if there are changes or the
destination is not the same as the customconf.

Fix #15719
Fix #15857

Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>

Co-authored-by: 6543 <6543@obermui.de>
2021-05-15 13:07:16 +01:00
zeripath
a3e8450fd5 Return go-get info on subdirs (#15642) (#15871)
Backport #15642

This PR is an alternative to #15628 and makes the go get handler a
handler.

Fix #15625

Close #15628

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: 6543 <6543@obermui.de>
2021-05-15 12:06:02 +01:00
zeripath
41422f0df0 Add timeout to writing to responses (#15831) (#15872)
Backport #15831

In #15826 it has become apparent that there are a few occasions when a response can
hang during writing, and because there is no timeout Go will happily just block
interminably. This PR adds a fixed 5 second timeout to all writes to a connection.

Fix #15826

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-05-14 17:42:27 +01:00
KN4CK3R
f773733252 Fix LFS commit finder not working (#15856) (#15874)
* Create a copy of the sha bytes.

Co-authored-by: Andrew Thornton <art27@cantab.net>
2021-05-14 16:39:59 +01:00
zeripath
cbaf8e8785 Stop calling WriteHeader in Write (#15862) (#15873)
Backport #15862

Fixes http: superfluous response.WriteHeader call from code.gitea.io/gitea/modules/context.(*Response).WriteHeader (response.go:67)

* Looking again we don't need this writeHeader as all of our downstream
implementations will always do it for us

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-05-14 15:38:35 +01:00
zeripath
1bf46836da Only offer hostcertificates if they exist (#15849) (#15853)
Backport #15849

A common bug report is the otherwise harmless sshd logging:

```
Could not load host certificate "/data/ssh/ssh_host_ed25519_cert": No such file or directory
```

This PR simply checks whether these files exist before creating sshd_config and,
if they do not exist, doesn't add a reference to them.

Fix #14110 amongst others.

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: Lauris BH <lauris@nix.lv>

Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: 6543 <6543@obermui.de>
2021-05-13 11:12:41 -04:00
zeripath
387a1bc472 fix truncate utf8 string (#15828) (#15854)
Backport #15828

* fix truncate utf8 string.

* revoke truncated user info.

Co-authored-by: yan <sxty32@gmail.com>
2021-05-13 16:10:29 +02:00
zeripath
62daf84596 Fix bound address/port for caddy's certmagic library (#15758) (#15848)
Backport #15758

* Fix bound address/port for caddy's certmagic library

* Fix bug

Co-authored-by: zeripath <art27@cantab.net>

Co-authored-by: Blake Miner <miner.blake@gmail.com>
2021-05-12 23:36:46 +01:00
techknowlogick
39d209dccc change s3 bucket name (#15847) 2021-05-12 16:12:36 -04:00
zeripath
c88392e772 Upgrade unrolled/render to v1.1.1 (#15845) (#15846)
Backport #15845

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-05-12 21:54:50 +02:00
zeripath
a83cde2f3f Tagger can be empty, as can Commit and Author - tolerate this (#15835) (#15839)
Backport #15835

Unfortunately some old repositories can have tags with empty Tagger, Commit
or Author. Go-Git variants will always have empty values for these whereas
the native git variant leaves them at nil. The simplest solution is just to
always have these set to empty Signatures.
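
The parser-side fix, visible in the commit and tag parsing hunks further down, amounts to starting from empty signatures rather than nil. A tiny self-contained stand-in (types and data here are illustrative, not Gitea's):

```go
package main

import "fmt"

// Minimal stand-ins for the git types involved.
type Signature struct{ Name, Email string }

type Tag struct {
	Name   string
	Tagger *Signature
}

// parseTag mimics the defensive default: the Tagger starts as an empty
// Signature, so tags written without a tagger line never hand callers a nil.
func parseTag(name string, hasTaggerLine bool) *Tag {
	tag := &Tag{Name: name, Tagger: &Signature{}}
	if hasTaggerLine {
		tag.Tagger = &Signature{Name: "Example Tagger", Email: "tagger@example.com"}
	}
	return tag
}

func main() {
	t := parseTag("v0.1", false)
	// Safe even for the degenerate tag: prints empty strings, not a panic.
	fmt.Printf("tagger: %q <%s>\n", t.Tagger.Name, t.Tagger.Email)
}
```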

The v156 migration also makes the incorrect assumption that these cannot be empty.
Therefore add some handling for this, add logging, and adjust broken
logging elsewhere in this migration.

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: 6543 <6543@obermui.de>
2021-05-12 20:09:16 +01:00
zeripath
332eb2f6d2 Queue manager FlushAll can loop rapidly - add delay (#15733) (#15840)
Backport #15733

* Queue manager FlushAll can loop rapidly - add delay

Add delay within FlushAll to prevent rapid loop when workers are busy
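
A sketch of the loop-with-back-off pattern this adds (the function shape and the 100ms back-off interval are illustrative assumptions, not the queue manager's real API):

```go
package queue

import (
	"context"
	"time"
)

// flushAll repeatedly asks the queues to flush until everything is empty or
// the context is done. flushOnce reports how many items it managed to flush
// and whether every queue is now empty.
func flushAll(ctx context.Context, flushOnce func() (flushed int, empty bool)) {
	for {
		select {
		case <-ctx.Done():
			return
		default:
		}
		flushed, empty := flushOnce()
		if empty {
			return
		}
		if flushed == 0 {
			// workers are busy and nothing moved: back off instead of spinning
			select {
			case <-ctx.Done():
				return
			case <-time.After(100 * time.Millisecond):
			}
		}
	}
}
```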

Signed-off-by: Andrew Thornton <art27@cantab.net>

* as per lunny

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-05-12 18:48:11 +01:00
zeripath
3ae1d7a59f Set autocomplete off on branches selector (#15809) (#15833)
Backport #15809

Fix #15782

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-05-11 23:18:07 +01:00
John Olheiser
d054c4e7f3 Add err to log (#15813) (#15824)
Signed-off-by: jolheiser <john.olheiser@gmail.com>
2021-05-10 16:38:37 -04:00
Lunny Xiao
5e562e9b30 Move restore repo to internal router and invoke from command to avoid opening the same db file or queue files (#15790) (#15816)
* Move restore repo to internal router and invoke from command to avoid opening the same db file or queue files

* Follow @zeripath's review

* set no timeout for the restore repo private request

* make restore repo cancelable
2021-05-10 21:14:59 +08:00
6543
c57e908f36 Tests should use test files (#15801) (#15806) 2021-05-10 01:39:14 +08:00
139 changed files with 2803 additions and 1232 deletions

View File

@@ -522,7 +522,7 @@ steps:
image: plugins/s3:1
settings:
acl: public-read
bucket: releases
bucket: gitea-artifacts
endpoint: https://storage.gitea.io
path_style: true
source: "dist/release/*"
@@ -543,7 +543,7 @@ steps:
image: plugins/s3:1
settings:
acl: public-read
bucket: releases
bucket: gitea-artifacts
endpoint: https://storage.gitea.io
path_style: true
source: "dist/release/*"
@@ -618,7 +618,7 @@ steps:
image: plugins/s3:1
settings:
acl: public-read
bucket: releases
bucket: gitea-artifacts
endpoint: https://storage.gitea.io
path_style: true
source: "dist/release/*"

View File

@@ -4,6 +4,56 @@ This changelog goes through all the changes that have been made in each release
without substantial changes to our git log; to see the highlights of what has
been added to each release, please refer to the [blog](https://blog.gitea.io).
## [1.14.3](https://github.com/go-gitea/gitea/releases/tag/v1.14.3) - 2021-06-10
* SECURITY
* Encrypt migration credentials at rest (#15895) (#16187)
* Only check access tokens if they are likely to be tokens (#16164) (#16171)
* Add missing SameSite settings for the i_like_gitea cookie (#16037) (#16039)
* Fix setting of SameSite on cookies (#15989) (#15991)
* API
* Repository object only count releases as releases (#16184) (#16190)
* EditOrg respect RepoAdminChangeTeamAccess option (#16184) (#16190)
* Fix overly strict edit pr permissions (#15900) (#16081)
* BUGFIXES
* Run processors on whole of text (#16155) (#16185)
* Class `issue-keyword` is being incorrectly stripped off spans (#16163) (#16172)
* Fix language switch for install page (#16043) (#16128)
* Fix bug on getIssueIDsByRepoID (#16119) (#16124)
* Set self-adjusting deadline for connection writing (#16068) (#16123)
* Fix http path bug (#16117) (#16120)
* Fix data URI scramble (#16098) (#16118)
* Merge all deleteBranch into one function and also fix a bug where deleting a branch didn't close related PRs (#16067) (#16097)
* git migration: don't prompt interactively for clone credentials (#15902) (#16082)
* Fix case change in ownernames (#16045) (#16050)
* Don't manipulate input params in email notification (#16011) (#16033)
* Remove branch URL before IssueRefURL (#15968) (#15970)
* Fix layout of milestone view (#15927) (#15940)
* GitHub Migration, migrate draft releases too (#15884) (#15888)
* Close the gitrepo when deleting the repository (#15876) (#15887)
* Upgrade xorm to v1.1.0 (#15869) (#15885)
* Fix blame row height alignment (#15863) (#15883)
* Fix error message when saving generated LOCAL_ROOT_URL config (#15880) (#15882)
* Backport Fix LFS commit finder not working (#15856) (#15874)
* Stop calling WriteHeader in Write (#15862) (#15873)
* Add timeout to writing to responses (#15831) (#15872)
* Return go-get info on subdirs (#15642) (#15871)
* Restore PAM user autocreation functionality (#15825) (#15867)
* Fix truncate utf8 string (#15828) (#15854)
* Fix bound address/port for caddy's certmagic library (#15758) (#15848)
* Upgrade unrolled/render to v1.1.1 (#15845) (#15846)
* Queue manager FlushAll can loop rapidly - add delay (#15733) (#15840)
* Tagger can be empty, as can Commit and Author - tolerate this (#15835) (#15839)
* Set autocomplete off on branches selector (#15809) (#15833)
* Add missing error to Doctor log (#15813) (#15824)
* Move restore repo to internal router and invoke from command to avoid opening the same db file or queue files (#15790) (#15816)
* ENHANCEMENTS
* Removable media support to snap package (#16136) (#16138)
* Move sans-serif fallback font higher than emoji fonts (#15855) (#15892)
* DOCKER
* Only write config in environment-to-ini if there are changes (#15861) (#15868)
* Only offer hostcertificates if they exist (#15849) (#15853)
## [1.14.2](https://github.com/go-gitea/gitea/releases/tag/v1.14.2) - 2021-05-08
* API

View File

@@ -5,15 +5,12 @@
package cmd
import (
"context"
"strings"
"errors"
"net/http"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/migrations"
"code.gitea.io/gitea/modules/migrations/base"
"code.gitea.io/gitea/modules/private"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/storage"
pull_service "code.gitea.io/gitea/services/pull"
"github.com/urfave/cli"
)
@@ -50,70 +47,18 @@ wiki, issues, labels, releases, release_assets, milestones, pull_requests, comme
}
func runRestoreRepository(ctx *cli.Context) error {
if err := initDB(); err != nil {
return err
}
setting.NewContext()
log.Trace("AppPath: %s", setting.AppPath)
log.Trace("AppWorkPath: %s", setting.AppWorkPath)
log.Trace("Custom path: %s", setting.CustomPath)
log.Trace("Log path: %s", setting.LogRootPath)
setting.InitDBConfig()
if err := storage.Init(); err != nil {
return err
}
if err := pull_service.Init(); err != nil {
return err
}
var opts = base.MigrateOptions{
RepoName: ctx.String("repo_name"),
}
if len(ctx.String("units")) == 0 {
opts.Wiki = true
opts.Issues = true
opts.Milestones = true
opts.Labels = true
opts.Releases = true
opts.Comments = true
opts.PullRequests = true
opts.ReleaseAssets = true
} else {
units := strings.Split(ctx.String("units"), ",")
for _, unit := range units {
switch strings.ToLower(unit) {
case "wiki":
opts.Wiki = true
case "issues":
opts.Issues = true
case "milestones":
opts.Milestones = true
case "labels":
opts.Labels = true
case "releases":
opts.Releases = true
case "release_assets":
opts.ReleaseAssets = true
case "comments":
opts.Comments = true
case "pull_requests":
opts.PullRequests = true
}
}
}
if err := migrations.RestoreRepository(
context.Background(),
statusCode, errStr := private.RestoreRepo(
ctx.String("repo_dir"),
ctx.String("owner_name"),
ctx.String("repo_name"),
); err != nil {
log.Fatal("Failed to restore repository: %v", err)
return err
ctx.StringSlice("units"),
)
if statusCode == http.StatusOK {
return nil
}
return nil
log.Fatal("Failed to restore repository: %v", errStr)
return errors.New(errStr)
}

View File

@@ -175,7 +175,7 @@ func setPort(port string) error {
cfg.Section("server").Key("LOCAL_ROOT_URL").SetValue(defaultLocalURL)
if err := cfg.SaveTo(setting.CustomConf); err != nil {
return fmt.Errorf("Error saving generated JWT Secret to custom config: %v", err)
return fmt.Errorf("Error saving generated LOCAL_ROOT_URL to custom config: %v", err)
}
}
return nil

View File

@@ -6,6 +6,7 @@ package cmd
import (
"net/http"
"strconv"
"strings"
"code.gitea.io/gitea/modules/log"
@@ -22,6 +23,15 @@ func runLetsEncrypt(listenAddr, domain, directory, email string, m http.Handler)
// TODO: these are placeholders until we add options for each in settings with appropriate warning
enableHTTPChallenge := true
enableTLSALPNChallenge := true
altHTTPPort := 0
altTLSALPNPort := 0
if p, err := strconv.Atoi(setting.PortToRedirect); err == nil {
altHTTPPort = p
}
if p, err := strconv.Atoi(setting.HTTPPort); err == nil {
altTLSALPNPort = p
}
magic := certmagic.NewDefault()
magic.Storage = &certmagic.FileStorage{Path: directory}
@@ -30,6 +40,9 @@ func runLetsEncrypt(listenAddr, domain, directory, email string, m http.Handler)
Agreed: setting.LetsEncryptTOS,
DisableHTTPChallenge: !enableHTTPChallenge,
DisableTLSALPNChallenge: !enableTLSALPNChallenge,
ListenHost: setting.HTTPAddr,
AltTLSALPNPort: altTLSALPNPort,
AltHTTPPort: altHTTPPort,
})
magic.Issuer = myACME

View File

@@ -110,6 +110,8 @@ func runEnvironmentToIni(c *cli.Context) error {
}
cfg.NameMapper = ini.SnackCase
changed := false
prefix := c.String("prefix") + "__"
for _, kv := range os.Environ() {
@@ -143,16 +145,22 @@ func runEnvironmentToIni(c *cli.Context) error {
continue
}
}
oldValue := key.Value()
if !changed && oldValue != value {
changed = true
}
key.SetValue(value)
}
destination := c.String("out")
if len(destination) == 0 {
destination = setting.CustomConf
}
if destination != setting.CustomConf || changed {
err = cfg.SaveTo(destination)
if err != nil {
return err
}
}
if c.Bool("clear") {
for _, kv := range os.Environ() {
idx := strings.IndexByte(kv, '=')

View File

@@ -281,6 +281,10 @@ HTTP_PORT = 3000
; PORT_TO_REDIRECT.
REDIRECT_OTHER_PORT = false
PORT_TO_REDIRECT = 80
; Timeout for any write to the connection. (Set to 0 to disable all timeouts.)
PER_WRITE_TIMEOUT = 30s
; Timeout per Kb written to connections.
PER_WRITE_PER_KB_TIMEOUT = 30s
; Permission for unix socket
UNIX_SOCKET_PERMISSION = 666
; Local (DMZ) URL for Gitea workers (such as SSH update) accessing web service.

View File

@@ -24,9 +24,29 @@ if [ ! -f /data/ssh/ssh_host_ecdsa_key ]; then
ssh-keygen -t ecdsa -b 256 -f /data/ssh/ssh_host_ecdsa_key -N "" > /dev/null
fi
if [ -e /data/ssh/ssh_host_ed25519_cert ]; then
SSH_ED25519_CERT=${SSH_ED25519_CERT:-"/data/ssh/ssh_host_ed25519_cert"}
fi
if [ -e /data/ssh/ssh_host_rsa_cert ]; then
SSH_RSA_CERT=${SSH_RSA_CERT:-"/data/ssh/ssh_host_rsa_cert"}
fi
if [ -e /data/ssh/ssh_host_ecdsa_cert ]; then
SSH_ECDSA_CERT=${SSH_ECDSA_CERT:-"/data/ssh/ssh_host_ecdsa_cert"}
fi
if [ -e /data/ssh/ssh_host_dsa_cert ]; then
SSH_DSA_CERT=${SSH_DSA_CERT:-"/data/ssh/ssh_host_dsa_cert"}
fi
if [ -d /etc/ssh ]; then
SSH_PORT=${SSH_PORT:-"22"} \
SSH_LISTEN_PORT=${SSH_LISTEN_PORT:-"${SSH_PORT}"} \
SSH_ED25519_CERT="${SSH_ED25519_CERT:+"HostCertificate "}${SSH_ED25519_CERT}" \
SSH_RSA_CERT="${SSH_RSA_CERT:+"HostCertificate "}${SSH_RSA_CERT}" \
SSH_ECDSA_CERT="${SSH_ECDSA_CERT:+"HostCertificate "}${SSH_ECDSA_CERT}" \
SSH_DSA_CERT="${SSH_DSA_CERT:+"HostCertificate "}${SSH_DSA_CERT}" \
envsubst < /etc/templates/sshd_config > /etc/ssh/sshd_config
chmod 0644 /etc/ssh/sshd_config

View File

@@ -8,13 +8,13 @@ ListenAddress ::
LogLevel INFO
HostKey /data/ssh/ssh_host_ed25519_key
HostCertificate /data/ssh/ssh_host_ed25519_cert
${SSH_ED25519_CERT}
HostKey /data/ssh/ssh_host_rsa_key
HostCertificate /data/ssh/ssh_host_rsa_cert
${SSH_RSA_CERT}
HostKey /data/ssh/ssh_host_ecdsa_key
HostCertificate /data/ssh/ssh_host_ecdsa_cert
${SSH_ECDSA_CERT}
HostKey /data/ssh/ssh_host_dsa_key
HostCertificate /data/ssh/ssh_host_dsa_cert
${SSH_DSA_CERT}
AuthorizedKeysFile .ssh/authorized_keys
AuthorizedPrincipalsFile .ssh/authorized_principals

View File

@@ -31,4 +31,4 @@ update: $(THEME)
$(THEME): $(THEME)/theme.toml
$(THEME)/theme.toml:
mkdir -p $$(dirname $@)
curl -s $(ARCHIVE) | tar xz -C $$(dirname $@)
curl -L -s $(ARCHIVE) | tar xz -C $$(dirname $@)

View File

@@ -237,6 +237,9 @@ Values containing `#` or `;` must be quoted using `` ` `` or `"""`.
most cases you do not need to change the default value. Alter it only if
your SSH server node is not the same as HTTP node. Do not set this variable
if `PROTOCOL` is set to `unix`.
- `PER_WRITE_TIMEOUT`: **30s**: Timeout for any write to the connection. (Set to 0 to
disable all timeouts.)
- `PER_WRITE_PER_KB_TIMEOUT`: **10s**: Timeout per Kb written to connections.
- `DISABLE_SSH`: **false**: Disable SSH feature when it's not available.
- `START_SSH_SERVER`: **false**: When enabled, use the built-in SSH server.
@@ -260,6 +263,9 @@ Values containing `#` or `;` must be quoted using `` ` `` or `"""`.
- `SSH_KEY_TEST_PATH`: **/tmp**: Directory to create temporary files in when testing public keys using ssh-keygen, default is the system temporary directory.
- `SSH_KEYGEN_PATH`: **ssh-keygen**: Path to ssh-keygen, default is 'ssh-keygen' which means the shell is responsible for finding out which one to call.
- `SSH_EXPOSE_ANONYMOUS`: **false**: Enable exposure of SSH clone URL to anonymous visitors, default is false.
- `SSH_PER_WRITE_TIMEOUT`: **30s**: Timeout for any write to the SSH connections. (Set to
0 to disable all timeouts.)
- `SSH_PER_WRITE_PER_KB_TIMEOUT`: **10s**: Timeout per Kb written to SSH connections.
- `MINIMUM_KEY_SIZE_CHECK`: **true**: Indicate whether to check minimum key size with corresponding type.
- `OFFLINE_MODE`: **false**: Disables use of CDN for static files and Gravatar for profile pictures.

4
go.mod
View File

@@ -122,7 +122,7 @@ require (
github.com/unknwon/com v1.0.1
github.com/unknwon/i18n v0.0.0-20200823051745-09abd91c7f2c
github.com/unknwon/paginater v0.0.0-20200328080006-042474bd0eae
github.com/unrolled/render v1.1.0
github.com/unrolled/render v1.1.1
github.com/urfave/cli v1.22.5
github.com/willf/bitset v1.1.11 // indirect
github.com/xanzy/go-gitlab v0.44.0
@@ -149,7 +149,7 @@ require (
mvdan.cc/xurls/v2 v2.2.0
strk.kbt.io/projects/go/libravatar v0.0.0-20191008002943-06d1c002b251
xorm.io/builder v0.3.9
xorm.io/xorm v1.0.7
xorm.io/xorm v1.1.0
)
replace github.com/hashicorp/go-version => github.com/6543/go-version v1.2.4

41
go.sum
View File

@@ -996,6 +996,8 @@ github.com/quasoft/websspi v1.0.0 h1:5nDgdM5xSur9s+B5w2xQ5kxf5nUGqgFgU4W0aDLZ8Mw
github.com/quasoft/websspi v1.0.0/go.mod h1:HmVdl939dQ0WIXZhyik+ARdI03M6bQzaSEKcgpFmewk=
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/rcrowley/go-metrics v0.0.0-20190826022208-cac0b30c2563/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/remyoudompheng/bigfft v0.0.0-20200410134404-eec4a21b6bb0 h1:OdAsTTz6OkFY5QxjkYwrChwuRruF69c169dPK26NUlk=
github.com/remyoudompheng/bigfft v0.0.0-20200410134404-eec4a21b6bb0/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rivo/uniseg v0.1.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
@@ -1113,10 +1115,8 @@ github.com/unknwon/i18n v0.0.0-20200823051745-09abd91c7f2c h1:679/gJXwrsHC3RATr0
github.com/unknwon/i18n v0.0.0-20200823051745-09abd91c7f2c/go.mod h1:+5rDk6sDGpl3azws3O+f+GpFSyN9GVr0K8cvQLQM2ZQ=
github.com/unknwon/paginater v0.0.0-20200328080006-042474bd0eae h1:ihaXiJkaca54IaCSnEXtE/uSZOmPxKZhDfVLrzZLFDs=
github.com/unknwon/paginater v0.0.0-20200328080006-042474bd0eae/go.mod h1:1fdkY6xxl6ExVs2QFv7R0F5IRZHKA8RahhB9fMC9RvM=
github.com/unrolled/render v1.0.3 h1:baO+NG1bZSF2WR4zwh+0bMWauWky7DVrTOfvE2w+aFo=
github.com/unrolled/render v1.0.3/go.mod h1:gN9T0NhL4Bfbwu8ann7Ry/TGHYfosul+J0obPf6NBdM=
github.com/unrolled/render v1.1.0 h1:gvpR9hHxTt6DcGqRYuVVFcfd8rtK+nyEPUJN06KB57Q=
github.com/unrolled/render v1.1.0/go.mod h1:gN9T0NhL4Bfbwu8ann7Ry/TGHYfosul+J0obPf6NBdM=
github.com/unrolled/render v1.1.1 h1:FpzNzkvlJQIlVdVaqeVBGWiCS8gpbmjtrKpDmCn6p64=
github.com/unrolled/render v1.1.1/go.mod h1:gN9T0NhL4Bfbwu8ann7Ry/TGHYfosul+J0obPf6NBdM=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/urfave/cli v1.22.5 h1:lNq9sAHXK2qfdI8W+GRItjCEkI+2oR4d+MEHy1CKXoU=
@@ -1502,6 +1502,7 @@ golang.org/x/tools v0.0.0-20200928182047-19e03678916f/go.mod h1:z6u4i615ZeAfBE4X
golang.org/x/tools v0.0.0-20200929161345-d7fc70abf50f/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU=
golang.org/x/tools v0.0.0-20201022035929-9cf592e881e9/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201124115921-2c860bdd6e78/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201125231158-b5590deeca9b/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
@@ -1668,6 +1669,33 @@ honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.0.1-2020.1.4 h1:UoveltGrhghAA7ePc+e+QYDHXrBps2PqFZiHkGR/xK8=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
modernc.org/cc/v3 v3.31.5-0.20210308123301-7a3e9dab9009 h1:u0oCo5b9wyLr++HF3AN9JicGhkUxJhMz51+8TIZH9N0=
modernc.org/cc/v3 v3.31.5-0.20210308123301-7a3e9dab9009/go.mod h1:0R6jl1aZlIl2avnYfbfHBS1QB6/f+16mihBObaBC878=
modernc.org/ccgo/v3 v3.9.0 h1:JbcEIqjw4Agf+0g3Tc85YvfYqkkFOv6xBwS4zkfqSoA=
modernc.org/ccgo/v3 v3.9.0/go.mod h1:nQbgkn8mwzPdp4mm6BT6+p85ugQ7FrGgIcYaE7nSrpY=
modernc.org/httpfs v1.0.6 h1:AAgIpFZRXuYnkjftxTAZwMIiwEqAfk8aVB2/oA6nAeM=
modernc.org/httpfs v1.0.6/go.mod h1:7dosgurJGp0sPaRanU53W4xZYKh14wfzX420oZADeHM=
modernc.org/libc v1.7.13-0.20210308123627-12f642a52bb8/go.mod h1:U1eq8YWr/Kc1RWCMFUWEdkTg8OTcfLw2kY8EDwl039w=
modernc.org/libc v1.8.0 h1:Pp4uv9g0csgBMpGPABKtkieF6O5MGhfGo6ZiOdlYfR8=
modernc.org/libc v1.8.0/go.mod h1:U1eq8YWr/Kc1RWCMFUWEdkTg8OTcfLw2kY8EDwl039w=
modernc.org/mathutil v1.1.1/go.mod h1:mZW8CKdRPY1v87qxC/wUdX5O1qDzXMP5TH3wjfpga6E=
modernc.org/mathutil v1.2.2 h1:+yFk8hBprV+4c0U9GjFtL+dV3N8hOJ8JCituQcMShFY=
modernc.org/mathutil v1.2.2/go.mod h1:mZW8CKdRPY1v87qxC/wUdX5O1qDzXMP5TH3wjfpga6E=
modernc.org/memory v1.0.4 h1:utMBrFcpnQDdNsmM6asmyH/FM9TqLPS7XF7otpJmrwM=
modernc.org/memory v1.0.4/go.mod h1:nV2OApxradM3/OVbs2/0OsP6nPfakXpi50C7dcoHXlc=
modernc.org/opt v0.1.1 h1:/0RX92k9vwVeDXj+Xn23DKp2VJubL7k8qNffND6qn3A=
modernc.org/opt v0.1.1/go.mod h1:WdSiB5evDcignE70guQKxYUl14mgWtbClRi5wmkkTX0=
modernc.org/sqlite v1.10.1-0.20210314190707-798bbeb9bb84 h1:rgEUzE849tFlHSoeCrKyS9cZAljC+DY7MdMHKq6R6sY=
modernc.org/sqlite v1.10.1-0.20210314190707-798bbeb9bb84/go.mod h1:PGzq6qlhyYjL6uVbSgS6WoF7ZopTW/sI7+7p+mb4ZVU=
modernc.org/strutil v1.1.0 h1:+1/yCzZxY2pZwwrsbH+4T7BQMoLQ9QiBshRC9eicYsc=
modernc.org/strutil v1.1.0/go.mod h1:lstksw84oURvj9y3tn8lGvRxyRC1S2+g5uuIzNfIOBs=
modernc.org/tcl v1.5.0 h1:euZSUNfE0Fd4W8VqXI1Ly1v7fqDJoBuAV88Ea+SnaSs=
modernc.org/tcl v1.5.0/go.mod h1:gb57hj4pO8fRrK54zveIfFXBaMHK3SKJNWcmRw1cRzc=
modernc.org/token v1.0.0 h1:a0jaWiNMDhDUtqOj09wvjWWAqd3q7WpBulmL9H2egsk=
modernc.org/token v1.0.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=
modernc.org/z v1.0.1-0.20210308123920-1f282aa71362/go.mod h1:8/SRk5C/HgiQWCgXdfpb+1RvhORdkz5sw72d3jjtyqA=
modernc.org/z v1.0.1 h1:WyIDpEpAIx4Hel6q/Pcgj/VhaQV5XPJ2I6ryIYbjnpc=
modernc.org/z v1.0.1/go.mod h1:8/SRk5C/HgiQWCgXdfpb+1RvhORdkz5sw72d3jjtyqA=
mvdan.cc/xurls/v2 v2.2.0 h1:NSZPykBXJFCetGZykLAxaL6SIpvbVy/UFEniIfHAa8A=
mvdan.cc/xurls/v2 v2.2.0/go.mod h1:EV1RMtya9D6G5DMYPGD8zTQzaHet6Jh8gFlRgGRJeO8=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
@@ -1678,8 +1706,9 @@ sourcegraph.com/sourcegraph/appdash v0.0.0-20190731080439-ebfcffb1b5c0/go.mod h1
strk.kbt.io/projects/go/libravatar v0.0.0-20191008002943-06d1c002b251 h1:mUcz5b3FJbP5Cvdq7Khzn6J9OCUQJaBwgBkCR+MOwSs=
strk.kbt.io/projects/go/libravatar v0.0.0-20191008002943-06d1c002b251/go.mod h1:FJGmPh3vz9jSos1L/F91iAgnC/aejc0wIIrF2ZwJxdY=
xorm.io/builder v0.3.7/go.mod h1:aUW0S9eb9VCaPohFCH3j7czOx1PMW3i1HrSzbLYGBSE=
xorm.io/builder v0.3.8/go.mod h1:aUW0S9eb9VCaPohFCH3j7czOx1PMW3i1HrSzbLYGBSE=
xorm.io/builder v0.3.9 h1:Sd65/LdWyO7LR8+Cbd+e7mm3sK/7U9k0jS3999IDHMc=
xorm.io/builder v0.3.9/go.mod h1:aUW0S9eb9VCaPohFCH3j7czOx1PMW3i1HrSzbLYGBSE=
xorm.io/xorm v1.0.6/go.mod h1:uF9EtbhODq5kNWxMbnBEj8hRRZnlcNSz2t2N7HW/+A4=
xorm.io/xorm v1.0.7 h1:26yBTDVI+CfQpVz2Y88fISh+aiJXIPP4eNoTJlwzsC4=
xorm.io/xorm v1.0.7/go.mod h1:uF9EtbhODq5kNWxMbnBEj8hRRZnlcNSz2t2N7HW/+A4=
xorm.io/xorm v1.1.0 h1:mkEsQXLauZajiOld2cB2PkFcUZKePepPgs1bC1dw8RA=
xorm.io/xorm v1.1.0/go.mod h1:EDzNHMuCVZNszkIRSLL2nI0zX+nQE8RstAVranlSfqI=

View File

@@ -223,7 +223,7 @@ func TestAPIViewRepo(t *testing.T) {
DecodeJSON(t, resp, &repo)
assert.EqualValues(t, 1, repo.ID)
assert.EqualValues(t, "repo1", repo.Name)
assert.EqualValues(t, 2, repo.Releases)
assert.EqualValues(t, 1, repo.Releases)
assert.EqualValues(t, 1, repo.OpenIssues)
assert.EqualValues(t, 3, repo.OpenPulls)

View File

@@ -0,0 +1,69 @@
// Copyright 2021 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package integrations
import (
"io/ioutil"
"net/http"
"net/url"
"testing"
"github.com/stretchr/testify/assert"
)
func TestGitSmartHTTP(t *testing.T) {
onGiteaRun(t, testGitSmartHTTP)
}
func testGitSmartHTTP(t *testing.T, u *url.URL) {
var kases = []struct {
p string
code int
}{
{
p: "user2/repo1/info/refs",
code: 200,
},
{
p: "user2/repo1/HEAD",
code: 200,
},
{
p: "user2/repo1/objects/info/alternates",
code: 404,
},
{
p: "user2/repo1/objects/info/http-alternates",
code: 404,
},
{
p: "user2/repo1/../../custom/conf/app.ini",
code: 404,
},
{
p: "user2/repo1/objects/info/../../../../custom/conf/app.ini",
code: 404,
},
{
p: `user2/repo1/objects/info/..\..\..\..\custom\conf\app.ini`,
code: 400,
},
}
for _, kase := range kases {
t.Run(kase.p, func(t *testing.T) {
p := u.String() + kase.p
req, err := http.NewRequest("GET", p, nil)
assert.NoError(t, err)
req.SetBasicAuth("user2", userPassword)
resp, err := http.DefaultClient.Do(req)
assert.NoError(t, err)
defer resp.Body.Close()
assert.EqualValues(t, kase.code, resp.StatusCode)
_, err = ioutil.ReadAll(resp.Body)
assert.NoError(t, err)
})
}
}

View File

@@ -0,0 +1,35 @@
// Copyright 2021 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package integrations
import (
"fmt"
"net/http"
"testing"
"code.gitea.io/gitea/modules/setting"
"github.com/stretchr/testify/assert"
)
func TestGoGet(t *testing.T) {
defer prepareTestEnv(t)()
req := NewRequest(t, "GET", "/blah/glah/plah?go-get=1")
resp := MakeRequest(t, req, http.StatusOK)
expected := fmt.Sprintf(`<!doctype html>
<html>
<head>
<meta name="go-import" content="%[1]s:%[2]s/blah/glah git %[3]sblah/glah.git">
<meta name="go-source" content="%[1]s:%[2]s/blah/glah _ %[3]sblah/glah/src/branch/master{/dir} %[3]sblah/glah/src/branch/master{/dir}/{file}#L{line}">
</head>
<body>
go get --insecure %[1]s:%[2]s/blah/glah
</body>
</html>
`, setting.Domain, setting.HTTPPort, setting.AppURL)
assert.Equal(t, expected, resp.Body.String())
}

View File

@@ -1086,7 +1086,7 @@ func getIssuesByIDs(e Engine, issueIDs []int64) ([]*Issue, error) {
func getIssueIDsByRepoID(e Engine, repoID int64) ([]int64, error) {
ids := make([]int64, 0, 10)
err := e.Table("issue").Where("repo_id = ?", repoID).Find(&ids)
err := e.Table("issue").Cols("id").Where("repo_id = ?", repoID).Find(&ids)
return ids, err
}

View File

@@ -36,6 +36,14 @@ func TestIssue_ReplaceLabels(t *testing.T) {
testSuccess(1, []int64{})
}
func Test_GetIssueIDsByRepoID(t *testing.T) {
assert.NoError(t, PrepareTestDatabase())
ids, err := GetIssueIDsByRepoID(1)
assert.NoError(t, err)
assert.Len(t, ids, 5)
}
func TestIssueAPIURL(t *testing.T) {
assert.NoError(t, PrepareTestDatabase())
issue := AssertExistsAndLoadBean(t, &Issue{ID: 1}).(*Issue)

View File

@@ -21,6 +21,7 @@ import (
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/timeutil"
"code.gitea.io/gitea/modules/util"
gouuid "github.com/google/uuid"
jsoniter "github.com/json-iterator/go"
"xorm.io/xorm"
@@ -116,6 +117,7 @@ func (cfg *SMTPConfig) ToDB() ([]byte, error) {
// PAMConfig holds configuration for the PAM login source.
type PAMConfig struct {
ServiceName string // pam service (e.g. system-auth)
EmailDomain string
}
// FromDB fills up a PAMConfig from serialized format.
@@ -696,15 +698,26 @@ func LoginViaPAM(user *User, login, password string, sourceID int64, cfg *PAMCon
// Allow PAM sources with `@` in their name, like from Active Directory
username := pamLogin
email := pamLogin
idx := strings.Index(pamLogin, "@")
if idx > -1 {
username = pamLogin[:idx]
}
if ValidateEmail(email) != nil {
if cfg.EmailDomain != "" {
email = fmt.Sprintf("%s@%s", username, cfg.EmailDomain)
} else {
email = fmt.Sprintf("%s@%s", username, setting.Service.NoReplyAddress)
}
if ValidateEmail(email) != nil {
email = gouuid.New().String() + "@localhost"
}
}
user = &User{
LowerName: strings.ToLower(username),
Name: username,
Email: pamLogin,
Email: email,
Passwd: password,
LoginType: LoginPAM,
LoginSource: sourceID,

View File

@@ -88,7 +88,7 @@ func fixPublisherIDforTagReleases(x *xorm.Engine) error {
repo = new(Repository)
has, err := sess.ID(release.RepoID).Get(repo)
if err != nil {
log.Error("Error whilst loading repository[%d] for release[%d] with tag name %s", release.RepoID, release.ID, release.TagName)
log.Error("Error whilst loading repository[%d] for release[%d] with tag name %s. Error: %v", release.RepoID, release.ID, release.TagName, err)
return err
} else if !has {
log.Warn("Release[%d] is orphaned and refers to non-existing repository %d", release.ID, release.RepoID)
@@ -105,13 +105,13 @@ func fixPublisherIDforTagReleases(x *xorm.Engine) error {
}
if _, err := sess.ID(release.RepoID).Get(repo); err != nil {
log.Error("Error whilst loading repository[%d] for release[%d] with tag name %s", release.RepoID, release.ID, release.TagName)
log.Error("Error whilst loading repository[%d] for release[%d] with tag name %s. Error: %v", release.RepoID, release.ID, release.TagName, err)
return err
}
}
gitRepo, err = git.OpenRepository(repoPath(repo.OwnerName, repo.Name))
if err != nil {
log.Error("Error whilst opening git repo for %-v", repo)
log.Error("Error whilst opening git repo for [%d]%s/%s. Error: %v", repo.ID, repo.OwnerName, repo.Name, err)
return err
}
}
@@ -119,18 +119,36 @@ func fixPublisherIDforTagReleases(x *xorm.Engine) error {
commit, err := gitRepo.GetTagCommit(release.TagName)
if err != nil {
if git.IsErrNotExist(err) {
log.Warn("Unable to find commit %s for Tag: %s in %-v. Cannot update publisher ID.", err.(git.ErrNotExist).ID, release.TagName, repo)
log.Warn("Unable to find commit %s for Tag: %s in [%d]%s/%s. Cannot update publisher ID.", err.(git.ErrNotExist).ID, release.TagName, repo.ID, repo.OwnerName, repo.Name)
continue
}
log.Error("Error whilst getting commit for Tag: %s in %-v.", release.TagName, repo)
log.Error("Error whilst getting commit for Tag: %s in [%d]%s/%s. Error: %v", release.TagName, repo.ID, repo.OwnerName, repo.Name, err)
return fmt.Errorf("GetTagCommit: %v", err)
}
if commit.Author.Email == "" {
log.Warn("Tag: %s in Repo[%d]%s/%s does not have a tagger.", release.TagName, repo.ID, repo.OwnerName, repo.Name)
commit, err = gitRepo.GetCommit(commit.ID.String())
if err != nil {
if git.IsErrNotExist(err) {
log.Warn("Unable to find commit %s for Tag: %s in [%d]%s/%s. Cannot update publisher ID.", err.(git.ErrNotExist).ID, release.TagName, repo.ID, repo.OwnerName, repo.Name)
continue
}
log.Error("Error whilst getting commit for Tag: %s in [%d]%s/%s. Error: %v", release.TagName, repo.ID, repo.OwnerName, repo.Name, err)
return fmt.Errorf("GetCommit: %v", err)
}
}
if commit.Author.Email == "" {
log.Warn("Tag: %s in Repo[%d]%s/%s does not have a Tagger and its underlying commit does not have an Author either!", release.TagName, repo.ID, repo.OwnerName, repo.Name)
continue
}
if user == nil || !strings.EqualFold(user.Email, commit.Author.Email) {
user = new(User)
_, err = sess.Where("email=?", commit.Author.Email).Get(user)
if err != nil {
log.Error("Error whilst getting commit author by email: %s for Tag: %s in %-v.", commit.Author.Email, release.TagName, repo)
log.Error("Error whilst getting commit author by email: %s for Tag: %s in [%d]%s/%s. Error: %v", commit.Author.Email, release.TagName, repo.ID, repo.OwnerName, repo.Name, err)
return err
}
@@ -143,7 +161,7 @@ func fixPublisherIDforTagReleases(x *xorm.Engine) error {
release.PublisherID = user.ID
if _, err := sess.ID(release.ID).Cols("publisher_id").Update(release); err != nil {
log.Error("Error whilst updating publisher[%d] for release[%d] with tag name %s", release.PublisherID, release.ID, release.TagName)
log.Error("Error whilst updating publisher[%d] for release[%d] with tag name %s. Error: %v", release.PublisherID, release.ID, release.TagName, err)
return err
}
}

View File

@@ -1349,6 +1349,26 @@ func UpdateRepository(repo *Repository, visibilityChanged bool) (err error) {
return sess.Commit()
}
// UpdateRepositoryOwnerNames updates repository owner_names (this should only be used when the ownerName has changed case)
func UpdateRepositoryOwnerNames(ownerID int64, ownerName string) error {
if ownerID == 0 {
return nil
}
sess := x.NewSession()
defer sess.Close()
if err := sess.Begin(); err != nil {
return err
}
if _, err := sess.Where("owner_id = ?", ownerID).Cols("owner_name").Update(&Repository{
OwnerName: ownerName,
}); err != nil {
return err
}
return sess.Commit()
}
// UpdateRepositoryUpdatedTime updates a repository's updated time
func UpdateRepositoryUpdatedTime(repoID int64, updateTime time.Time) error {
_, err := x.Exec("UPDATE repository SET updated_unix = ? WHERE id = ?", updateTime.Unix(), repoID)

View File

@@ -8,8 +8,11 @@ import (
"fmt"
migration "code.gitea.io/gitea/modules/migrations/base"
"code.gitea.io/gitea/modules/secret"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/timeutil"
"code.gitea.io/gitea/modules/util"
jsoniter "github.com/json-iterator/go"
"xorm.io/builder"
@@ -110,6 +113,24 @@ func (task *Task) MigrateConfig() (*migration.MigrateOptions, error) {
if err != nil {
return nil, err
}
// decrypt credentials
if opts.CloneAddrEncrypted != "" {
if opts.CloneAddr, err = secret.DecryptSecret(setting.SecretKey, opts.CloneAddrEncrypted); err != nil {
return nil, err
}
}
if opts.AuthPasswordEncrypted != "" {
if opts.AuthPassword, err = secret.DecryptSecret(setting.SecretKey, opts.AuthPasswordEncrypted); err != nil {
return nil, err
}
}
if opts.AuthTokenEncrypted != "" {
if opts.AuthToken, err = secret.DecryptSecret(setting.SecretKey, opts.AuthTokenEncrypted); err != nil {
return nil, err
}
}
return &opts, nil
}
return nil, fmt.Errorf("Task type is %s, not Migrate Repo", task.Type.Name())
@@ -205,12 +226,31 @@ func createTask(e Engine, task *Task) error {
func FinishMigrateTask(task *Task) error {
task.Status = structs.TaskStatusFinished
task.EndTime = timeutil.TimeStampNow()
// delete credentials when we're done, they're a liability.
conf, err := task.MigrateConfig()
if err != nil {
return err
}
conf.AuthPassword = ""
conf.AuthToken = ""
conf.CloneAddr = util.SanitizeURLCredentials(conf.CloneAddr, true)
conf.AuthPasswordEncrypted = ""
conf.AuthTokenEncrypted = ""
conf.CloneAddrEncrypted = ""
json := jsoniter.ConfigCompatibleWithStandardLibrary
confBytes, err := json.Marshal(conf)
if err != nil {
return err
}
task.PayloadContent = string(confBytes)
sess := x.NewSession()
defer sess.Close()
if err := sess.Begin(); err != nil {
return err
}
if _, err := sess.ID(task.ID).Cols("status", "end_time").Update(task); err != nil {
if _, err := sess.ID(task.ID).Cols("status", "end_time", "payload_content").Update(task); err != nil {
return err
}

View File

@@ -57,9 +57,15 @@ func GetAccessTokenBySHA(token string) (*AccessToken, error) {
if token == "" {
return nil, ErrAccessTokenEmpty{}
}
if len(token) < 8 {
// A token is defined as being SHA1 sum these are 40 hexadecimal bytes long
if len(token) != 40 {
return nil, ErrAccessTokenNotExist{token}
}
for _, x := range []byte(token) {
if x < '0' || (x > '9' && x < 'a') || x > 'f' {
return nil, ErrAccessTokenNotExist{token}
}
}
var tokens []AccessToken
lastEight := token[len(token)-8:]
err := x.Table(&AccessToken{}).Where("token_last_eight = ?", lastEight).Find(&tokens)

View File

@@ -21,6 +21,7 @@ import (
"strings"
"time"
"unicode"
"unicode/utf8"
"code.gitea.io/gitea/modules/git"
"code.gitea.io/gitea/modules/log"
@@ -213,19 +214,19 @@ func EllipsisString(str string, length int) string {
if length <= 3 {
return "..."
}
if len(str) <= length {
if utf8.RuneCountInString(str) <= length {
return str
}
return str[:length-3] + "..."
return string([]rune(str)[:length-3]) + "..."
}
// TruncateString returns a truncated string with given limit,
// it returns input string if length is not reached limit.
func TruncateString(str string, limit int) string {
if len(str) < limit {
if utf8.RuneCountInString(str) < limit {
return str
}
return str[:limit]
return string([]rune(str)[:limit])
}
// StringsToInt64s converts a slice of string to a slice of int64.

View File

@@ -170,6 +170,10 @@ func TestEllipsisString(t *testing.T) {
assert.Equal(t, "fo...", EllipsisString("foobar", 5))
assert.Equal(t, "foobar", EllipsisString("foobar", 6))
assert.Equal(t, "foobar", EllipsisString("foobar", 10))
assert.Equal(t, "测...", EllipsisString("测试文本一二三四", 4))
assert.Equal(t, "测试...", EllipsisString("测试文本一二三四", 5))
assert.Equal(t, "测试文...", EllipsisString("测试文本一二三四", 6))
assert.Equal(t, "测试文本一二三四", EllipsisString("测试文本一二三四", 10))
}
func TestTruncateString(t *testing.T) {
@@ -181,6 +185,10 @@ func TestTruncateString(t *testing.T) {
assert.Equal(t, "fooba", TruncateString("foobar", 5))
assert.Equal(t, "foobar", TruncateString("foobar", 6))
assert.Equal(t, "foobar", TruncateString("foobar", 7))
assert.Equal(t, "测试文本", TruncateString("测试文本一二三四", 4))
assert.Equal(t, "测试文本一", TruncateString("测试文本一二三四", 5))
assert.Equal(t, "测试文本一二", TruncateString("测试文本一二三四", 6))
assert.Equal(t, "测试文本一二三", TruncateString("测试文本一二三四", 7))
}
func TestStringsToInt64s(t *testing.T) {

View File

@@ -49,7 +49,7 @@ func (r *Response) Write(bs []byte) (int, error) {
return size, err
}
if r.status == 0 {
r.WriteHeader(200)
r.status = http.StatusOK
}
return size, nil
}

View File

@@ -89,7 +89,7 @@ func innerToRepo(repo *models.Repository, mode models.AccessMode, isParent bool)
return nil
}
numReleases, _ := models.GetReleaseCountByRepoID(repo.ID, models.FindReleasesOptions{IncludeDrafts: false, IncludeTags: true})
numReleases, _ := models.GetReleaseCountByRepoID(repo.ID, models.FindReleasesOptions{IncludeDrafts: false, IncludeTags: false})
mirrorInterval := ""
if repo.IsMirror {

View File

@@ -23,7 +23,7 @@ func checkDBVersion(logger log.Logger, autofix bool) error {
err = models.NewEngine(context.Background(), migrations.Migrate)
if err != nil {
logger.Critical("Error: %v during migration")
logger.Critical("Error: %v during migration", err)
}
return err
}

View File

@@ -6,6 +6,7 @@
package emoji
import (
"io"
"sort"
"strings"
"sync"
@@ -145,6 +146,8 @@ func (n *rememberSecondWriteWriter) Write(p []byte) (int, error) {
if n.writecount == 2 {
n.idx = n.pos
n.end = n.pos + len(p)
n.pos += len(p)
return len(p), io.EOF
}
n.pos += len(p)
return len(p), nil
@@ -155,6 +158,8 @@ func (n *rememberSecondWriteWriter) WriteString(s string) (int, error) {
if n.writecount == 2 {
n.idx = n.pos
n.end = n.pos + len(s)
n.pos += len(s)
return len(s), io.EOF
}
n.pos += len(s)
return len(s), nil

View File

@@ -51,6 +51,7 @@ type AuthenticationForm struct {
TLS bool
SkipVerify bool
PAMServiceName string
PAMEmailDomain string
Oauth2Provider string
Oauth2Key string
Oauth2Secret string

View File

@@ -149,17 +149,18 @@ headerLoop:
// constant hextable to help quickly convert between 20byte and 40byte hashes
const hextable = "0123456789abcdef"
// To40ByteSHA converts a 20-byte SHA in a 40-byte slice into a 40-byte sha in place
// without allocations. This is at least 100x quicker that hex.EncodeToString
// NB This requires that sha is a 40-byte slice
func To40ByteSHA(sha []byte) []byte {
// To40ByteSHA converts a 20-byte SHA into a 40-byte sha. Input and output can be the
// same 40 byte slice to support in place conversion without allocations.
// This is at least 100x quicker that hex.EncodeToString
// NB This requires that out is a 40-byte slice
func To40ByteSHA(sha, out []byte) []byte {
for i := 19; i >= 0; i-- {
v := sha[i]
vhi, vlo := v>>4, v&0x0f
shi, slo := hextable[vhi], hextable[vlo]
sha[i*2], sha[i*2+1] = shi, slo
out[i*2], out[i*2+1] = shi, slo
}
return sha
return out
}
// ParseTreeLineSkipMode reads an entry from a tree in a cat-file --batch stream

View File

@@ -124,12 +124,18 @@ func (c *Command) RunInDirTimeoutEnvFullPipelineFunc(env []string, timeout time.
cmd := exec.CommandContext(ctx, c.name, c.args...)
if env == nil {
cmd.Env = append(os.Environ(), fmt.Sprintf("LC_ALL=%s", DefaultLocale))
cmd.Env = os.Environ()
} else {
cmd.Env = env
cmd.Env = append(cmd.Env, fmt.Sprintf("LC_ALL=%s", DefaultLocale))
}
cmd.Env = append(
cmd.Env,
fmt.Sprintf("LC_ALL=%s", DefaultLocale),
// avoid prompting for credentials interactively, supported since git v2.3
"GIT_TERMINAL_PROMPT=0",
)
// TODO: verify if this is still needed in golang 1.15
if goVersionLessThan115 {
cmd.Env = append(cmd.Env, "GODEBUG=asyncpreemptoff=1")

View File

@@ -303,7 +303,7 @@ revListLoop:
commits[0] = string(commitID)
}
}
treeID = To40ByteSHA(treeID)
treeID = To40ByteSHA(treeID, treeID)
_, err = batchStdinWriter.Write(treeID)
if err != nil {
return nil, err

View File

@@ -18,6 +18,8 @@ import (
func CommitFromReader(gitRepo *Repository, sha SHA1, reader io.Reader) (*Commit, error) {
commit := &Commit{
ID: sha,
Author: &Signature{},
Committer: &Signature{},
}
payloadSB := new(strings.Builder)

View File

@@ -43,8 +43,6 @@ func FindLFSFile(repo *git.Repository, hash git.SHA1) ([]*LFSResult, error) {
basePath := repo.Path
hashStr := hash.String()
// Use rev-list to provide us with all commits in order
revListReader, revListWriter := io.Pipe()
defer func() {
@@ -74,7 +72,7 @@ func FindLFSFile(repo *git.Repository, hash git.SHA1) ([]*LFSResult, error) {
fnameBuf := make([]byte, 4096)
modeBuf := make([]byte, 40)
workingShaBuf := make([]byte, 40)
workingShaBuf := make([]byte, 20)
for scan.Scan() {
// Get the next commit ID
@@ -132,8 +130,7 @@ func FindLFSFile(repo *git.Repository, hash git.SHA1) ([]*LFSResult, error) {
return nil, err
}
n += int64(count)
sha := git.To40ByteSHA(sha20byte)
if bytes.Equal(sha, []byte(hashStr)) {
if bytes.Equal(sha20byte, hash[:]) {
result := LFSResult{
Name: curPath + string(fname),
SHA: curCommit.ID.String(),
@@ -143,7 +140,9 @@ func FindLFSFile(repo *git.Repository, hash git.SHA1) ([]*LFSResult, error) {
}
resultsMap[curCommit.ID.String()+":"+curPath+string(fname)] = &result
} else if string(mode) == git.EntryModeTree.String() {
trees = append(trees, sha)
sha40Byte := make([]byte, 40)
git.To40ByteSHA(sha20byte, sha40Byte)
trees = append(trees, sha40Byte)
paths = append(paths, curPath+string(fname)+"/")
}
}

View File

@@ -7,23 +7,18 @@ package git
import (
"path/filepath"
"testing"
"time"
"github.com/stretchr/testify/assert"
)
func TestGetLatestCommitTime(t *testing.T) {
lct, err := GetLatestCommitTime(".")
bareRepo1Path := filepath.Join(testReposDir, "repo1_bare")
lct, err := GetLatestCommitTime(bareRepo1Path)
assert.NoError(t, err)
// Time is in the past
now := time.Now()
assert.True(t, lct.Unix() < now.Unix(), "%d not smaller than %d", lct, now)
// Time is after Mon Oct 23 03:52:09 2017 +0300
// Time is Sun Jul 21 22:43:13 2019 +0200
// which is the time of commit
// d47b98c44c9a6472e44ab80efe65235e11c6da2a
refTime, err := time.Parse("Mon Jan 02 15:04:05 2006 -0700", "Mon Oct 23 03:52:09 2017 +0300")
assert.NoError(t, err)
assert.True(t, lct.Unix() > refTime.Unix(), "%d not greater than %d", lct, refTime)
// feaf4ba6bc635fec442f46ddd4512416ec43c2c2 (refs/heads/master)
assert.EqualValues(t, 1563741793, lct.Unix())
}
func TestRepoIsEmpty(t *testing.T) {

View File

@@ -35,6 +35,7 @@ func (tag *Tag) Commit() (*Commit, error) {
// \n\n separate headers from message
func parseTagData(data []byte) (*Tag, error) {
tag := new(Tag)
tag.Tagger = &Signature{}
// we now have the contents of the commit object. Let's investigate...
nextline := 0
l:

View File

@@ -17,6 +17,7 @@ import (
"time"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/setting"
)
var (
@@ -26,6 +27,10 @@ var (
DefaultWriteTimeOut time.Duration
// DefaultMaxHeaderBytes default max header bytes
DefaultMaxHeaderBytes int
// PerWriteWriteTimeout timeout for writes
PerWriteWriteTimeout = 30 * time.Second
// PerWriteWriteTimeoutKbTime is a timeout taking account of how much there is to be written
PerWriteWriteTimeoutKbTime = 10 * time.Second
)
func init() {
@@ -45,6 +50,8 @@ type Server struct {
lock *sync.RWMutex
BeforeBegin func(network, address string)
OnShutdown func()
PerWriteTimeout time.Duration
PerWritePerKbTimeout time.Duration
}
// NewServer creates a server on network at provided address
@@ -60,6 +67,8 @@ func NewServer(network, address, name string) *Server {
lock: &sync.RWMutex{},
network: network,
address: address,
PerWriteTimeout: setting.PerWriteTimeout,
PerWritePerKbTimeout: setting.PerWritePerKbTimeout,
}
srv.BeforeBegin = func(network, addr string) {
@@ -224,6 +233,8 @@ func (wl *wrappedListener) Accept() (net.Conn, error) {
Conn: c,
server: wl.server,
closed: &closed,
perWriteTimeout: wl.server.PerWriteTimeout,
perWritePerKbTimeout: wl.server.PerWritePerKbTimeout,
}
wl.server.wg.Add(1)
@@ -248,6 +259,23 @@ type wrappedConn struct {
net.Conn
server *Server
closed *int32
deadline time.Time
perWriteTimeout time.Duration
perWritePerKbTimeout time.Duration
}
func (w wrappedConn) Write(p []byte) (n int, err error) {
if w.perWriteTimeout > 0 {
minTimeout := time.Duration(len(p)/1024) * w.perWritePerKbTimeout
minDeadline := time.Now().Add(minTimeout).Add(w.perWriteTimeout)
w.deadline = w.deadline.Add(minTimeout)
if minDeadline.After(w.deadline) {
w.deadline = minDeadline
}
_ = w.Conn.SetWriteDeadline(w.deadline)
}
return w.Conn.Write(p)
}
func (w wrappedConn) Close() error {

View File

@@ -87,6 +87,7 @@ func isLinkStr(link string) bool {
return validLinksPattern.MatchString(link)
}
// FIXME: This function is not concurrent safe
func getIssueFullPattern() *regexp.Regexp {
if issueFullPattern == nil {
issueFullPattern = regexp.MustCompile(regexp.QuoteMeta(setting.AppURL) +
@@ -403,24 +404,19 @@ func (ctx *postProcessCtx) visitNode(node *html.Node, visitText bool) {
}
case html.ElementNode:
if node.Data == "img" {
attrs := node.Attr
for idx, attr := range attrs {
for _, attr := range node.Attr {
if attr.Key != "src" {
continue
}
link := []byte(attr.Val)
if len(link) > 0 && !IsLink(link) {
if len(attr.Val) > 0 && !isLinkStr(attr.Val) && !strings.HasPrefix(attr.Val, "data:image/") {
prefix := ctx.urlPrefix
if ctx.isWikiMarkdown {
prefix = util.URLJoin(prefix, "wiki", "raw")
}
prefix = strings.Replace(prefix, "/src/", "/media/", 1)
lnk := string(link)
lnk = util.URLJoin(prefix, lnk)
link = []byte(lnk)
attr.Val = util.URLJoin(prefix, attr.Val)
}
node.Attr[idx].Val = string(link)
}
} else if node.Data == "a" {
visitText = false
@@ -610,11 +606,16 @@ func replaceContentList(node *html.Node, i, j int, newNodes []*html.Node) {
}
func mentionProcessor(ctx *postProcessCtx, node *html.Node) {
start := 0
next := node.NextSibling
for node != nil && node != next && start < len(node.Data) {
// We replace only the first mention; other mentions will be addressed later
found, loc := references.FindFirstMentionBytes([]byte(node.Data))
found, loc := references.FindFirstMentionBytes([]byte(node.Data[start:]))
if !found {
return
}
loc.Start += start
loc.End += start
mention := node.Data[loc.Start:loc.End]
var teams string
teams, ok := ctx.metas["teams"]
@@ -626,10 +627,17 @@ func mentionProcessor(ctx *postProcessCtx, node *html.Node) {
mentionOrgAndTeam := strings.Split(mention, "/")
if mentionOrgAndTeam[0][1:] == ctx.metas["org"] && strings.Contains(teams, ","+strings.ToLower(mentionOrgAndTeam[1])+",") {
replaceContent(node, loc.Start, loc.End, createLink(util.URLJoin(setting.AppURL, "org", ctx.metas["org"], "teams", mentionOrgAndTeam[1]), mention, "mention"))
node = node.NextSibling.NextSibling
start = 0
continue
}
return
start = loc.End
continue
}
replaceContent(node, loc.Start, loc.End, createLink(util.URLJoin(setting.AppURL, mention[1:]), mention, "mention"))
node = node.NextSibling.NextSibling
start = 0
}
}
func shortLinkProcessor(ctx *postProcessCtx, node *html.Node) {
@@ -637,6 +645,8 @@ func shortLinkProcessor(ctx *postProcessCtx, node *html.Node) {
}
func shortLinkProcessorFull(ctx *postProcessCtx, node *html.Node, noLink bool) {
next := node.NextSibling
for node != nil && node != next {
m := shortLinkPattern.FindStringSubmatchIndex(node.Data)
if m == nil {
return
@@ -716,7 +726,7 @@ func shortLinkProcessorFull(ctx *postProcessCtx, node *html.Node, noLink bool) {
switch ext := filepath.Ext(link); ext {
// fast path: empty string, ignore
case "":
break
// leave image as false
case ".jpg", ".jpeg", ".png", ".tif", ".tiff", ".webp", ".gif", ".bmp", ".ico", ".svg":
image = true
}
@@ -792,12 +802,16 @@ func shortLinkProcessorFull(ctx *postProcessCtx, node *html.Node, noLink bool) {
linkNode.Attr = []html.Attribute{{Key: "href", Val: link}}
}
replaceContent(node, m[0], m[1], linkNode)
node = node.NextSibling.NextSibling
}
}
func fullIssuePatternProcessor(ctx *postProcessCtx, node *html.Node) {
if ctx.metas == nil {
return
}
next := node.NextSibling
for node != nil && node != next {
m := getIssueFullPattern().FindStringSubmatchIndex(node.Data)
if m == nil {
return
@@ -815,23 +829,25 @@ func fullIssuePatternProcessor(ctx *postProcessCtx, node *html.Node) {
// TODO if m[4]:m[5] is not nil, then link is to a comment,
// and we should indicate that in the text somehow
replaceContent(node, m[0], m[1], createLink(link, id, "ref-issue"))
} else {
orgRepoID := matchOrg + "/" + matchRepo + id
replaceContent(node, m[0], m[1], createLink(link, orgRepoID, "ref-issue"))
}
node = node.NextSibling.NextSibling
}
}
func issueIndexPatternProcessor(ctx *postProcessCtx, node *html.Node) {
if ctx.metas == nil {
return
}
var (
found bool
ref *references.RenderizableReference
)
next := node.NextSibling
for node != nil && node != next {
_, exttrack := ctx.metas["format"]
alphanum := ctx.metas["style"] == IssueNameStyleAlphanumeric
@@ -872,7 +888,8 @@ func issueIndexPatternProcessor(ctx *postProcessCtx, node *html.Node) {
if ref.Action == references.XRefActionNone {
replaceContent(node, ref.RefLocation.Start, ref.RefLocation.End, link)
return
node = node.NextSibling.NextSibling
continue
}
// Decorate action keywords if actionable
@@ -890,6 +907,8 @@ func issueIndexPatternProcessor(ctx *postProcessCtx, node *html.Node) {
Data: node.Data[ref.ActionLocation.End:ref.RefLocation.Start],
}
replaceContentList(node, ref.ActionLocation.Start, ref.RefLocation.End, []*html.Node{keyword, spaces, link})
node = node.NextSibling.NextSibling.NextSibling.NextSibling
}
}
// fullSha1PatternProcessor renders SHA containing URLs
@@ -897,6 +916,9 @@ func fullSha1PatternProcessor(ctx *postProcessCtx, node *html.Node) {
if ctx.metas == nil {
return
}
next := node.NextSibling
for node != nil && node != next {
m := anySHA1Pattern.FindStringSubmatchIndex(node.Data)
if m == nil {
return
@@ -941,15 +963,23 @@ func fullSha1PatternProcessor(ctx *postProcessCtx, node *html.Node) {
}
replaceContent(node, start, end, createCodeLink(urlFull, text, "commit"))
node = node.NextSibling.NextSibling
}
}
// emojiShortCodeProcessor for rendering text like :smile: into emoji
func emojiShortCodeProcessor(ctx *postProcessCtx, node *html.Node) {
m := EmojiShortCodeRegex.FindStringSubmatchIndex(node.Data)
start := 0
next := node.NextSibling
for node != nil && node != next && start < len(node.Data) {
m := EmojiShortCodeRegex.FindStringSubmatchIndex(node.Data[start:])
if m == nil {
return
}
m[0] += start
m[1] += start
start = m[1]
alias := node.Data[m[0]:m[1]]
alias = strings.ReplaceAll(alias, ":", "")
@@ -959,25 +989,39 @@ func emojiShortCodeProcessor(ctx *postProcessCtx, node *html.Node) {
s := strings.Join(setting.UI.Reactions, " ") + "gitea"
if strings.Contains(s, alias) {
replaceContent(node, m[0], m[1], createCustomEmoji(alias, "emoji"))
return
node = node.NextSibling.NextSibling
start = 0
continue
}
return
continue
}
replaceContent(node, m[0], m[1], createEmoji(converted.Emoji, "emoji", converted.Description))
node = node.NextSibling.NextSibling
start = 0
}
}
// emoji processor to match emoji and add emoji class
func emojiProcessor(ctx *postProcessCtx, node *html.Node) {
m := emoji.FindEmojiSubmatchIndex(node.Data)
start := 0
next := node.NextSibling
for node != nil && node != next && start < len(node.Data) {
m := emoji.FindEmojiSubmatchIndex(node.Data[start:])
if m == nil {
return
}
m[0] += start
m[1] += start
codepoint := node.Data[m[0]:m[1]]
start = m[1]
val := emoji.FromCode(codepoint)
if val != nil {
replaceContent(node, m[0], m[1], createEmoji(codepoint, "emoji", val.Description))
node = node.NextSibling.NextSibling
start = 0
}
}
}
@@ -987,10 +1031,17 @@ func sha1CurrentPatternProcessor(ctx *postProcessCtx, node *html.Node) {
if ctx.metas == nil || ctx.metas["user"] == "" || ctx.metas["repo"] == "" || ctx.metas["repoPath"] == "" {
return
}
m := sha1CurrentPattern.FindStringSubmatchIndex(node.Data)
start := 0
next := node.NextSibling
for node != nil && node != next && start < len(node.Data) {
m := sha1CurrentPattern.FindStringSubmatchIndex(node.Data[start:])
if m == nil {
return
}
m[2] += start
m[3] += start
hash := node.Data[m[2]:m[3]]
// The regex does not lie, it matches the hash pattern.
// However, a regex cannot know if a hash actually exists or not.
@@ -1004,32 +1055,45 @@ func sha1CurrentPatternProcessor(ctx *postProcessCtx, node *html.Node) {
if !strings.Contains(err.Error(), "fatal: Needed a single revision") {
log.Debug("sha1CurrentPatternProcessor git rev-parse: %v", err)
}
return
start = m[3]
continue
}
replaceContent(node, m[2], m[3],
createCodeLink(util.URLJoin(setting.AppURL, ctx.metas["user"], ctx.metas["repo"], "commit", hash), base.ShortSha(hash), "commit"))
start = 0
node = node.NextSibling.NextSibling
}
}
// emailAddressProcessor replaces raw email addresses with a mailto: link.
func emailAddressProcessor(ctx *postProcessCtx, node *html.Node) {
next := node.NextSibling
for node != nil && node != next {
m := emailRegex.FindStringSubmatchIndex(node.Data)
if m == nil {
return
}
mail := node.Data[m[2]:m[3]]
replaceContent(node, m[2], m[3], createLink("mailto:"+mail, mail, "mailto"))
node = node.NextSibling.NextSibling
}
}
// linkProcessor creates links for any HTTP or HTTPS URL not captured by
// markdown.
func linkProcessor(ctx *postProcessCtx, node *html.Node) {
next := node.NextSibling
for node != nil && node != next {
m := common.LinkRegex.FindStringIndex(node.Data)
if m == nil {
return
}
uri := node.Data[m[0]:m[1]]
replaceContent(node, m[0], m[1], createLink(uri, uri, "link"))
node = node.NextSibling.NextSibling
}
}
func genDefaultLinkProcessor(defaultLink string) processor {
@@ -1053,12 +1117,17 @@ func genDefaultLinkProcessor(defaultLink string) processor {
// descriptionLinkProcessor creates links for DescriptionHTML
func descriptionLinkProcessor(ctx *postProcessCtx, node *html.Node) {
next := node.NextSibling
for node != nil && node != next {
m := common.LinkRegex.FindStringIndex(node.Data)
if m == nil {
return
}
uri := node.Data[m[0]:m[1]]
replaceContent(node, m[0], m[1], createDescriptionLink(uri, uri))
node = node.NextSibling.NextSibling
}
}
func createDescriptionLink(href, content string) *html.Node {

View File

@@ -408,3 +408,36 @@ func Test_ParseClusterFuzz(t *testing.T) {
assert.NotContains(t, string(val), "<html")
}
func TestIssue16020(t *testing.T) {
setting.AppURL = AppURL
setting.AppSubURL = AppSubURL
var localMetas = map[string]string{
"user": "go-gitea",
"repo": "gitea",
}
data := `<img src="data:image/png;base64,i//V"/>`
// func PostProcess(rawHTML []byte, urlPrefix string, metas map[string]string, isWikiMarkdown bool) ([]byte, error)
res, err := PostProcess([]byte(data), "https://example.com", localMetas, false)
assert.NoError(t, err)
assert.Equal(t, data, string(res))
}
func BenchmarkEmojiPostprocess(b *testing.B) {
data := "🥰 "
for len(data) < 1<<16 {
data += data
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := PostProcess(
[]byte(data),
"https://example.com",
localMetas,
false)
assert.NoError(b, err)
}
}

View File

@@ -50,9 +50,6 @@ func ReplaceSanitizer() {
sanitizer.policy.AllowURLSchemes(setting.Markdown.CustomURLSchemes...)
}
// Allow keyword markup
sanitizer.policy.AllowAttrs("class").Matching(regexp.MustCompile(`^` + keywordClass + `$`)).OnElements("span")
// Allow classes for anchors
sanitizer.policy.AllowAttrs("class").Matching(regexp.MustCompile(`ref-issue`)).OnElements("a")
@@ -68,8 +65,8 @@ func ReplaceSanitizer() {
// Allow classes for emojis
sanitizer.policy.AllowAttrs("class").Matching(regexp.MustCompile(`emoji`)).OnElements("img")
// Allow icons, emojis, and chroma syntax on span
sanitizer.policy.AllowAttrs("class").Matching(regexp.MustCompile(`^((icon(\s+[\p{L}\p{N}_-]+)+)|(emoji))$|^([a-z][a-z0-9]{0,2})$`)).OnElements("span")
// Allow icons, emojis, chroma syntax and keyword markup on span
sanitizer.policy.AllowAttrs("class").Matching(regexp.MustCompile(`^((icon(\s+[\p{L}\p{N}_-]+)+)|(emoji))$|^([a-z][a-z0-9]{0,2})$|^` + keywordClass + `$`)).OnElements("span")
// Allow generally safe attributes
generalSafeAttrs := []string{"abbr", "accept", "accept-charset",

View File
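A small standalone sketch of what the widened span class policy above accepts, assuming keywordClass is the issue-keyword constant defined elsewhere in this package; the sample class names are illustrative only.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	keywordClass := "issue-keyword" // assumed value of the package constant
	re := regexp.MustCompile(`^((icon(\s+[\p{L}\p{N}_-]+)+)|(emoji))$|^([a-z][a-z0-9]{0,2})$|^` + keywordClass + `$`)
	for _, class := range []string{"icon octicon-link", "emoji", "nx", "issue-keyword", "user-mention"} {
		// icon, emoji, chroma and keyword classes pass; anything else is stripped.
		fmt.Println(class, re.MatchString(class))
	}
}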

@@ -12,9 +12,12 @@ import "code.gitea.io/gitea/modules/structs"
type MigrateOptions struct {
// required: true
CloneAddr string `json:"clone_addr" binding:"Required"`
CloneAddrEncrypted string `json:"clone_addr_encrypted,omitempty"`
AuthUsername string `json:"auth_username"`
AuthPassword string `json:"auth_password"`
AuthToken string `json:"auth_token"`
AuthPassword string `json:"auth_password,omitempty"`
AuthPasswordEncrypted string `json:"auth_password_encrypted,omitempty"`
AuthToken string `json:"auth_token,omitempty"`
AuthTokenEncrypted string `json:"auth_token_encrypted,omitempty"`
// required: true
UID int `json:"uid" binding:"Required"`
// required: true

View File

@@ -13,6 +13,7 @@ import (
"os"
"path/filepath"
"strconv"
"strings"
"time"
"code.gitea.io/gitea/models"
@@ -563,8 +564,42 @@ func DumpRepository(ctx context.Context, baseDir, ownerName string, opts base.Mi
return nil
}
func updateOptionsUnits(opts *base.MigrateOptions, units []string) {
if len(units) == 0 {
opts.Wiki = true
opts.Issues = true
opts.Milestones = true
opts.Labels = true
opts.Releases = true
opts.Comments = true
opts.PullRequests = true
opts.ReleaseAssets = true
} else {
for _, unit := range units {
switch strings.ToLower(unit) {
case "wiki":
opts.Wiki = true
case "issues":
opts.Issues = true
case "milestones":
opts.Milestones = true
case "labels":
opts.Labels = true
case "releases":
opts.Releases = true
case "release_assets":
opts.ReleaseAssets = true
case "comments":
opts.Comments = true
case "pull_requests":
opts.PullRequests = true
}
}
}
}
// RestoreRepository restores a repository from the disk directory
func RestoreRepository(ctx context.Context, baseDir string, ownerName, repoName string) error {
func RestoreRepository(ctx context.Context, baseDir string, ownerName, repoName string, units []string) error {
doer, err := models.GetAdminUser()
if err != nil {
return err
@@ -580,17 +615,12 @@ func RestoreRepository(ctx context.Context, baseDir string, ownerName, repoName
}
tp, _ := strconv.Atoi(opts["service_type"])
if err = migrateRepository(downloader, uploader, base.MigrateOptions{
Wiki: true,
Issues: true,
Milestones: true,
Labels: true,
Releases: true,
Comments: true,
PullRequests: true,
ReleaseAssets: true,
var migrateOpts = base.MigrateOptions{
GitServiceType: structs.GitServiceType(tp),
}); err != nil {
}
updateOptionsUnits(&migrateOpts, units)
if err = migrateRepository(downloader, uploader, migrateOpts); err != nil {
if err1 := uploader.Rollback(); err1 != nil {
log.Error("rollback failed: %v", err1)
}

View File
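A hedged test-style sketch of the unit filtering introduced above; the test name is hypothetical and only illustrates which flags a given units list switches on.

package migrations

import (
	"testing"

	"code.gitea.io/gitea/modules/migrations/base"
	"github.com/stretchr/testify/assert"
)

func TestUpdateOptionsUnits(t *testing.T) {
	var opts base.MigrateOptions
	updateOptionsUnits(&opts, []string{"wiki", "pull_requests"})
	assert.True(t, opts.Wiki)
	assert.True(t, opts.PullRequests)
	assert.False(t, opts.Issues) // not requested, so it stays disabled
}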

@@ -248,7 +248,8 @@ func (g *GiteaLocalUploader) CreateReleases(releases ...*base.Release) error {
rel.OriginalAuthorID = release.PublisherID
}
// calc NumCommits
// calc NumCommits if no draft
if !release.Draft {
commit, err := g.gitRepo.GetCommit(rel.TagName)
if err != nil {
return fmt.Errorf("GetCommit: %v", err)
@@ -257,6 +258,7 @@ func (g *GiteaLocalUploader) CreateReleases(releases ...*base.Release) error {
if err != nil {
return fmt.Errorf("CommitsCount: %v", err)
}
}
for _, asset := range release.Assets {
var attach = models.Attachment{
@@ -268,9 +270,10 @@ func (g *GiteaLocalUploader) CreateReleases(releases ...*base.Release) error {
}
// download attachment
err = func() error {
err := func() error {
// asset.DownloadURL maybe a local file
var rc io.ReadCloser
var err error
if asset.DownloadURL == nil {
rc, err = asset.DownloadFunc()
if err != nil {
@@ -849,6 +852,7 @@ func (g *GiteaLocalUploader) CreateReviews(reviews ...*base.Review) error {
// Rollback when migrating failed, this will rollback all the changes.
func (g *GiteaLocalUploader) Rollback() error {
if g.repo != nil && g.repo.ID > 0 {
g.gitRepo.Close()
if err := models.DeleteRepository(g.doer, g.repo.OwnerID, g.repo.ID); err != nil {
return err
}

View File

@@ -264,34 +264,29 @@ func (g *GithubDownloaderV3) GetLabels() ([]*base.Label, error) {
}
func (g *GithubDownloaderV3) convertGithubRelease(rel *github.RepositoryRelease) *base.Release {
var (
name string
desc string
)
if rel.Body != nil {
desc = *rel.Body
}
if rel.Name != nil {
name = *rel.Name
}
var email string
if rel.Author.Email != nil {
email = *rel.Author.Email
}
r := &base.Release{
TagName: *rel.TagName,
TargetCommitish: *rel.TargetCommitish,
Name: name,
Body: desc,
Draft: *rel.Draft,
Prerelease: *rel.Prerelease,
Created: rel.CreatedAt.Time,
PublisherID: *rel.Author.ID,
PublisherName: *rel.Author.Login,
PublisherEmail: email,
Published: rel.PublishedAt.Time,
}
if rel.Body != nil {
r.Body = *rel.Body
}
if rel.Name != nil {
r.Name = *rel.Name
}
if rel.Author.Email != nil {
r.PublisherEmail = *rel.Author.Email
}
if rel.PublishedAt != nil {
r.Published = rel.PublishedAt.Time
}
for _, asset := range rel.Assets {
@@ -306,18 +301,17 @@ func (g *GithubDownloaderV3) convertGithubRelease(rel *github.RepositoryRelease)
Updated: asset.UpdatedAt.Time,
DownloadFunc: func() (io.ReadCloser, error) {
g.sleep()
asset, redir, err := g.client.Repositories.DownloadReleaseAsset(g.ctx, g.repoOwner, g.repoName, assetID, nil)
asset, redirectURL, err := g.client.Repositories.DownloadReleaseAsset(g.ctx, g.repoOwner, g.repoName, assetID, nil)
if err != nil {
return nil, err
}
err = g.RefreshRate()
if err != nil {
if err := g.RefreshRate(); err != nil {
log.Error("g.client.RateLimits: %s", err)
}
if asset == nil {
if redir != "" {
if redirectURL != "" {
g.sleep()
req, err := http.NewRequestWithContext(g.ctx, "GET", redir, nil)
req, err := http.NewRequestWithContext(g.ctx, "GET", redirectURL, nil)
if err != nil {
return nil, err
}

View File

@@ -54,7 +54,6 @@ func (m *mailNotifier) NotifyNewIssue(issue *models.Issue, mentions []*models.Us
func (m *mailNotifier) NotifyIssueChangeStatus(doer *models.User, issue *models.Issue, actionComment *models.Comment, isClosed bool) {
var actionType models.ActionType
issue.Content = ""
if issue.IsPull {
if isClosed {
actionType = models.ActionClosePullRequest
@@ -120,7 +119,6 @@ func (m *mailNotifier) NotifyMergePullRequest(pr *models.PullRequest, doer *mode
log.Error("pr.LoadIssue: %v", err)
return
}
pr.Issue.Content = ""
if err := mailer.MailParticipants(pr.Issue, doer, models.ActionMergePullRequest, nil); err != nil {
log.Error("MailParticipants: %v", err)
}
@@ -147,7 +145,6 @@ func (m *mailNotifier) NotifyPullRequestPushCommits(doer *models.User, pr *model
if err := comment.LoadPushCommits(); err != nil {
log.Error("comment.LoadPushCommits: %v", err)
}
comment.Content = ""
m.NotifyCreateIssueComment(doer, comment.Issue.Repo, comment.Issue, comment, nil)
}

View File

@@ -0,0 +1,60 @@
// Copyright 2020 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package private
import (
"fmt"
"io/ioutil"
"net/http"
"time"
"code.gitea.io/gitea/modules/setting"
jsoniter "github.com/json-iterator/go"
)
// RestoreParams holds the data needed to restore a repository
type RestoreParams struct {
RepoDir string
OwnerName string
RepoName string
Units []string
}
// RestoreRepo calls the internal RestoreRepo function
func RestoreRepo(repoDir, ownerName, repoName string, units []string) (int, string) {
reqURL := setting.LocalURL + "api/internal/restore_repo"
req := newInternalRequest(reqURL, "POST")
req.SetTimeout(3*time.Second, 0) // the restore can take a long time, so set no response timeout
req = req.Header("Content-Type", "application/json")
json := jsoniter.ConfigCompatibleWithStandardLibrary
jsonBytes, _ := json.Marshal(RestoreParams{
RepoDir: repoDir,
OwnerName: ownerName,
RepoName: repoName,
Units: units,
})
req.Body(jsonBytes)
resp, err := req.Response()
if err != nil {
return http.StatusInternalServerError, fmt.Sprintf("Unable to contact gitea: %v, could you confirm it's running?", err.Error())
}
defer resp.Body.Close()
if resp.StatusCode != 200 {
var ret = struct {
Err string `json:"err"`
}{}
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return http.StatusInternalServerError, fmt.Sprintf("Response body error: %v", err.Error())
}
if err := json.Unmarshal(body, &ret); err != nil {
return http.StatusInternalServerError, fmt.Sprintf("Response body Unmarshal error: %v", err.Error())
}
return resp.StatusCode, ret.Err
}
return http.StatusOK, fmt.Sprintf("Restore repo %s/%s successfully", ownerName, repoName)
}

View File
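A hedged usage sketch for the new internal-API helper above; the dump directory and repository names are illustrative only, and the call assumes a running Gitea instance to talk to.

package main

import (
	"fmt"

	"code.gitea.io/gitea/modules/private"
)

func main() {
	// Ask the running instance to restore a previously dumped repository,
	// limited to the listed units.
	status, msg := private.RestoreRepo(
		"/tmp/repo-dump/user2/repo1", // illustrative dump directory
		"user2", "repo1",
		[]string{"wiki", "issues", "pull_requests"},
	)
	fmt.Println(status, msg)
}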

@@ -198,17 +198,20 @@ func (m *Manager) FlushAll(baseCtx context.Context, timeout time.Duration) error
wg.Done()
}(mq)
} else {
log.Debug("Queue: %s is non-empty but is not flushable - adding 100 millisecond wait", mq.Name)
go func() {
<-time.After(100 * time.Millisecond)
log.Debug("Queue: %s is non-empty but is not flushable", mq.Name)
wg.Done()
}()
}
}
if allEmpty {
log.Debug("All queues are empty")
break
}
// Ensure there are always at least 100ms between loops, but not more if we've actually been doing some flushing,
// but don't delay cancellation here.
select {
case <-ctx.Done():
case <-time.After(100 * time.Millisecond):
}
wg.Wait()
}
return nil

View File

@@ -117,6 +117,8 @@ var (
GracefulRestartable bool
GracefulHammerTime time.Duration
StartupTimeout time.Duration
PerWriteTimeout = 30 * time.Second
PerWritePerKbTimeout = 10 * time.Second
StaticURLPrefix string
AbsoluteAssetURL string
@@ -147,6 +149,8 @@ var (
TrustedUserCAKeys []string `ini:"SSH_TRUSTED_USER_CA_KEYS"`
TrustedUserCAKeysFile string `ini:"SSH_TRUSTED_USER_CA_KEYS_FILENAME"`
TrustedUserCAKeysParsed []gossh.PublicKey `ini:"-"`
PerWriteTimeout time.Duration `ini:"SSH_PER_WRITE_TIMEOUT"`
PerWritePerKbTimeout time.Duration `ini:"SSH_PER_WRITE_PER_KB_TIMEOUT"`
}{
Disabled: false,
StartBuiltinServer: false,
@@ -159,6 +163,8 @@ var (
MinimumKeySizeCheck: true,
MinimumKeySizes: map[string]int{"ed25519": 256, "ed25519-sk": 256, "ecdsa": 256, "ecdsa-sk": 256, "rsa": 2048},
ServerHostKeys: []string{"ssh/gitea.rsa", "ssh/gogs.rsa"},
PerWriteTimeout: PerWriteTimeout,
PerWritePerKbTimeout: PerWritePerKbTimeout,
}
// Security settings
@@ -607,6 +613,8 @@ func NewContext() {
GracefulRestartable = sec.Key("ALLOW_GRACEFUL_RESTARTS").MustBool(true)
GracefulHammerTime = sec.Key("GRACEFUL_HAMMER_TIME").MustDuration(60 * time.Second)
StartupTimeout = sec.Key("STARTUP_TIMEOUT").MustDuration(0 * time.Second)
PerWriteTimeout = sec.Key("PER_WRITE_TIMEOUT").MustDuration(PerWriteTimeout)
PerWritePerKbTimeout = sec.Key("PER_WRITE_PER_KB_TIMEOUT").MustDuration(PerWritePerKbTimeout)
defaultAppURL := string(Protocol) + "://" + Domain
if (Protocol == HTTP && HTTPPort != "80") || (Protocol == HTTPS && HTTPPort != "443") {
@@ -772,6 +780,8 @@ func NewContext() {
}
SSH.ExposeAnonymous = sec.Key("SSH_EXPOSE_ANONYMOUS").MustBool(false)
SSH.PerWriteTimeout = sec.Key("SSH_PER_WRITE_TIMEOUT").MustDuration(PerWriteTimeout)
SSH.PerWritePerKbTimeout = sec.Key("SSH_PER_WRITE_PER_KB_TIMEOUT").MustDuration(PerWritePerKbTimeout)
if err = Cfg.Section("oauth2").MapTo(&OAuth2); err != nil {
log.Fatal("Failed to OAuth2 settings: %v", err)

View File

@@ -7,12 +7,15 @@ package ssh
import (
"code.gitea.io/gitea/modules/graceful"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/setting"
"github.com/gliderlabs/ssh"
)
func listen(server *ssh.Server) {
gracefulServer := graceful.NewServer("tcp", server.Addr, "SSH")
gracefulServer.PerWriteTimeout = setting.SSH.PerWriteTimeout
gracefulServer.PerWritePerKbTimeout = setting.SSH.PerWritePerKbTimeout
err := gracefulServer.ListenAndServe(server.Serve)
if err != nil {

View File

@@ -31,6 +31,8 @@ type CreateOrgOption struct {
RepoAdminChangeTeamAccess bool `json:"repo_admin_change_team_access"`
}
// TODO: make EditOrgOption fields optional after https://gitea.com/go-chi/binding/pulls/5 got merged
// EditOrgOption options for editing an organization
type EditOrgOption struct {
FullName string `json:"full_name"`
@@ -40,5 +42,5 @@ type EditOrgOption struct {
// possible values are `public`, `limited` or `private`
// enum: public,limited,private
Visibility string `json:"visibility" binding:"In(,public,limited,private)"`
RepoAdminChangeTeamAccess bool `json:"repo_admin_change_team_access"`
RepoAdminChangeTeamAccess *bool `json:"repo_admin_change_team_access"`
}

View File
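A minimal standalone sketch of why RepoAdminChangeTeamAccess becomes a pointer: with *bool an omitted JSON field stays nil and the current setting is left untouched, while an explicit true/false updates it. The local struct here only mirrors the API type for illustration.

package main

import (
	"encoding/json"
	"fmt"
)

type editOrgOption struct {
	RepoAdminChangeTeamAccess *bool `json:"repo_admin_change_team_access"`
}

func main() {
	var omitted, explicit editOrgOption
	_ = json.Unmarshal([]byte(`{}`), &omitted)
	_ = json.Unmarshal([]byte(`{"repo_admin_change_team_access": false}`), &explicit)
	fmt.Println(omitted.RepoAdminChangeTeamAccess == nil) // true: leave the org setting unchanged
	fmt.Println(*explicit.RepoAdminChangeTeamAccess)      // false: switch it off explicitly
}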

@@ -13,8 +13,11 @@ import (
"code.gitea.io/gitea/modules/migrations/base"
"code.gitea.io/gitea/modules/queue"
repo_module "code.gitea.io/gitea/modules/repository"
"code.gitea.io/gitea/modules/secret"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/timeutil"
"code.gitea.io/gitea/modules/util"
jsoniter "github.com/json-iterator/go"
)
@@ -65,6 +68,24 @@ func MigrateRepository(doer, u *models.User, opts base.MigrateOptions) error {
// CreateMigrateTask creates a migrate task
func CreateMigrateTask(doer, u *models.User, opts base.MigrateOptions) (*models.Task, error) {
// encrypt credentials for persistence
var err error
opts.CloneAddrEncrypted, err = secret.EncryptSecret(setting.SecretKey, opts.CloneAddr)
if err != nil {
return nil, err
}
opts.CloneAddr = util.SanitizeURLCredentials(opts.CloneAddr, true)
opts.AuthPasswordEncrypted, err = secret.EncryptSecret(setting.SecretKey, opts.AuthPassword)
if err != nil {
return nil, err
}
opts.AuthPassword = ""
opts.AuthTokenEncrypted, err = secret.EncryptSecret(setting.SecretKey, opts.AuthToken)
if err != nil {
return nil, err
}
opts.AuthToken = ""
json := jsoniter.ConfigCompatibleWithStandardLibrary
bs, err := json.Marshal(&opts)
if err != nil {

View File
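For reference, a minimal sketch of the round trip these encrypted fields enable, assuming a matching secret.DecryptSecret helper in the same package; the key and token values are illustrative only.

package main

import (
	"fmt"

	"code.gitea.io/gitea/modules/secret"
)

func main() {
	key := "example-secret-key" // Gitea itself uses setting.SecretKey
	enc, err := secret.EncryptSecret(key, "s3cr3t-token")
	if err != nil {
		panic(err)
	}
	// The encrypted form is what gets persisted in the task table; the
	// plaintext is recovered only when the migration task actually runs.
	dec, err := secret.DecryptSecret(key, enc)
	if err != nil {
		panic(err)
	}
	fmt.Println(dec == "s3cr3t-token") // true
}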

@@ -149,7 +149,7 @@ func SetCookie(resp http.ResponseWriter, name string, value string, others ...in
if len(others) > 2 {
if v, ok := others[2].(string); ok && len(v) > 0 {
cookie.Domain = v
} else if v, ok := others[1].(func(*http.Cookie)); ok {
} else if v, ok := others[2].(func(*http.Cookie)); ok {
v(&cookie)
}
}
@@ -170,7 +170,7 @@ func SetCookie(resp http.ResponseWriter, name string, value string, others ...in
if len(others) > 4 {
if v, ok := others[4].(bool); ok && v {
cookie.HttpOnly = true
} else if v, ok := others[1].(func(*http.Cookie)); ok {
} else if v, ok := others[4].(func(*http.Cookie)); ok {
v(&cookie)
}
}
@@ -179,7 +179,7 @@ func SetCookie(resp http.ResponseWriter, name string, value string, others ...in
if v, ok := others[5].(time.Time); ok {
cookie.Expires = v
cookie.RawExpires = v.Format(time.UnixDate)
} else if v, ok := others[1].(func(*http.Cookie)); ok {
} else if v, ok := others[5].(func(*http.Cookie)); ok {
v(&cookie)
}
}

View File

@@ -2281,6 +2281,7 @@ auths.allowed_domains_helper = Leave empty to allow all domains. Separate multip
auths.enable_tls = Enable TLS Encryption
auths.skip_tls_verify = Skip TLS Verify
auths.pam_service_name = PAM Service Name
auths.pam_email_domain = PAM Email Domain (optional)
auths.oauth2_provider = OAuth2 Provider
auths.oauth2_icon_url = Icon URL
auths.oauth2_clientID = Client ID (Key)

View File

@@ -239,6 +239,7 @@ func NewAuthSourcePost(ctx *context.Context) {
case models.LoginPAM:
config = &models.PAMConfig{
ServiceName: form.PAMServiceName,
EmailDomain: form.PAMEmailDomain,
}
case models.LoginOAuth2:
config = parseOAuth2Config(form)
@@ -346,6 +347,7 @@ func EditAuthSourcePost(ctx *context.Context) {
case models.LoginPAM:
config = &models.PAMConfig{
ServiceName: form.PAMServiceName,
EmailDomain: form.PAMEmailDomain,
}
case models.LoginOAuth2:
config = parseOAuth2Config(form)

View File

@@ -46,6 +46,10 @@ func DeleteRepo(ctx *context.Context) {
return
}
if ctx.Repo != nil && ctx.Repo.GitRepo != nil && ctx.Repo.Repository != nil && ctx.Repo.Repository.ID == repo.ID {
ctx.Repo.GitRepo.Close()
}
if err := repo_service.DeleteRepository(ctx.User, repo); err != nil {
ctx.ServerError("DeleteRepository", err)
return

View File

@@ -557,6 +557,7 @@ func Routes() *web.Route {
Gclifetime: setting.SessionConfig.Gclifetime,
Maxlifetime: setting.SessionConfig.Maxlifetime,
Secure: setting.SessionConfig.Secure,
SameSite: setting.SessionConfig.SameSite,
Domain: setting.SessionConfig.Domain,
}))
m.Use(securityHeaders())
@@ -892,7 +893,7 @@ func Routes() *web.Route {
Post(reqToken(), mustNotBeArchived, bind(api.CreatePullRequestOption{}), repo.CreatePullRequest)
m.Group("/{index}", func() {
m.Combo("").Get(repo.GetPullRequest).
Patch(reqToken(), reqRepoWriter(models.UnitTypePullRequests), bind(api.EditPullRequestOption{}), repo.EditPullRequest)
Patch(reqToken(), bind(api.EditPullRequestOption{}), repo.EditPullRequest)
m.Get(".diff", repo.DownloadPullDiff)
m.Get(".patch", repo.DownloadPullPatch)
m.Post("/update", reqToken(), repo.UpdatePullRequest)

View File

@@ -264,7 +264,13 @@ func Edit(ctx *context.APIContext) {
if form.Visibility != "" {
org.Visibility = api.VisibilityModes[form.Visibility]
}
if err := models.UpdateUserCols(org, "full_name", "description", "website", "location", "visibility"); err != nil {
if form.RepoAdminChangeTeamAccess != nil {
org.RepoAdminChangeTeamAccess = *form.RepoAdminChangeTeamAccess
}
if err := models.UpdateUserCols(org,
"full_name", "description", "website", "location",
"visibility", "repo_admin_change_team_access",
); err != nil {
ctx.Error(http.StatusInternalServerError, "EditOrganization", err)
return
}

View File

@@ -6,6 +6,7 @@
package repo
import (
"errors"
"fmt"
"net/http"
@@ -13,7 +14,6 @@ import (
"code.gitea.io/gitea/modules/context"
"code.gitea.io/gitea/modules/convert"
"code.gitea.io/gitea/modules/git"
"code.gitea.io/gitea/modules/log"
repo_module "code.gitea.io/gitea/modules/repository"
api "code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/web"
@@ -117,62 +117,20 @@ func DeleteBranch(ctx *context.APIContext) {
branchName := ctx.Params("*")
if ctx.Repo.Repository.DefaultBranch == branchName {
ctx.Error(http.StatusForbidden, "DefaultBranch", fmt.Errorf("can not delete default branch"))
return
}
isProtected, err := ctx.Repo.Repository.IsProtectedBranch(branchName, ctx.User)
if err != nil {
ctx.InternalServerError(err)
return
}
if isProtected {
ctx.Error(http.StatusForbidden, "IsProtectedBranch", fmt.Errorf("branch protected"))
return
}
branch, err := repo_module.GetBranch(ctx.Repo.Repository, branchName)
if err != nil {
if git.IsErrBranchNotExist(err) {
if err := repo_service.DeleteBranch(ctx.User, ctx.Repo.Repository, ctx.Repo.GitRepo, branchName); err != nil {
switch {
case git.IsErrBranchNotExist(err):
ctx.NotFound(err)
} else {
ctx.Error(http.StatusInternalServerError, "GetBranch", err)
}
return
}
c, err := branch.GetCommit()
if err != nil {
ctx.Error(http.StatusInternalServerError, "GetCommit", err)
return
}
if err := ctx.Repo.GitRepo.DeleteBranch(branchName, git.DeleteBranchOptions{
Force: true,
}); err != nil {
case errors.Is(err, repo_service.ErrBranchIsDefault):
ctx.Error(http.StatusForbidden, "DefaultBranch", fmt.Errorf("can not delete default branch"))
case errors.Is(err, repo_service.ErrBranchIsProtected):
ctx.Error(http.StatusForbidden, "IsProtectedBranch", fmt.Errorf("branch protected"))
default:
ctx.Error(http.StatusInternalServerError, "DeleteBranch", err)
}
return
}
// Don't return error below this
if err := repo_service.PushUpdate(
&repo_module.PushUpdateOptions{
RefFullName: git.BranchPrefix + branchName,
OldCommitID: c.ID.String(),
NewCommitID: git.EmptySHA,
PusherID: ctx.User.ID,
PusherName: ctx.User.Name,
RepoUserName: ctx.Repo.Owner.Name,
RepoName: ctx.Repo.Repository.Name,
}); err != nil {
log.Error("Update: %v", err)
}
if err := ctx.Repo.Repository.AddDeletedBranch(branchName, c.ID.String(), ctx.User.ID); err != nil {
log.Warn("AddDeletedBranch: %v", err)
}
ctx.Status(http.StatusNoContent)
}

View File

@@ -885,6 +885,10 @@ func Delete(ctx *context.APIContext) {
return
}
if ctx.Repo.GitRepo != nil {
ctx.Repo.GitRepo.Close()
}
if err := repo_service.DeleteRepository(ctx.User, repo); err != nil {
ctx.Error(http.StatusInternalServerError, "DeleteRepository", err)
return

View File

@@ -55,7 +55,7 @@ func parseTime(value string) (int64, error) {
// prepareQueryArg unescapes and trims a query arg
func prepareQueryArg(ctx *context.APIContext, name string) (value string, err error) {
value, err = url.PathUnescape(ctx.Query(name))
value = strings.Trim(value, " ")
value = strings.TrimSpace(value)
return
}

View File

@@ -22,6 +22,7 @@ import (
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/templates"
"code.gitea.io/gitea/modules/translation"
"code.gitea.io/gitea/modules/user"
"code.gitea.io/gitea/modules/util"
"code.gitea.io/gitea/modules/web"
@@ -61,6 +62,8 @@ func InstallInit(next http.Handler) http.Handler {
"DbOptions": setting.SupportedDatabases,
"i18n": locale,
"Language": locale.Language(),
"Lang": locale.Language(),
"AllLangs": translation.AllLangs(),
"CurrentURL": setting.AppSubURL + req.URL.RequestURI(),
"PageStartTime": startTime,
"TmplLoadTimes": func() string {
@@ -69,6 +72,12 @@ func InstallInit(next http.Handler) http.Handler {
"PasswordHashAlgorithms": models.AvailableHashAlgorithms,
},
}
for _, lang := range translation.AllLangs() {
if lang.Lang == locale.Language() {
ctx.Data["LangName"] = lang.Name
break
}
}
ctx.Req = context.WithContext(req, &ctx)
next.ServeHTTP(resp, ctx.Req)
})

View File

@@ -51,6 +51,7 @@ func SettingsPost(ctx *context.Context) {
}
org := ctx.Org.Organization
nameChanged := org.Name != form.Name
// Check if organization name has been changed.
if org.LowerName != strings.ToLower(form.Name) {
@@ -74,7 +75,9 @@ func SettingsPost(ctx *context.Context) {
// reset ctx.org.OrgLink with new name
ctx.Org.OrgLink = setting.AppSubURL + "/org/" + form.Name
log.Trace("Organization name changed: %s -> %s", org.Name, form.Name)
nameChanged = false
}
// In case it's just a case change.
org.Name = form.Name
org.LowerName = strings.ToLower(form.Name)
@@ -104,11 +107,17 @@ func SettingsPost(ctx *context.Context) {
return
}
for _, repo := range org.Repos {
repo.OwnerName = org.Name
if err := models.UpdateRepository(repo, true); err != nil {
ctx.ServerError("UpdateRepository", err)
return
}
}
} else if nameChanged {
if err := models.UpdateRepositoryOwnerNames(org.ID, org.Name); err != nil {
ctx.ServerError("UpdateRepository", err)
return
}
}
log.Trace("Organization setting updated: %s", org.Name)

View File

@@ -69,6 +69,7 @@ func Routes() *web.Route {
r.Post("/manager/add-logger", bind(private.LoggerOptions{}), AddLogger)
r.Post("/manager/remove-logger/{group}/{name}", RemoveLogger)
r.Post("/mail/send", SendEmail)
r.Post("/restore_repo", RestoreRepo)
return r
}

View File

@@ -0,0 +1,51 @@
// Copyright 2021 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package private
import (
"io/ioutil"
myCtx "code.gitea.io/gitea/modules/context"
"code.gitea.io/gitea/modules/migrations"
jsoniter "github.com/json-iterator/go"
)
// RestoreRepo restores a repository from the posted data
func RestoreRepo(ctx *myCtx.PrivateContext) {
json := jsoniter.ConfigCompatibleWithStandardLibrary
bs, err := ioutil.ReadAll(ctx.Req.Body)
if err != nil {
ctx.JSON(500, map[string]string{
"err": err.Error(),
})
return
}
var params = struct {
RepoDir string
OwnerName string
RepoName string
Units []string
}{}
if err = json.Unmarshal(bs, &params); err != nil {
ctx.JSON(500, map[string]string{
"err": err.Error(),
})
return
}
if err := migrations.RestoreRepository(
ctx.Req.Context(),
params.RepoDir,
params.OwnerName,
params.RepoName,
params.Units,
); err != nil {
ctx.JSON(500, map[string]string{
"err": err.Error(),
})
} else {
ctx.Status(200)
}
}

View File

@@ -6,6 +6,7 @@
package repo
import (
"errors"
"fmt"
"strings"
@@ -82,34 +83,23 @@ func Branches(ctx *context.Context) {
func DeleteBranchPost(ctx *context.Context) {
defer redirect(ctx)
branchName := ctx.Query("name")
if branchName == ctx.Repo.Repository.DefaultBranch {
log.Debug("DeleteBranch: Can't delete default branch '%s'", branchName)
ctx.Flash.Error(ctx.Tr("repo.branch.default_deletion_failed", branchName))
return
}
isProtected, err := ctx.Repo.Repository.IsProtectedBranch(branchName, ctx.User)
if err != nil {
log.Error("DeleteBranch: %v", err)
ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", branchName))
return
}
if isProtected {
log.Debug("DeleteBranch: Can't delete protected branch '%s'", branchName)
ctx.Flash.Error(ctx.Tr("repo.branch.protected_deletion_failed", branchName))
return
}
if !ctx.Repo.GitRepo.IsBranchExist(branchName) {
if err := repo_service.DeleteBranch(ctx.User, ctx.Repo.Repository, ctx.Repo.GitRepo, branchName); err != nil {
switch {
case git.IsErrBranchNotExist(err):
log.Debug("DeleteBranch: Can't delete non existing branch '%s'", branchName)
ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", branchName))
return
}
if err := deleteBranch(ctx, branchName); err != nil {
case errors.Is(err, repo_service.ErrBranchIsDefault):
log.Debug("DeleteBranch: Can't delete default branch '%s'", branchName)
ctx.Flash.Error(ctx.Tr("repo.branch.default_deletion_failed", branchName))
case errors.Is(err, repo_service.ErrBranchIsProtected):
log.Debug("DeleteBranch: Can't delete protected branch '%s'", branchName)
ctx.Flash.Error(ctx.Tr("repo.branch.protected_deletion_failed", branchName))
default:
log.Error("DeleteBranch: %v", err)
ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", branchName))
}
return
}
@@ -168,41 +158,6 @@ func redirect(ctx *context.Context) {
})
}
func deleteBranch(ctx *context.Context, branchName string) error {
commit, err := ctx.Repo.GitRepo.GetBranchCommit(branchName)
if err != nil {
log.Error("GetBranchCommit: %v", err)
return err
}
if err := ctx.Repo.GitRepo.DeleteBranch(branchName, git.DeleteBranchOptions{
Force: true,
}); err != nil {
log.Error("DeleteBranch: %v", err)
return err
}
// Don't return error below this
if err := repo_service.PushUpdate(
&repo_module.PushUpdateOptions{
RefFullName: git.BranchPrefix + branchName,
OldCommitID: commit.ID.String(),
NewCommitID: git.EmptySHA,
PusherID: ctx.User.ID,
PusherName: ctx.User.Name,
RepoUserName: ctx.Repo.Owner.Name,
RepoName: ctx.Repo.Repository.Name,
}); err != nil {
log.Error("Update: %v", err)
}
if err := ctx.Repo.Repository.AddDeletedBranch(branchName, commit.ID.String(), ctx.User.ID); err != nil {
log.Warn("AddDeletedBranch: %v", err)
}
return nil
}
// loadBranches loads branches from the repository limited by page & pageSize.
// NOTE: May write to context on error.
func loadBranches(ctx *context.Context, skip, limit int) ([]*Branch, int) {

View File

@@ -447,7 +447,26 @@ func (h *serviceHandler) setHeaderCacheForever() {
h.w.Header().Set("Cache-Control", "public, max-age=31536000")
}
func containsParentDirectorySeparator(v string) bool {
if !strings.Contains(v, "..") {
return false
}
for _, ent := range strings.FieldsFunc(v, isSlashRune) {
if ent == ".." {
return true
}
}
return false
}
func isSlashRune(r rune) bool { return r == '/' || r == '\\' }
func (h *serviceHandler) sendFile(contentType, file string) {
if containsParentDirectorySeparator(file) {
log.Error("request file path contains invalid path: %v", file)
h.w.WriteHeader(http.StatusBadRequest)
return
}
reqFile := path.Join(h.dir, file)
fi, err := os.Stat(reqFile)

routers/repo/http_test.go Normal file
View File

@@ -0,0 +1,43 @@
// Copyright 2021 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package repo
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestContainsParentDirectorySeparator(t *testing.T) {
tests := []struct {
v string
b bool
}{
{
v: `user2/repo1/info/refs`,
b: false,
},
{
v: `user2/repo1/HEAD`,
b: false,
},
{
v: `user2/repo1/some.../strange_file...mp3`,
b: false,
},
{
v: `user2/repo1/../../custom/conf/app.ini`,
b: true,
},
{
v: `user2/repo1/objects/info/..\..\..\..\custom\conf\app.ini`,
b: true,
},
}
for i := range tests {
assert.EqualValues(t, tests[i].b, containsParentDirectorySeparator(tests[i].v))
}
}

View File

@@ -9,6 +9,7 @@ package repo
import (
"container/list"
"crypto/subtle"
"errors"
"fmt"
"net/http"
"path"
@@ -22,7 +23,6 @@ import (
"code.gitea.io/gitea/modules/git"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/notification"
repo_module "code.gitea.io/gitea/modules/repository"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/upload"
@@ -1186,20 +1186,6 @@ func CleanUpPullRequest(ctx *context.Context) {
})
}()
if pr.HeadBranch == pr.HeadRepo.DefaultBranch || !gitRepo.IsBranchExist(pr.HeadBranch) {
ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", fullBranchName))
return
}
// Check if branch is not protected
if protected, err := pr.HeadRepo.IsProtectedBranch(pr.HeadBranch, ctx.User); err != nil || protected {
if err != nil {
log.Error("HeadRepo.IsProtectedBranch: %v", err)
}
ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", fullBranchName))
return
}
// Check if branch has no new commits
headCommitID, err := gitBaseRepo.GetRefCommitID(pr.GetGitRefName())
if err != nil {
@@ -1218,25 +1204,19 @@ func CleanUpPullRequest(ctx *context.Context) {
return
}
if err := gitRepo.DeleteBranch(pr.HeadBranch, git.DeleteBranchOptions{
Force: true,
}); err != nil {
if err := repo_service.DeleteBranch(ctx.User, pr.HeadRepo, gitRepo, pr.HeadBranch); err != nil {
switch {
case git.IsErrBranchNotExist(err):
ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", fullBranchName))
case errors.Is(err, repo_service.ErrBranchIsDefault):
ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", fullBranchName))
case errors.Is(err, repo_service.ErrBranchIsProtected):
ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", fullBranchName))
default:
log.Error("DeleteBranch: %v", err)
ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", fullBranchName))
return
}
if err := repo_service.PushUpdate(
&repo_module.PushUpdateOptions{
RefFullName: git.BranchPrefix + pr.HeadBranch,
OldCommitID: branchCommitID,
NewCommitID: git.EmptySHA,
PusherID: ctx.User.ID,
PusherName: ctx.User.Name,
RepoUserName: pr.HeadRepo.Owner.Name,
RepoName: pr.HeadRepo.Name,
}); err != nil {
log.Error("Update: %v", err)
return
}
if err := models.AddDeletePRBranchComment(ctx.User, pr.BaseRepo, issue.ID, pr.HeadBranch); err != nil {

View File

@@ -539,6 +539,11 @@ func SettingsPost(ctx *context.Context) {
return
}
// Close the gitrepository before doing this.
if ctx.Repo.GitRepo != nil {
ctx.Repo.GitRepo.Close()
}
if err := repo_service.DeleteRepository(ctx.User, ctx.Repo.Repository); err != nil {
ctx.ServerError("DeleteRepository", err)
return

routers/routes/goget.go Normal file
View File

@@ -0,0 +1,86 @@
// Copyright 2021 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package routes
import (
"net/http"
"net/url"
"path"
"strings"
"code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/context"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/util"
"github.com/unknwon/com"
)
func goGet(ctx *context.Context) {
if ctx.Req.Method != "GET" || ctx.Query("go-get") != "1" || len(ctx.Req.URL.Query()) > 1 {
return
}
parts := strings.SplitN(ctx.Req.URL.EscapedPath(), "/", 4)
if len(parts) < 3 {
return
}
ownerName := parts[1]
repoName := parts[2]
// Quickly respond with the appropriate go-get meta and status 200,
// regardless of whether the user has access to the repository
// or whether the repository exists at all.
// This is in particular a workaround for the "go get" command,
// which does not respect the .netrc file.
trimmedRepoName := strings.TrimSuffix(repoName, ".git")
if ownerName == "" || trimmedRepoName == "" {
_, _ = ctx.Write([]byte(`<!doctype html>
<html>
<body>
invalid import path
</body>
</html>
`))
ctx.Status(400)
return
}
branchName := setting.Repository.DefaultBranch
repo, err := models.GetRepositoryByOwnerAndName(ownerName, repoName)
if err == nil && len(repo.DefaultBranch) > 0 {
branchName = repo.DefaultBranch
}
prefix := setting.AppURL + path.Join(url.PathEscape(ownerName), url.PathEscape(repoName), "src", "branch", util.PathEscapeSegments(branchName))
appURL, _ := url.Parse(setting.AppURL)
insecure := ""
if appURL.Scheme == string(setting.HTTP) {
insecure = "--insecure "
}
ctx.Header().Set("Content-Type", "text/html")
ctx.Status(http.StatusOK)
_, _ = ctx.Write([]byte(com.Expand(`<!doctype html>
<html>
<head>
<meta name="go-import" content="{GoGetImport} git {CloneLink}">
<meta name="go-source" content="{GoGetImport} _ {GoDocDirectory} {GoDocFile}">
</head>
<body>
go get {Insecure}{GoGetImport}
</body>
</html>
`, map[string]string{
"GoGetImport": context.ComposeGoGetImport(ownerName, trimmedRepoName),
"CloneLink": models.ComposeHTTPSCloneURL(ownerName, repoName),
"GoDocDirectory": prefix + "{/dir}",
"GoDocFile": prefix + "{/dir}/{file}#L{line}",
"Insecure": insecure,
})))
}

View File

@@ -89,6 +89,7 @@ func InstallRoutes() *web.Route {
Gclifetime: setting.SessionConfig.Gclifetime,
Maxlifetime: setting.SessionConfig.Maxlifetime,
Secure: setting.SessionConfig.Secure,
SameSite: setting.SessionConfig.SameSite,
Domain: setting.SessionConfig.Domain,
}))
@@ -110,7 +111,7 @@ func InstallRoutes() *web.Route {
r.Get("/", routers.Install)
r.Post("/", web.Bind(forms.InstallForm{}), routers.InstallPost)
r.NotFound(func(w http.ResponseWriter, req *http.Request) {
http.Redirect(w, req, setting.AppURL, 302)
http.Redirect(w, req, setting.AppURL, http.StatusFound)
})
return r
}

View File

@@ -8,7 +8,6 @@ import (
"encoding/gob"
"fmt"
"net/http"
"net/url"
"os"
"path"
"strings"
@@ -24,7 +23,6 @@ import (
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/storage"
"code.gitea.io/gitea/modules/templates"
"code.gitea.io/gitea/modules/util"
"code.gitea.io/gitea/modules/validation"
"code.gitea.io/gitea/modules/web"
"code.gitea.io/gitea/routers"
@@ -51,7 +49,6 @@ import (
"github.com/go-chi/cors"
"github.com/prometheus/client_golang/prometheus"
"github.com/tstranex/u2f"
"github.com/unknwon/com"
)
const (
@@ -138,6 +135,7 @@ func WebRoutes() *web.Route {
Gclifetime: setting.SessionConfig.Gclifetime,
Maxlifetime: setting.SessionConfig.Maxlifetime,
Secure: setting.SessionConfig.Secure,
SameSite: setting.SessionConfig.SameSite,
Domain: setting.SessionConfig.Domain,
}))
@@ -190,6 +188,7 @@ func WebRoutes() *web.Route {
ctx.Data["UnitPullsGlobalDisabled"] = models.UnitTypePullRequests.UnitGlobalDisabled()
ctx.Data["UnitProjectsGlobalDisabled"] = models.UnitTypeProjects.UnitGlobalDisabled()
})
r.Use(goGet)
// for health check
r.Head("/", func(w http.ResponseWriter, req *http.Request) {
@@ -229,67 +228,6 @@ func WebRoutes() *web.Route {
return r
}
func goGet(ctx *context.Context) {
if ctx.Query("go-get") != "1" {
return
}
// Quick responses appropriate go-get meta with status 200
// regardless of if user have access to the repository,
// or the repository does not exist at all.
// This is particular a workaround for "go get" command which does not respect
// .netrc file.
ownerName := ctx.Params(":username")
repoName := ctx.Params(":reponame")
trimmedRepoName := strings.TrimSuffix(repoName, ".git")
if ownerName == "" || trimmedRepoName == "" {
_, _ = ctx.Write([]byte(`<!doctype html>
<html>
<body>
invalid import path
</body>
</html>
`))
ctx.Status(400)
return
}
branchName := setting.Repository.DefaultBranch
repo, err := models.GetRepositoryByOwnerAndName(ownerName, repoName)
if err == nil && len(repo.DefaultBranch) > 0 {
branchName = repo.DefaultBranch
}
prefix := setting.AppURL + path.Join(url.PathEscape(ownerName), url.PathEscape(repoName), "src", "branch", util.PathEscapeSegments(branchName))
appURL, _ := url.Parse(setting.AppURL)
insecure := ""
if appURL.Scheme == string(setting.HTTP) {
insecure = "--insecure "
}
ctx.Header().Set("Content-Type", "text/html")
ctx.Status(http.StatusOK)
_, _ = ctx.Write([]byte(com.Expand(`<!doctype html>
<html>
<head>
<meta name="go-import" content="{GoGetImport} git {CloneLink}">
<meta name="go-source" content="{GoGetImport} _ {GoDocDirectory} {GoDocFile}">
</head>
<body>
go get {Insecure}{GoGetImport}
</body>
</html>
`, map[string]string{
"GoGetImport": context.ComposeGoGetImport(ownerName, trimmedRepoName),
"CloneLink": models.ComposeHTTPSCloneURL(ownerName, repoName),
"GoDocDirectory": prefix + "{/dir}",
"GoDocFile": prefix + "{/dir}/{file}#L{line}",
"Insecure": insecure,
})))
}
// RegisterRoutes register routes
func RegisterRoutes(m *web.Route) {
reqSignIn := context.Toggle(&context.ToggleOptions{SignInRequired: true})
@@ -1091,7 +1029,7 @@ func RegisterRoutes(m *web.Route) {
m.Group("/{username}", func() {
m.Group("/{reponame}", func() {
m.Get("", repo.SetEditorconfigIfExists, repo.Home)
}, goGet, ignSignIn, context.RepoAssignment, context.RepoRef(), context.UnitTypes())
}, ignSignIn, context.RepoAssignment, context.RepoRef(), context.UnitTypes())
m.Group("/{reponame}", func() {
m.Group("/info/lfs", func() {

View File

@@ -67,8 +67,13 @@ func HandleUsernameChange(ctx *context.Context, user *models.User, newName strin
}
return err
}
log.Trace("User name changed: %s -> %s", user.Name, newName)
} else {
if err := models.UpdateRepositoryOwnerNames(user.ID, newName); err != nil {
ctx.ServerError("UpdateRepository", err)
return err
}
}
log.Trace("User name changed: %s -> %s", user.Name, newName)
return nil
}
@@ -84,6 +89,7 @@ func ProfilePost(ctx *context.Context) {
}
if len(form.Name) != 0 && ctx.User.Name != form.Name {
log.Debug("Changing name for %s to %s", ctx.User.Name, form.Name)
if err := HandleUsernameChange(ctx, ctx.User, form.Name); err != nil {
ctx.Redirect(setting.AppSubURL + "/user/settings")
return

View File

@@ -20,12 +20,16 @@ func mailParticipantsComment(c *models.Comment, opType models.ActionType, issue
for i, u := range mentions {
mentionedIDs[i] = u.ID
}
content := c.Content
if c.Type == models.CommentTypePullPush {
content = ""
}
if err = mailIssueCommentToParticipants(
&mailCommentContext{
Issue: issue,
Doer: c.Poster,
ActionType: opType,
Content: c.Content,
Content: content,
Comment: c,
}, mentionedIDs); err != nil {
log.Error("mailIssueCommentToParticipants: %v", err)

View File

@@ -158,12 +158,18 @@ func mailParticipants(issue *models.Issue, doer *models.User, opType models.Acti
for i, u := range mentions {
mentionedIDs[i] = u.ID
}
content := issue.Content
if opType == models.ActionCloseIssue || opType == models.ActionClosePullRequest ||
opType == models.ActionReopenIssue || opType == models.ActionReopenPullRequest ||
opType == models.ActionMergePullRequest {
content = ""
}
if err = mailIssueCommentToParticipants(
&mailCommentContext{
Issue: issue,
Doer: doer,
ActionType: opType,
Content: issue.Content,
Content: content,
Comment: nil,
}, mentionedIDs); err != nil {
log.Error("mailIssueCommentToParticipants: %v", err)

View File

@@ -0,0 +1,72 @@
// Copyright 2021 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package repository
import (
"errors"
"code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/git"
"code.gitea.io/gitea/modules/log"
repo_module "code.gitea.io/gitea/modules/repository"
pull_service "code.gitea.io/gitea/services/pull"
)
// enumerates all branch-related errors
var (
ErrBranchIsDefault = errors.New("branch is default")
ErrBranchIsProtected = errors.New("branch is protected")
)
// DeleteBranch delete branch
func DeleteBranch(doer *models.User, repo *models.Repository, gitRepo *git.Repository, branchName string) error {
if branchName == repo.DefaultBranch {
return ErrBranchIsDefault
}
isProtected, err := repo.IsProtectedBranch(branchName, doer)
if err != nil {
return err
}
if isProtected {
return ErrBranchIsProtected
}
commit, err := gitRepo.GetBranchCommit(branchName)
if err != nil {
return err
}
if err := gitRepo.DeleteBranch(branchName, git.DeleteBranchOptions{
Force: true,
}); err != nil {
return err
}
if err := pull_service.CloseBranchPulls(doer, repo.ID, branchName); err != nil {
return err
}
// Don't return error below this
if err := PushUpdate(
&repo_module.PushUpdateOptions{
RefFullName: git.BranchPrefix + branchName,
OldCommitID: commit.ID.String(),
NewCommitID: git.EmptySHA,
PusherID: doer.ID,
PusherName: doer.Name,
RepoUserName: repo.OwnerName,
RepoName: repo.Name,
}); err != nil {
log.Error("Update: %v", err)
}
if err := repo.AddDeletedBranch(branchName, commit.ID.String(), doer.ID); err != nil {
log.Warn("AddDeletedBranch: %v", err)
}
return nil
}

View File

@@ -25,14 +25,14 @@ environment:
apps:
gitea:
command: gitea
plugs: [network, network-bind]
plugs: [network, network-bind, removable-media]
web:
command: gitea web
daemon: simple
plugs: [network, network-bind]
plugs: [network, network-bind, removable-media]
dump:
command: gitea dump
plugs: [home]
plugs: [home, removable-media]
version:
command: gitea --version
sqlite:

View File

@@ -188,6 +188,10 @@
<label for="pam_service_name">{{.i18n.Tr "admin.auths.pam_service_name"}}</label>
<input id="pam_service_name" name="pam_service_name" value="{{$cfg.ServiceName}}" required>
</div>
<div class="field">
<label for="pam_email_domain">{{.i18n.Tr "admin.auths.pam_email_domain"}}</label>
<input id="pam_email_domain" name="pam_email_domain" value="{{$cfg.EmailDomain}}">
</div>
{{end}}
<!-- OAuth2 -->

View File

@@ -38,6 +38,8 @@
<div class="pam required field {{if not (eq .type 4)}}hide{{end}}">
<label for="pam_service_name">{{.i18n.Tr "admin.auths.pam_service_name"}}</label>
<input id="pam_service_name" name="pam_service_name" value="{{.pam_service_name}}" />
<label for="pam_email_domain">{{.i18n.Tr "admin.auths.pam_email_domain"}}</label>
<input id="pam_email_domain" name="pam_email_domain" value="{{.pam_email_domain}}">
</div>
<!-- OAuth2 -->

View File

@@ -19,7 +19,7 @@
<div class="menu transition" :class="{visible: menuVisible}" v-if="menuVisible" v-cloak>
<div class="ui icon search input">
<i class="icon df ac jc m-0">{{svg "octicon-filter" 16}}</i>
<input name="search" ref="searchField" v-model="searchTerm" @keydown="keydown($event)" placeholder="{{.i18n.Tr "repo.filter_branch_and_tag"}}...">
<input name="search" ref="searchField" autocomplete="off" v-model="searchTerm" @keydown="keydown($event)" placeholder="{{.i18n.Tr "repo.filter_branch_and_tag"}}...">
</div>
<div class="header branch-tag-choice">
<div class="ui grid">

View File

@@ -2,14 +2,9 @@
<div class="page-content repository">
{{template "repo/header" .}}
<div class="ui container">
<div class="ui three column stackable grid">
<div class="ui two column stackable grid">
<div class="column">
<h1>{{.Milestone.Name}}</h1>
<div class="markdown content">
{{.Milestone.RenderedContent|Str2html}}
</div>
</div>
<div class="column center aligned">
</div>
{{if not .Repository.IsArchived}}
<div class="column right aligned">
@@ -20,6 +15,11 @@
</div>
{{end}}
</div>
<div class="ui one column stackable grid">
<div class="column markup content">
{{.Milestone.RenderedContent|Str2html}}
</div>
</div>
<div class="ui one column stackable grid">
<div class="column">
{{ $closedDate:= TimeSinceUnix .Milestone.ClosedDateUnix $.Lang }}

View File

@@ -6,9 +6,6 @@
<div class="column">
{{template "repo/issue/navbar" .}}
</div>
<div class="column center aligned">
{{template "repo/issue/search" .}}
</div>
<div class="column right aligned">
{{if and .CanWriteProjects (not .Repository.IsArchived) .PageIsProjects}}
<a class="ui green button show-modal item" data-modal="#new-board-item">{{.i18n.Tr "new_project_board"}}</a>

View File

@@ -67,7 +67,7 @@
</a>
{{end}}
{{if .Ref}}
<a class="ref" {{if $.RepoLink}}href="{{$.RepoLink}}{{index $.IssueRefURLs .ID}}"{{else}}href="{{AppSubUrl}}/{{.Repo.OwnerName}}/{{.Repo.Name}}{{index $.IssueRefURLs .ID}}"{{end}}>
<a class="ref" {{if $.RepoLink}}href="{{index $.IssueRefURLs .ID}}"{{else}}href="{{AppSubUrl}}/{{.Repo.OwnerName}}/{{.Repo.Name}}{{index $.IssueRefURLs .ID}}"{{end}}>
{{svg "octicon-git-branch" 14 "mr-2"}}{{index $.IssueRefEndNames .ID}}
</a>
{{end}}

View File

@@ -123,7 +123,7 @@ type Render struct {
// Customize Secure with an Options struct.
opt Options
templates *template.Template
templatesLk sync.Mutex
templatesLk sync.RWMutex
compiledCharset string
}
@@ -196,8 +196,8 @@ func (r *Render) compileTemplates() {
func (r *Render) compileTemplatesFromDir() {
dir := r.opt.Directory
r.templates = template.New(dir)
r.templates.Delims(r.opt.Delims.Left, r.opt.Delims.Right)
tmpTemplates := template.New(dir)
tmpTemplates.Delims(r.opt.Delims.Left, r.opt.Delims.Right)
// Walk the supplied directory and compile any files that match our extension list.
r.opt.FileSystem.Walk(dir, func(path string, info os.FileInfo, err error) error {
@@ -227,7 +227,7 @@ func (r *Render) compileTemplatesFromDir() {
}
name := (rel[0 : len(rel)-len(ext)])
tmpl := r.templates.New(filepath.ToSlash(name))
tmpl := tmpTemplates.New(filepath.ToSlash(name))
// Add our funcmaps.
for _, funcs := range r.opt.Funcs {
@@ -241,12 +241,16 @@ func (r *Render) compileTemplatesFromDir() {
}
return nil
})
r.templatesLk.Lock()
r.templates = tmpTemplates
r.templatesLk.Unlock()
}
func (r *Render) compileTemplatesFromAsset() {
dir := r.opt.Directory
r.templates = template.New(dir)
r.templates.Delims(r.opt.Delims.Left, r.opt.Delims.Right)
tmpTemplates := template.New(dir)
tmpTemplates.Delims(r.opt.Delims.Left, r.opt.Delims.Right)
for _, path := range r.opt.AssetNames() {
if !strings.HasPrefix(path, dir) {
@@ -272,7 +276,7 @@ func (r *Render) compileTemplatesFromAsset() {
}
name := (rel[0 : len(rel)-len(ext)])
tmpl := r.templates.New(filepath.ToSlash(name))
tmpl := tmpTemplates.New(filepath.ToSlash(name))
// Add our funcmaps.
for _, funcs := range r.opt.Funcs {
@@ -285,6 +289,10 @@ func (r *Render) compileTemplatesFromAsset() {
}
}
}
r.templatesLk.Lock()
r.templates = tmpTemplates
r.templatesLk.Unlock()
}
// TemplateLookup is a wrapper around template.Lookup and returns
@@ -389,14 +397,15 @@ func (r *Render) Data(w io.Writer, status int, v []byte) error {
// HTML builds up the response from the specified template and bindings.
func (r *Render) HTML(w io.Writer, status int, name string, binding interface{}, htmlOpt ...HTMLOptions) error {
r.templatesLk.Lock()
defer r.templatesLk.Unlock()
// If we are in development mode, recompile the templates on every HTML request.
if r.opt.IsDevelopment {
r.compileTemplates()
}
r.templatesLk.RLock()
defer r.templatesLk.RUnlock()
opt := r.prepareHTMLOptions(htmlOpt)
if tpl := r.templates.Lookup(name); tpl != nil {
if len(opt.Layout) > 0 {

vendor/modules.txt vendored
View File

@@ -777,7 +777,7 @@ github.com/unknwon/i18n
# github.com/unknwon/paginater v0.0.0-20200328080006-042474bd0eae
## explicit
github.com/unknwon/paginater
# github.com/unrolled/render v1.1.0
# github.com/unrolled/render v1.1.1
## explicit
github.com/unrolled/render
# github.com/urfave/cli v1.22.5
@@ -1048,7 +1048,7 @@ strk.kbt.io/projects/go/libravatar
# xorm.io/builder v0.3.9
## explicit
xorm.io/builder
# xorm.io/xorm v1.0.7
# xorm.io/xorm v1.1.0
## explicit
xorm.io/xorm
xorm.io/xorm/caches

vendor/xorm.io/xorm/.drone.yml (generated, vendored): 786 changed lines

@@ -2,58 +2,288 @@
kind: pipeline
name: testing
steps:
- name: restore-cache
image: meltwater/drone-cache
pull: always
settings:
backend: "filesystem"
restore: true
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
archive_format: "gzip"
filesystem_cache_root: "/go"
mount:
- pkg.mod
- pkg.build
volumes:
- name: cache
path: /go
- name: test-vet
image: golang:1.11 # The lowest golang requirement
image: golang:1.15
environment:
GO111MODULE: "on"
GOPROXY: "https://goproxy.cn"
GOPROXY: "https://goproxy.io"
CGO_ENABLED: 1
GOMODCACHE: '/drone/src/pkg.mod'
GOCACHE: '/drone/src/pkg.build'
commands:
- make vet
- make test
- make fmt-check
volumes:
- name: cache
path: /go
when:
event:
- push
- pull_request
- name: test-sqlite
image: golang:1.12
- name: rebuild-cache
image: meltwater/drone-cache
pull: true
settings:
backend: "filesystem"
rebuild: true
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
archive_format: "gzip"
filesystem_cache_root: "/go"
mount:
- pkg.mod
- pkg.build
volumes:
- name: cache
path: /go
volumes:
- name: cache
temp: {}
---
kind: pipeline
name: test-sqlite
depends_on:
- testing
steps:
- name: restore-cache
image: meltwater/drone-cache:dev
pull: always
settings:
backend: "filesystem"
restore: true
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
archive_format: "gzip"
filesystem_cache_root: "/go"
mount:
- pkg.mod
- pkg.build
volumes:
- name: cache
path: /go
- name: test-sqlite3
image: golang:1.15
environment:
GO111MODULE: "on"
GOPROXY: "https://goproxy.cn"
GOPROXY: "https://goproxy.io"
CGO_ENABLED: 1
GOMODCACHE: '/drone/src/pkg.mod'
GOCACHE: '/drone/src/pkg.build'
commands:
- make test-sqlite3
- TEST_CACHE_ENABLE=true make test-sqlite3
- TEST_QUOTE_POLICY=reserved make test-sqlite3
volumes:
- name: cache
path: /go
- name: test-sqlite
image: golang:1.15
environment:
GO111MODULE: "on"
GOPROXY: "https://goproxy.io"
CGO_ENABLED: 1
GOMODCACHE: '/drone/src/pkg.mod'
GOCACHE: '/drone/src/pkg.build'
commands:
- make test-sqlite
- TEST_CACHE_ENABLE=true make test-sqlite
- TEST_QUOTE_POLICY=reserved make test-sqlite
when:
event:
- push
- pull_request
volumes:
- name: cache
path: /go
- name: rebuild-cache
image: meltwater/drone-cache:dev
pull: true
settings:
backend: "filesystem"
rebuild: true
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
archive_format: "gzip"
filesystem_cache_root: "/go"
mount:
- pkg.mod
- pkg.build
volumes:
- name: cache
path: /go
volumes:
- name: cache
temp: {}
---
kind: pipeline
name: test-mysql
depends_on:
- testing
steps:
- name: restore-cache
image: meltwater/drone-cache
pull: always
settings:
backend: "filesystem"
restore: true
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
archive_format: "gzip"
filesystem_cache_root: "/go"
mount:
- pkg.mod
- pkg.build
volumes:
- name: cache
path: /go
- name: test-mysql
image: golang:1.12
image: golang:1.15
environment:
GO111MODULE: "on"
GOPROXY: "https://goproxy.cn"
GOPROXY: "https://goproxy.io"
CGO_ENABLED: 1
GOMODCACHE: '/drone/src/pkg.mod'
GOCACHE: '/drone/src/pkg.build'
TEST_MYSQL_HOST: mysql
TEST_MYSQL_CHARSET: utf8
TEST_MYSQL_DBNAME: xorm_test
TEST_MYSQL_USERNAME: root
TEST_MYSQL_PASSWORD:
commands:
- make test
- make test-mysql
- TEST_CACHE_ENABLE=true make test-mysql
- TEST_QUOTE_POLICY=reserved make test-mysql
volumes:
- name: cache
path: /go
- name: test-mysql-utf8mb4
image: golang:1.15
depends_on:
- test-mysql
environment:
GO111MODULE: "on"
GOPROXY: "https://goproxy.io"
CGO_ENABLED: 1
GOMODCACHE: '/drone/src/pkg.mod'
GOCACHE: '/drone/src/pkg.build'
TEST_MYSQL_HOST: mysql
TEST_MYSQL_CHARSET: utf8mb4
TEST_MYSQL_DBNAME: xorm_test
TEST_MYSQL_USERNAME: root
TEST_MYSQL_PASSWORD:
commands:
- make test-mysql
- TEST_CACHE_ENABLE=true make test-mysql
- TEST_QUOTE_POLICY=reserved make test-mysql
when:
event:
- push
- pull_request
volumes:
- name: cache
path: /go
- name: test-mysql8
image: golang:1.12
- name: test-mymysql
pull: default
image: golang:1.15
depends_on:
- test-mysql-utf8mb4
environment:
GO111MODULE: "on"
GOPROXY: "https://goproxy.cn"
GOPROXY: "https://goproxy.io"
CGO_ENABLED: 1
GOMODCACHE: '/drone/src/pkg.mod'
GOCACHE: '/drone/src/pkg.build'
TEST_MYSQL_HOST: mysql:3306
TEST_MYSQL_DBNAME: xorm_test
TEST_MYSQL_USERNAME: root
TEST_MYSQL_PASSWORD:
commands:
- make test-mymysql
- TEST_CACHE_ENABLE=true make test-mymysql
- TEST_QUOTE_POLICY=reserved make test-mymysql
volumes:
- name: cache
path: /go
- name: rebuild-cache
image: meltwater/drone-cache
depends_on:
- test-mysql
- test-mysql-utf8mb4
- test-mymysql
pull: true
settings:
backend: "filesystem"
rebuild: true
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
archive_format: "gzip"
filesystem_cache_root: "/go"
mount:
- pkg.mod
- pkg.build
volumes:
- name: cache
path: /go
volumes:
- name: cache
temp: {}
services:
- name: mysql
pull: default
image: mysql:5.7
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: yes
MYSQL_DATABASE: xorm_test
---
kind: pipeline
name: test-mysql8
depends_on:
- test-mysql
- test-sqlite
steps:
- name: restore-cache
image: meltwater/drone-cache
pull: always
settings:
backend: "filesystem"
restore: true
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
archive_format: "gzip"
filesystem_cache_root: "/go"
mount:
- pkg.mod
- pkg.build
volumes:
- name: cache
path: /go
- name: test-mysql8
image: golang:1.15
environment:
GO111MODULE: "on"
GOPROXY: "https://goproxy.io"
CGO_ENABLED: 1
GOMODCACHE: '/drone/src/pkg.mod'
GOCACHE: '/drone/src/pkg.build'
TEST_MYSQL_HOST: mysql8
TEST_MYSQL_CHARSET: utf8mb4
TEST_MYSQL_DBNAME: xorm_test
@@ -63,58 +293,70 @@ steps:
- make test-mysql
- TEST_CACHE_ENABLE=true make test-mysql
- TEST_QUOTE_POLICY=reserved make test-mysql
when:
event:
- push
- pull_request
volumes:
- name: cache
path: /go
- name: test-mysql-utf8mb4
image: golang:1.12
- name: rebuild-cache
image: meltwater/drone-cache:dev
pull: true
depends_on:
- test-mysql
environment:
GO111MODULE: "on"
GOPROXY: "https://goproxy.cn"
TEST_MYSQL_HOST: mysql
TEST_MYSQL_CHARSET: utf8mb4
TEST_MYSQL_DBNAME: xorm_test
TEST_MYSQL_USERNAME: root
TEST_MYSQL_PASSWORD:
commands:
- make test-mysql
- TEST_CACHE_ENABLE=true make test-mysql
- TEST_QUOTE_POLICY=reserved make test-mysql
when:
event:
- push
- pull_request
- test-mysql8
settings:
backend: "filesystem"
rebuild: true
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
archive_format: "gzip"
filesystem_cache_root: "/go"
mount:
- pkg.mod
- pkg.build
volumes:
- name: cache
path: /go
- name: test-mymysql
volumes:
- name: cache
temp: {}
services:
- name: mysql8
pull: default
image: golang:1.12
depends_on:
- test-mysql-utf8mb4
image: mysql:8.0
environment:
GO111MODULE: "on"
GOPROXY: "https://goproxy.cn"
TEST_MYSQL_HOST: mysql:3306
TEST_MYSQL_DBNAME: xorm_test
TEST_MYSQL_USERNAME: root
TEST_MYSQL_PASSWORD:
commands:
- make test-mymysql
- TEST_CACHE_ENABLE=true make test-mymysql
- TEST_QUOTE_POLICY=reserved make test-mymysql
when:
event:
- push
- pull_request
MYSQL_ALLOW_EMPTY_PASSWORD: yes
MYSQL_DATABASE: xorm_test
---
kind: pipeline
name: test-mariadb
depends_on:
- test-mysql8
steps:
- name: restore-cache
image: meltwater/drone-cache
pull: always
settings:
backend: "filesystem"
restore: true
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
archive_format: "gzip"
filesystem_cache_root: "/go"
mount:
- pkg.mod
- pkg.build
volumes:
- name: cache
path: /go
- name: test-mariadb
image: golang:1.12
image: golang:1.15
environment:
GO111MODULE: "on"
GOPROXY: "https://goproxy.cn"
GOPROXY: "https://goproxy.io"
CGO_ENABLED: 1
GOMODCACHE: '/drone/src/pkg.mod'
GOCACHE: '/drone/src/pkg.build'
TEST_MYSQL_HOST: mariadb
TEST_MYSQL_CHARSET: utf8mb4
TEST_MYSQL_DBNAME: xorm_test
@@ -124,17 +366,71 @@ steps:
- make test-mysql
- TEST_CACHE_ENABLE=true make test-mysql
- TEST_QUOTE_POLICY=reserved make test-mysql
when:
event:
- push
- pull_request
volumes:
- name: cache
path: /go
- name: rebuild-cache
image: meltwater/drone-cache:dev
depends_on:
- test-mariadb
pull: true
settings:
backend: "filesystem"
rebuild: true
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
archive_format: "gzip"
filesystem_cache_root: "/go"
mount:
- pkg.mod
- pkg.build
volumes:
- name: cache
path: /go
volumes:
- name: cache
temp: {}
services:
- name: mariadb
pull: default
image: mariadb:10.4
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: yes
MYSQL_DATABASE: xorm_test
---
kind: pipeline
name: test-postgres
depends_on:
- test-mariadb
steps:
- name: restore-cache
image: meltwater/drone-cache
pull: always
settings:
backend: "filesystem"
restore: true
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
archive_format: "gzip"
filesystem_cache_root: "/go"
mount:
- pkg.mod
- pkg.build
volumes:
- name: cache
path: /go
- name: test-postgres
pull: default
image: golang:1.12
image: golang:1.15
environment:
GO111MODULE: "on"
GOPROXY: "https://goproxy.cn"
GOPROXY: "https://goproxy.io"
CGO_ENABLED: 1
GOMODCACHE: '/drone/src/pkg.mod'
GOCACHE: '/drone/src/pkg.build'
TEST_PGSQL_HOST: pgsql
TEST_PGSQL_DBNAME: xorm_test
TEST_PGSQL_USERNAME: postgres
@@ -143,19 +439,21 @@ steps:
- make test-postgres
- TEST_CACHE_ENABLE=true make test-postgres
- TEST_QUOTE_POLICY=reserved make test-postgres
when:
event:
- push
- pull_request
volumes:
- name: cache
path: /go
- name: test-postgres-schema
pull: default
image: golang:1.12
image: golang:1.15
depends_on:
- test-postgres
environment:
GO111MODULE: "on"
GOPROXY: "https://goproxy.cn"
GOPROXY: "https://goproxy.io"
CGO_ENABLED: 1
GOMODCACHE: '/drone/src/pkg.mod'
GOCACHE: '/drone/src/pkg.build'
TEST_PGSQL_HOST: pgsql
TEST_PGSQL_SCHEMA: xorm
TEST_PGSQL_DBNAME: xorm_test
@@ -165,17 +463,72 @@ steps:
- make test-postgres
- TEST_CACHE_ENABLE=true make test-postgres
- TEST_QUOTE_POLICY=reserved make test-postgres
when:
event:
- push
- pull_request
volumes:
- name: cache
path: /go
- name: rebuild-cache
image: meltwater/drone-cache:dev
pull: true
depends_on:
- test-postgres-schema
settings:
backend: "filesystem"
rebuild: true
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
archive_format: "gzip"
filesystem_cache_root: "/go"
mount:
- pkg.mod
- pkg.build
volumes:
- name: cache
path: /go
volumes:
- name: cache
temp: {}
services:
- name: pgsql
pull: default
image: postgres:9.5
environment:
POSTGRES_DB: xorm_test
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
---
kind: pipeline
name: test-mssql
depends_on:
- test-postgres
steps:
- name: restore-cache
image: meltwater/drone-cache
pull: always
settings:
backend: "filesystem"
restore: true
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
archive_format: "gzip"
filesystem_cache_root: "/go"
mount:
- pkg.mod
- pkg.build
volumes:
- name: cache
path: /go
- name: test-mssql
pull: default
image: golang:1.12
image: golang:1.15
environment:
GO111MODULE: "on"
GOPROXY: "https://goproxy.cn"
GOPROXY: "https://goproxy.io"
CGO_ENABLED: 1
GOMODCACHE: '/drone/src/pkg.mod'
GOCACHE: '/drone/src/pkg.build'
TEST_MSSQL_HOST: mssql
TEST_MSSQL_DBNAME: xorm_test
TEST_MSSQL_USERNAME: sa
@@ -185,17 +538,70 @@ steps:
- TEST_CACHE_ENABLE=true make test-mssql
- TEST_QUOTE_POLICY=reserved make test-mssql
- TEST_MSSQL_DEFAULT_VARCHAR=NVARCHAR TEST_MSSQL_DEFAULT_CHAR=NCHAR make test-mssql
when:
event:
- push
- pull_request
volumes:
- name: cache
path: /go
- name: rebuild-cache
image: meltwater/drone-cache:dev
pull: true
settings:
backend: "filesystem"
rebuild: true
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
archive_format: "gzip"
filesystem_cache_root: "/go"
mount:
- pkg.mod
- pkg.build
volumes:
- name: cache
path: /go
volumes:
- name: cache
temp: {}
services:
- name: mssql
pull: default
image: microsoft/mssql-server-linux:latest
environment:
ACCEPT_EULA: Y
SA_PASSWORD: yourStrong(!)Password
MSSQL_PID: Developer
---
kind: pipeline
name: test-tidb
depends_on:
- test-mssql
steps:
- name: restore-cache
image: meltwater/drone-cache
pull: always
settings:
backend: "filesystem"
restore: true
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
archive_format: "gzip"
filesystem_cache_root: "/go"
mount:
- pkg.mod
- pkg.build
volumes:
- name: cache
path: /go
- name: test-tidb
pull: default
image: golang:1.12
image: golang:1.15
environment:
GO111MODULE: "on"
GOPROXY: "https://goproxy.cn"
GOPROXY: "https://goproxy.io"
CGO_ENABLED: 1
GOMODCACHE: '/drone/src/pkg.mod'
GOCACHE: '/drone/src/pkg.build'
TEST_TIDB_HOST: "tidb:4000"
TEST_TIDB_DBNAME: xorm_test
TEST_TIDB_USERNAME: root
@@ -204,17 +610,66 @@ steps:
- make test-tidb
- TEST_CACHE_ENABLE=true make test-tidb
- TEST_QUOTE_POLICY=reserved make test-tidb
when:
event:
- push
- pull_request
volumes:
- name: cache
path: /go
- name: rebuild-cache
image: meltwater/drone-cache:dev
pull: true
settings:
backend: "filesystem"
rebuild: true
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
archive_format: "gzip"
filesystem_cache_root: "/go"
mount:
- pkg.mod
- pkg.build
volumes:
- name: cache
path: /go
volumes:
- name: cache
temp: {}
services:
- name: tidb
pull: default
image: pingcap/tidb:v3.0.3
---
kind: pipeline
name: test-cockroach
depends_on:
- test-tidb
steps:
- name: restore-cache
image: meltwater/drone-cache
pull: always
settings:
backend: "filesystem"
restore: true
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
archive_format: "gzip"
filesystem_cache_root: "/go"
mount:
- pkg.mod
- pkg.build
volumes:
- name: cache
path: /go
- name: test-cockroach
pull: default
image: golang:1.13
image: golang:1.15
environment:
GO111MODULE: "on"
GOPROXY: "https://goproxy.cn"
GOPROXY: "https://goproxy.io"
CGO_ENABLED: 1
GOMODCACHE: '/drone/src/pkg.mod'
GOCACHE: '/drone/src/pkg.build'
TEST_COCKROACH_HOST: "cockroach:26257"
TEST_COCKROACH_DBNAME: xorm_test
TEST_COCKROACH_USERNAME: root
@@ -223,115 +678,62 @@ steps:
- sleep 10
- make test-cockroach
- TEST_CACHE_ENABLE=true make test-cockroach
when:
event:
- push
- pull_request
volumes:
- name: cache
path: /go
- name: merge_coverage
pull: default
image: golang:1.12
environment:
GO111MODULE: "on"
GOPROXY: "https://goproxy.cn"
depends_on:
- test-vet
- test-sqlite
- test-mysql
- test-mysql8
- test-mymysql
- test-postgres
- test-postgres-schema
- test-mssql
- test-tidb
- test-cockroach
commands:
- make coverage
when:
event:
- push
- pull_request
- name: rebuild-cache
image: meltwater/drone-cache:dev
pull: true
settings:
backend: "filesystem"
rebuild: true
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
archive_format: "gzip"
filesystem_cache_root: "/go"
mount:
- pkg.mod
- pkg.build
volumes:
- name: cache
path: /go
volumes:
- name: cache
temp: {}
services:
- name: mysql
pull: default
image: mysql:5.7
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: yes
MYSQL_DATABASE: xorm_test
when:
event:
- push
- tag
- pull_request
- name: mysql8
pull: default
image: mysql:8.0
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: yes
MYSQL_DATABASE: xorm_test
when:
event:
- push
- tag
- pull_request
- name: mariadb
pull: default
image: mariadb:10.4
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: yes
MYSQL_DATABASE: xorm_test
when:
event:
- push
- tag
- pull_request
- name: pgsql
pull: default
image: postgres:9.5
environment:
POSTGRES_DB: xorm_test
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
when:
event:
- push
- tag
- pull_request
- name: mssql
pull: default
image: microsoft/mssql-server-linux:latest
environment:
ACCEPT_EULA: Y
SA_PASSWORD: yourStrong(!)Password
MSSQL_PID: Developer
when:
event:
- push
- tag
- pull_request
- name: tidb
pull: default
image: pingcap/tidb:v3.0.3
when:
event:
- push
- tag
- pull_request
- name: cockroach
pull: default
image: cockroachdb/cockroach:v19.2.4
commands:
- /cockroach/cockroach start --insecure
---
kind: pipeline
name: merge_coverage
depends_on:
- testing
- test-sqlite
- test-mysql
- test-mysql8
- test-mariadb
- test-postgres
- test-mssql
- test-tidb
- test-cockroach
steps:
- name: merge_coverage
pull: default
image: golang:1.15
environment:
GO111MODULE: "on"
GOPROXY: "https://goproxy.io"
commands:
- make coverage
when:
branch:
- master
event:
- push
- tag
- pull_request

vendor/xorm.io/xorm/.gitignore (generated, vendored): 1 changed line

@@ -36,3 +36,4 @@ test.db.sql
*coverage.out
test.db
integrations/*.sql
integrations/test_sqlite*

vendor/xorm.io/xorm/.revive.toml (generated, vendored): 2 changed lines

@@ -15,6 +15,7 @@ warningCode = 1
[rule.if-return]
[rule.increment-decrement]
[rule.var-naming]
arguments = [["ID", "UID", "UUID", "URL", "JSON"], []]
[rule.var-declaration]
[rule.package-comments]
[rule.range]
@@ -23,3 +24,4 @@ warningCode = 1
[rule.unexported-return]
[rule.indent-error-flow]
[rule.errorf]
[rule.struct-tag]

vendor/xorm.io/xorm/CHANGELOG.md (generated, vendored): 15 changed lines

@@ -3,6 +3,21 @@
This changelog goes through all the changes that have been made in each release
without substantial changes to our git log.
## [1.1.0](https://gitea.com/xorm/xorm/releases/tag/1.1.0) - 2021-05-14
* FEATURES
* Unsigned Support for mysql (#1889)
* Support modernc.org/sqlite (#1850)
* TESTING
* More tests (#1890)
* MISC
* Byte strings in postgres aren't 0x... (#1906)
* Fix another bug with #1872 (#1905)
* Fix two issues with dumptables (#1903)
* Fix comments (#1896)
* Fix comments (#1893)
* MariaDB 10.5 adds a suffix on old datatypes (#1885)
## [1.0.7](https://gitea.com/xorm/xorm/pulls?q=&type=all&state=closed&milestone=1336) - 2021-01-21
* BUGFIXES

vendor/xorm.io/xorm/Makefile (generated, vendored): 29 changed lines

@@ -6,7 +6,9 @@ GOFMT ?= gofmt -s
TAGS ?=
SED_INPLACE := sed -i
GOFILES := $(shell find . -name "*.go" -type f)
GO_DIRS := caches contexts integrations convert core dialects internal log migrate names schemas tags
GOFILES := $(wildcard *.go)
GOFILES += $(shell find $(GO_DIRS) -name "*.go" -type f)
INTEGRATION_PACKAGES := xorm.io/xorm/integrations
PACKAGES ?= $(filter-out $(INTEGRATION_PACKAGES),$(shell $(GO) list ./...))
@@ -99,7 +101,8 @@ help:
@echo " - test-mysql run integration tests for mysql"
@echo " - test-mssql run integration tests for mssql"
@echo " - test-postgres run integration tests for postgres"
@echo " - test-sqlite run integration tests for sqlite"
@echo " - test-sqlite3 run integration tests for sqlite"
@echo " - test-sqlite run integration tests for pure go sqlite"
@echo " - test-tidb run integration tests for tidb"
@echo " - vet examines Go source code and reports suspicious constructs"
@@ -195,21 +198,37 @@ test-postgres\#%: go-check
-conn_str="postgres://$(TEST_PGSQL_USERNAME):$(TEST_PGSQL_PASSWORD)@$(TEST_PGSQL_HOST)/$(TEST_PGSQL_DBNAME)?sslmode=disable" \
-quote=$(TEST_QUOTE_POLICY) -coverprofile=postgres.$(TEST_QUOTE_POLICY).$(TEST_CACHE_ENABLE).coverage.out -covermode=atomic
.PHONY: test-sqlite3
test-sqlite3: go-check
$(GO) test $(INTEGRATION_PACKAGES) -v -race -cache=$(TEST_CACHE_ENABLE) -db=sqlite3 -conn_str="./test.db?cache=shared&mode=rwc" \
-quote=$(TEST_QUOTE_POLICY) -coverprofile=sqlite3.$(TEST_QUOTE_POLICY).$(TEST_CACHE_ENABLE).coverage.out -covermode=atomic
.PHONY: test-sqlite3-schema
test-sqlite3-schema: go-check
$(GO) test $(INTEGRATION_PACKAGES) -v -race -schema=xorm -cache=$(TEST_CACHE_ENABLE) -db=sqlite3 -conn_str="./test.db?cache=shared&mode=rwc" \
-quote=$(TEST_QUOTE_POLICY) -coverprofile=sqlite3.$(TEST_QUOTE_POLICY).$(TEST_CACHE_ENABLE).coverage.out -covermode=atomic
.PHONY: test-sqlite3\#%
test-sqlite3\#%: go-check
$(GO) test $(INTEGRATION_PACKAGES) -v -race -run $* -cache=$(TEST_CACHE_ENABLE) -db=sqlite3 -conn_str="./test.db?cache=shared&mode=rwc" \
-quote=$(TEST_QUOTE_POLICY) -coverprofile=sqlite3.$(TEST_QUOTE_POLICY).$(TEST_CACHE_ENABLE).coverage.out -covermode=atomic
.PHONY: test-sqlite
test-sqlite: go-check
$(GO) test $(INTEGRATION_PACKAGES) -v -race -cache=$(TEST_CACHE_ENABLE) -db=sqlite3 -conn_str="./test.db?cache=shared&mode=rwc" \
$(GO) test $(INTEGRATION_PACKAGES) -v -race -cache=$(TEST_CACHE_ENABLE) -db=sqlite -conn_str="./test.db?cache=shared&mode=rwc" \
-quote=$(TEST_QUOTE_POLICY) -coverprofile=sqlite.$(TEST_QUOTE_POLICY).$(TEST_CACHE_ENABLE).coverage.out -covermode=atomic
.PHONY: test-sqlite-schema
test-sqlite-schema: go-check
$(GO) test $(INTEGRATION_PACKAGES) -v -race -schema=xorm -cache=$(TEST_CACHE_ENABLE) -db=sqlite3 -conn_str="./test.db?cache=shared&mode=rwc" \
$(GO) test $(INTEGRATION_PACKAGES) -v -race -schema=xorm -cache=$(TEST_CACHE_ENABLE) -db=sqlite -conn_str="./test.db?cache=shared&mode=rwc" \
-quote=$(TEST_QUOTE_POLICY) -coverprofile=sqlite.$(TEST_QUOTE_POLICY).$(TEST_CACHE_ENABLE).coverage.out -covermode=atomic
.PHONY: test-sqlite\#%
test-sqlite\#%: go-check
$(GO) test $(INTEGRATION_PACKAGES) -v -race -run $* -cache=$(TEST_CACHE_ENABLE) -db=sqlite3 -conn_str="./test.db?cache=shared&mode=rwc" \
$(GO) test $(INTEGRATION_PACKAGES) -v -race -run $* -cache=$(TEST_CACHE_ENABLE) -db=sqlite -conn_str="./test.db?cache=shared&mode=rwc" \
-quote=$(TEST_QUOTE_POLICY) -coverprofile=sqlite.$(TEST_QUOTE_POLICY).$(TEST_CACHE_ENABLE).coverage.out -covermode=atomic
.PNONY: test-tidb
test-tidb: go-check
$(GO) test $(INTEGRATION_PACKAGES) -v -race -db=mysql -cache=$(TEST_CACHE_ENABLE) -ignore_select_update=true \
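
The Makefile hunk above splits the sqlite targets: test-sqlite3 keeps the cgo driver (-db=sqlite3) while test-sqlite now runs against the pure-Go driver (-db=sqlite), in line with the modernc.org/sqlite support listed in the changelog. A hedged sketch of opening xorm engines with both drivers (file names and connection strings are illustrative):

package main

import (
	"log"

	_ "github.com/mattn/go-sqlite3" // cgo driver, registers driver name "sqlite3"
	_ "modernc.org/sqlite"          // pure-Go driver, registers driver name "sqlite"

	"xorm.io/xorm"
)

func main() {
	// Mirrors the -db=sqlite3 path used by the test-sqlite3 target.
	cgoEngine, err := xorm.NewEngine("sqlite3", "./test.db?cache=shared&mode=rwc")
	if err != nil {
		log.Fatal(err)
	}
	defer cgoEngine.Close()

	// Mirrors the -db=sqlite path now used by the test-sqlite target.
	pureEngine, err := xorm.NewEngine("sqlite", "./test_sqlite.db?cache=shared&mode=rwc")
	if err != nil {
		log.Fatal(err)
	}
	defer pureEngine.Close()

	log.Println("both engines opened")
}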


@@ -13,22 +13,26 @@ import (
"io"
)
// md5 hash string
// Md5 return md5 hash string
func Md5(str string) string {
m := md5.New()
io.WriteString(m, str)
return fmt.Sprintf("%x", m.Sum(nil))
}
// Encode Encode data
func Encode(data interface{}) ([]byte, error) {
//return JsonEncode(data)
return GobEncode(data)
}
// Decode decode data
func Decode(data []byte, to interface{}) error {
//return JsonDecode(data, to)
return GobDecode(data, to)
}
// GobEncode encode data with gob
func GobEncode(data interface{}) ([]byte, error) {
var buf bytes.Buffer
enc := gob.NewEncoder(&buf)
@@ -39,12 +43,14 @@ func GobEncode(data interface{}) ([]byte, error) {
return buf.Bytes(), nil
}
// GobDecode decode data with gob
func GobDecode(data []byte, to interface{}) error {
buf := bytes.NewBuffer(data)
dec := gob.NewDecoder(buf)
return dec.Decode(to)
}
// JsonEncode encode data with json
func JsonEncode(data interface{}) ([]byte, error) {
val, err := json.Marshal(data)
if err != nil {
@@ -53,6 +59,7 @@ func JsonEncode(data interface{}) ([]byte, error) {
return val, nil
}
// JsonDecode decode data with json
func JsonDecode(data []byte, to interface{}) error {
return json.Unmarshal(data, to)
}
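
The encode helpers above are thin wrappers around encoding/gob, with a JSON variant left commented out. A self-contained sketch of the same gob round-trip, independent of the xorm package (type and function names are illustrative):

package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

type user struct {
	ID   int64
	Name string
}

// gobEncode and gobDecode mirror what GobEncode/GobDecode in the hunk do.
func gobEncode(data interface{}) ([]byte, error) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(data); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func gobDecode(data []byte, to interface{}) error {
	return gob.NewDecoder(bytes.NewBuffer(data)).Decode(to)
}

func main() {
	raw, err := gobEncode(user{ID: 1, Name: "alice"})
	if err != nil {
		panic(err)
	}
	var decoded user
	if err := gobDecode(raw, &decoded); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", decoded)
}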


@@ -19,6 +19,7 @@ type LevelDBStore struct {
var _ CacheStore = &LevelDBStore{}
// NewLevelDBStore creates a leveldb store
func NewLevelDBStore(dbfile string) (*LevelDBStore, error) {
db := &LevelDBStore{}
h, err := leveldb.OpenFile(dbfile, nil)
@@ -29,6 +30,7 @@ func NewLevelDBStore(dbfile string) (*LevelDBStore, error) {
return db, nil
}
// Put implements CacheStore
func (s *LevelDBStore) Put(key string, value interface{}) error {
val, err := Encode(value)
if err != nil {
@@ -50,6 +52,7 @@ func (s *LevelDBStore) Put(key string, value interface{}) error {
return err
}
// Get implements CacheStore
func (s *LevelDBStore) Get(key string) (interface{}, error) {
data, err := s.store.Get([]byte(key), nil)
if err != nil {
@@ -75,6 +78,7 @@ func (s *LevelDBStore) Get(key string) (interface{}, error) {
return s.v, err
}
// Del implements CacheStore
func (s *LevelDBStore) Del(key string) error {
err := s.store.Delete([]byte(key), nil)
if err != nil {
@@ -89,6 +93,7 @@ func (s *LevelDBStore) Del(key string) error {
return err
}
// Close implements CacheStore
func (s *LevelDBStore) Close() {
s.store.Close()
}


@@ -6,6 +6,7 @@ package caches
import "sync"
// Manager represents a cache manager
type Manager struct {
cacher Cacher
disableGlobalCache bool
@@ -14,6 +15,7 @@ type Manager struct {
cacherLock sync.RWMutex
}
// NewManager creates a cache manager
func NewManager() *Manager {
return &Manager{
cachers: make(map[string]Cacher),
@@ -27,12 +29,14 @@ func (mgr *Manager) SetDisableGlobalCache(disable bool) {
}
}
// SetCacher set cacher of table
func (mgr *Manager) SetCacher(tableName string, cacher Cacher) {
mgr.cacherLock.Lock()
mgr.cachers[tableName] = cacher
mgr.cacherLock.Unlock()
}
// GetCacher returns a cache of a table
func (mgr *Manager) GetCacher(tableName string) Cacher {
var cacher Cacher
var ok bool
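
The caches.Manager hunk above only adds missing doc comments; the manager backs xorm's per-table query cache. A hedged sketch of wiring a cacher through the public engine API, which delegates to this manager (assumes the standard LRU cacher setup from the xorm documentation; the database file is illustrative):

package main

import (
	"log"

	_ "github.com/mattn/go-sqlite3"
	"xorm.io/xorm"
	"xorm.io/xorm/caches"
)

func main() {
	engine, err := xorm.NewEngine("sqlite3", "./cache_demo.db")
	if err != nil {
		log.Fatal(err)
	}
	defer engine.Close()

	// An in-memory LRU cacher holding up to 1000 records.
	cacher := caches.NewLRUCacher(caches.NewMemoryStore(), 1000)

	// Cache every table by default, or target a single table by name.
	engine.SetDefaultCacher(cacher)
	engine.SetCacher("user", cacher)
}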


@@ -31,6 +31,7 @@ func NewContextHook(ctx context.Context, sql string, args []interface{}) *Contex
}
}
// End finish the hook invokation
func (c *ContextHook) End(ctx context.Context, result sql.Result, err error) {
c.Ctx = ctx
c.Result = result
@@ -38,19 +39,23 @@ func (c *ContextHook) End(ctx context.Context, result sql.Result, err error) {
c.ExecuteTime = time.Now().Sub(c.start)
}
// Hook represents a hook behaviour
type Hook interface {
BeforeProcess(c *ContextHook) (context.Context, error)
AfterProcess(c *ContextHook) error
}
// Hooks implements Hook interface but contains multiple Hook
type Hooks struct {
hooks []Hook
}
// AddHook adds a Hook
func (h *Hooks) AddHook(hooks ...Hook) {
h.hooks = append(h.hooks, hooks...)
}
// BeforeProcess invoked before execute the process
func (h *Hooks) BeforeProcess(c *ContextHook) (context.Context, error) {
ctx := c.Ctx
for _, h := range h.hooks {
@@ -63,6 +68,7 @@ func (h *Hooks) BeforeProcess(c *ContextHook) (context.Context, error) {
return ctx, nil
}
// AfterProcess invoked after exetue the process
func (h *Hooks) AfterProcess(c *ContextHook) error {
firstErr := c.Err
for _, h := range h.hooks {
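
The hook hunks above document the Hook interface (BeforeProcess/AfterProcess) and the ContextHook it receives. A minimal sketch of a custom hook that logs statement duration, using only the names visible in these hunks; in practice it would be registered through DB.AddHook from the core/db.go hunk below rather than held directly:

package main

import (
	"context"
	"log"

	"xorm.io/xorm/contexts"
)

// timingHook implements contexts.Hook and logs how long each statement took.
type timingHook struct{}

var _ contexts.Hook = timingHook{}

func (timingHook) BeforeProcess(c *contexts.ContextHook) (context.Context, error) {
	// Pass the incoming context through unchanged.
	return c.Ctx, nil
}

func (timingHook) AfterProcess(c *contexts.ContextHook) error {
	// ExecuteTime is filled in by End() once the statement has run.
	log.Printf("statement finished in %v (err: %v)", c.ExecuteTime, c.Err)
	return nil
}

func main() {
	var hooks contexts.Hooks
	hooks.AddHook(timingHook{})
	log.Println("hook registered")
}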

vendor/xorm.io/xorm/core/db.go (generated, vendored): 15 changed lines

@@ -23,6 +23,7 @@ var (
DefaultCacheSize = 200
)
// MapToSlice map query and struct as sql and args
func MapToSlice(query string, mp interface{}) (string, []interface{}, error) {
vv := reflect.ValueOf(mp)
if vv.Kind() != reflect.Ptr || vv.Elem().Kind() != reflect.Map {
@@ -44,6 +45,7 @@ func MapToSlice(query string, mp interface{}) (string, []interface{}, error) {
return query, args, err
}
// StructToSlice converts a query and struct as sql and args
func StructToSlice(query string, st interface{}) (string, []interface{}, error) {
vv := reflect.ValueOf(st)
if vv.Kind() != reflect.Ptr || vv.Elem().Kind() != reflect.Struct {
@@ -176,6 +178,7 @@ func (db *DB) QueryMap(query string, mp interface{}) (*Rows, error) {
return db.QueryMapContext(context.Background(), query, mp)
}
// QueryStructContext query rows with struct
func (db *DB) QueryStructContext(ctx context.Context, query string, st interface{}) (*Rows, error) {
query, args, err := StructToSlice(query, st)
if err != nil {
@@ -184,10 +187,12 @@ func (db *DB) QueryStructContext(ctx context.Context, query string, st interface
return db.QueryContext(ctx, query, args...)
}
// QueryStruct query rows with struct
func (db *DB) QueryStruct(query string, st interface{}) (*Rows, error) {
return db.QueryStructContext(context.Background(), query, st)
}
// QueryRowContext query row with args
func (db *DB) QueryRowContext(ctx context.Context, query string, args ...interface{}) *Row {
rows, err := db.QueryContext(ctx, query, args...)
if err != nil {
@@ -196,10 +201,12 @@ func (db *DB) QueryRowContext(ctx context.Context, query string, args ...interfa
return &Row{rows, nil}
}
// QueryRow query row with args
func (db *DB) QueryRow(query string, args ...interface{}) *Row {
return db.QueryRowContext(context.Background(), query, args...)
}
// QueryRowMapContext query row with map
func (db *DB) QueryRowMapContext(ctx context.Context, query string, mp interface{}) *Row {
query, args, err := MapToSlice(query, mp)
if err != nil {
@@ -208,10 +215,12 @@ func (db *DB) QueryRowMapContext(ctx context.Context, query string, mp interface
return db.QueryRowContext(ctx, query, args...)
}
// QueryRowMap query row with map
func (db *DB) QueryRowMap(query string, mp interface{}) *Row {
return db.QueryRowMapContext(context.Background(), query, mp)
}
// QueryRowStructContext query row with struct
func (db *DB) QueryRowStructContext(ctx context.Context, query string, st interface{}) *Row {
query, args, err := StructToSlice(query, st)
if err != nil {
@@ -220,6 +229,7 @@ func (db *DB) QueryRowStructContext(ctx context.Context, query string, st interf
return db.QueryRowContext(ctx, query, args...)
}
// QueryRowStruct query row with struct
func (db *DB) QueryRowStruct(query string, st interface{}) *Row {
return db.QueryRowStructContext(context.Background(), query, st)
}
@@ -239,10 +249,12 @@ func (db *DB) ExecMapContext(ctx context.Context, query string, mp interface{})
return db.ExecContext(ctx, query, args...)
}
// ExecMap exec query with map
func (db *DB) ExecMap(query string, mp interface{}) (sql.Result, error) {
return db.ExecMapContext(context.Background(), query, mp)
}
// ExecStructContext exec query with map
func (db *DB) ExecStructContext(ctx context.Context, query string, st interface{}) (sql.Result, error) {
query, args, err := StructToSlice(query, st)
if err != nil {
@@ -251,6 +263,7 @@ func (db *DB) ExecStructContext(ctx context.Context, query string, st interface{
return db.ExecContext(ctx, query, args...)
}
// ExecContext exec query with args
func (db *DB) ExecContext(ctx context.Context, query string, args ...interface{}) (sql.Result, error) {
hookCtx := contexts.NewContextHook(ctx, query, args)
ctx, err := db.beforeProcess(hookCtx)
@@ -265,6 +278,7 @@ func (db *DB) ExecContext(ctx context.Context, query string, args ...interface{}
return res, nil
}
// ExecStruct exec query with struct
func (db *DB) ExecStruct(query string, st interface{}) (sql.Result, error) {
return db.ExecStructContext(context.Background(), query, st)
}
@@ -288,6 +302,7 @@ func (db *DB) afterProcess(c *contexts.ContextHook) error {
return err
}
// AddHook adds hook
func (db *DB) AddHook(h ...contexts.Hook) {
db.hooks.AddHook(h...)
}

vendor/xorm.io/xorm/core/rows.go (generated, vendored): 20 changed lines

@@ -11,11 +11,13 @@ import (
"sync"
)
// Rows represents rows of table
type Rows struct {
*sql.Rows
db *DB
}
// ToMapString returns all records
func (rs *Rows) ToMapString() ([]map[string]string, error) {
cols, err := rs.Columns()
if err != nil {
@@ -34,7 +36,7 @@ func (rs *Rows) ToMapString() ([]map[string]string, error) {
return results, nil
}
// scan data to a struct's pointer according field index
// ScanStructByIndex scan data to a struct's pointer according field index
func (rs *Rows) ScanStructByIndex(dest ...interface{}) error {
if len(dest) == 0 {
return errors.New("at least one struct")
@@ -94,7 +96,7 @@ func fieldByName(v reflect.Value, name string) reflect.Value {
return reflect.Zero(t)
}
// scan data to a struct's pointer according field name
// ScanStructByName scan data to a struct's pointer according field name
func (rs *Rows) ScanStructByName(dest interface{}) error {
vv := reflect.ValueOf(dest)
if vv.Kind() != reflect.Ptr || vv.Elem().Kind() != reflect.Struct {
@@ -120,7 +122,7 @@ func (rs *Rows) ScanStructByName(dest interface{}) error {
return rs.Rows.Scan(newDest...)
}
// scan data to a slice's pointer, slice's length should equal to columns' number
// ScanSlice scan data to a slice's pointer, slice's length should equal to columns' number
func (rs *Rows) ScanSlice(dest interface{}) error {
vv := reflect.ValueOf(dest)
if vv.Kind() != reflect.Ptr || vv.Elem().Kind() != reflect.Slice {
@@ -155,7 +157,7 @@ func (rs *Rows) ScanSlice(dest interface{}) error {
return nil
}
// scan data to a map's pointer
// ScanMap scan data to a map's pointer
func (rs *Rows) ScanMap(dest interface{}) error {
vv := reflect.ValueOf(dest)
if vv.Kind() != reflect.Ptr || vv.Elem().Kind() != reflect.Map {
@@ -187,6 +189,7 @@ func (rs *Rows) ScanMap(dest interface{}) error {
return nil
}
// Row reprents a row of a tab
type Row struct {
rows *Rows
// One of these two will be non-nil:
@@ -205,6 +208,7 @@ func NewRow(rows *Rows, err error) *Row {
return &Row{rows, err}
}
// Columns returns all columns of the row
func (row *Row) Columns() ([]string, error) {
if row.err != nil {
return nil, row.err
@@ -212,6 +216,7 @@ func (row *Row) Columns() ([]string, error) {
return row.rows.Columns()
}
// Scan retrieves all row column values
func (row *Row) Scan(dest ...interface{}) error {
if row.err != nil {
return row.err
@@ -238,6 +243,7 @@ func (row *Row) Scan(dest ...interface{}) error {
return row.rows.Close()
}
// ScanStructByName retrieves all row column values into a struct
func (row *Row) ScanStructByName(dest interface{}) error {
if row.err != nil {
return row.err
@@ -258,6 +264,7 @@ func (row *Row) ScanStructByName(dest interface{}) error {
return row.rows.Close()
}
// ScanStructByIndex retrieves all row column values into a struct
func (row *Row) ScanStructByIndex(dest interface{}) error {
if row.err != nil {
return row.err
@@ -278,7 +285,7 @@ func (row *Row) ScanStructByIndex(dest interface{}) error {
return row.rows.Close()
}
// scan data to a slice's pointer, slice's length should equal to columns' number
// ScanSlice scan data to a slice's pointer, slice's length should equal to columns' number
func (row *Row) ScanSlice(dest interface{}) error {
if row.err != nil {
return row.err
@@ -300,7 +307,7 @@ func (row *Row) ScanSlice(dest interface{}) error {
return row.rows.Close()
}
// scan data to a map's pointer
// ScanMap scan data to a map's pointer
func (row *Row) ScanMap(dest interface{}) error {
if row.err != nil {
return row.err
@@ -322,6 +329,7 @@ func (row *Row) ScanMap(dest interface{}) error {
return row.rows.Close()
}
// ToMapString returns all clumns of this record
func (row *Row) ToMapString() (map[string]string, error) {
cols, err := row.Columns()
if err != nil {

vendor/xorm.io/xorm/core/scan.go (generated, vendored): 4 changed lines

@@ -10,12 +10,14 @@ import (
"time"
)
// NullTime defines a customize type NullTime
type NullTime time.Time
var (
_ driver.Valuer = NullTime{}
)
// Scan implements driver.Valuer
func (ns *NullTime) Scan(value interface{}) error {
if value == nil {
return nil
@@ -58,9 +60,11 @@ func convertTime(dest *NullTime, src interface{}) error {
return nil
}
// EmptyScanner represents an empty scanner
type EmptyScanner struct {
}
// Scan implements
func (EmptyScanner) Scan(src interface{}) error {
return nil
}

vendor/xorm.io/xorm/core/stmt.go (generated, vendored): 19 changed lines

@@ -21,6 +21,7 @@ type Stmt struct {
query string
}
// PrepareContext creates a prepare statement
func (db *DB) PrepareContext(ctx context.Context, query string) (*Stmt, error) {
names := make(map[string]int)
var i int
@@ -42,10 +43,12 @@ func (db *DB) PrepareContext(ctx context.Context, query string) (*Stmt, error) {
return &Stmt{stmt, db, names, query}, nil
}
// Prepare creates a prepare statement
func (db *DB) Prepare(query string) (*Stmt, error) {
return db.PrepareContext(context.Background(), query)
}
// ExecMapContext execute with map
func (s *Stmt) ExecMapContext(ctx context.Context, mp interface{}) (sql.Result, error) {
vv := reflect.ValueOf(mp)
if vv.Kind() != reflect.Ptr || vv.Elem().Kind() != reflect.Map {
@@ -59,10 +62,12 @@ func (s *Stmt) ExecMapContext(ctx context.Context, mp interface{}) (sql.Result,
return s.ExecContext(ctx, args...)
}
// ExecMap executes with map
func (s *Stmt) ExecMap(mp interface{}) (sql.Result, error) {
return s.ExecMapContext(context.Background(), mp)
}
// ExecStructContext executes with struct
func (s *Stmt) ExecStructContext(ctx context.Context, st interface{}) (sql.Result, error) {
vv := reflect.ValueOf(st)
if vv.Kind() != reflect.Ptr || vv.Elem().Kind() != reflect.Struct {
@@ -76,10 +81,12 @@ func (s *Stmt) ExecStructContext(ctx context.Context, st interface{}) (sql.Resul
return s.ExecContext(ctx, args...)
}
// ExecStruct executes with struct
func (s *Stmt) ExecStruct(st interface{}) (sql.Result, error) {
return s.ExecStructContext(context.Background(), st)
}
// ExecContext with args
func (s *Stmt) ExecContext(ctx context.Context, args ...interface{}) (sql.Result, error) {
hookCtx := contexts.NewContextHook(ctx, s.query, args)
ctx, err := s.db.beforeProcess(hookCtx)
@@ -94,6 +101,7 @@ func (s *Stmt) ExecContext(ctx context.Context, args ...interface{}) (sql.Result
return res, nil
}
// QueryContext query with args
func (s *Stmt) QueryContext(ctx context.Context, args ...interface{}) (*Rows, error) {
hookCtx := contexts.NewContextHook(ctx, s.query, args)
ctx, err := s.db.beforeProcess(hookCtx)
@@ -108,10 +116,12 @@ func (s *Stmt) QueryContext(ctx context.Context, args ...interface{}) (*Rows, er
return &Rows{rows, s.db}, nil
}
// Query query with args
func (s *Stmt) Query(args ...interface{}) (*Rows, error) {
return s.QueryContext(context.Background(), args...)
}
// QueryMapContext query with map
func (s *Stmt) QueryMapContext(ctx context.Context, mp interface{}) (*Rows, error) {
vv := reflect.ValueOf(mp)
if vv.Kind() != reflect.Ptr || vv.Elem().Kind() != reflect.Map {
@@ -126,10 +136,12 @@ func (s *Stmt) QueryMapContext(ctx context.Context, mp interface{}) (*Rows, erro
return s.QueryContext(ctx, args...)
}
// QueryMap query with map
func (s *Stmt) QueryMap(mp interface{}) (*Rows, error) {
return s.QueryMapContext(context.Background(), mp)
}
// QueryStructContext query with struct
func (s *Stmt) QueryStructContext(ctx context.Context, st interface{}) (*Rows, error) {
vv := reflect.ValueOf(st)
if vv.Kind() != reflect.Ptr || vv.Elem().Kind() != reflect.Struct {
@@ -144,19 +156,23 @@ func (s *Stmt) QueryStructContext(ctx context.Context, st interface{}) (*Rows, e
return s.QueryContext(ctx, args...)
}
// QueryStruct query with struct
func (s *Stmt) QueryStruct(st interface{}) (*Rows, error) {
return s.QueryStructContext(context.Background(), st)
}
// QueryRowContext query row with args
func (s *Stmt) QueryRowContext(ctx context.Context, args ...interface{}) *Row {
rows, err := s.QueryContext(ctx, args...)
return &Row{rows, err}
}
// QueryRow query row with args
func (s *Stmt) QueryRow(args ...interface{}) *Row {
return s.QueryRowContext(context.Background(), args...)
}
// QueryRowMapContext query row with map
func (s *Stmt) QueryRowMapContext(ctx context.Context, mp interface{}) *Row {
vv := reflect.ValueOf(mp)
if vv.Kind() != reflect.Ptr || vv.Elem().Kind() != reflect.Map {
@@ -171,10 +187,12 @@ func (s *Stmt) QueryRowMapContext(ctx context.Context, mp interface{}) *Row {
return s.QueryRowContext(ctx, args...)
}
// QueryRowMap query row with map
func (s *Stmt) QueryRowMap(mp interface{}) *Row {
return s.QueryRowMapContext(context.Background(), mp)
}
// QueryRowStructContext query row with struct
func (s *Stmt) QueryRowStructContext(ctx context.Context, st interface{}) *Row {
vv := reflect.ValueOf(st)
if vv.Kind() != reflect.Ptr || vv.Elem().Kind() != reflect.Struct {
@@ -189,6 +207,7 @@ func (s *Stmt) QueryRowStructContext(ctx context.Context, st interface{}) *Row {
return s.QueryRowContext(ctx, args...)
}
// QueryRowStruct query row with struct
func (s *Stmt) QueryRowStruct(st interface{}) *Row {
return s.QueryRowStructContext(context.Background(), st)
}

vendor/xorm.io/xorm/core/tx.go (generated, vendored): 35 changed lines

@@ -22,6 +22,7 @@ type Tx struct {
ctx context.Context
}
// BeginTx begin a transaction with option
func (db *DB) BeginTx(ctx context.Context, opts *sql.TxOptions) (*Tx, error) {
hookCtx := contexts.NewContextHook(ctx, "BEGIN TRANSACTION", nil)
ctx, err := db.beforeProcess(hookCtx)
@@ -36,10 +37,12 @@ func (db *DB) BeginTx(ctx context.Context, opts *sql.TxOptions) (*Tx, error) {
return &Tx{tx, db, ctx}, nil
}
// Begin begins a transaction
func (db *DB) Begin() (*Tx, error) {
return db.BeginTx(context.Background(), nil)
}
// Commit submit the transaction
func (tx *Tx) Commit() error {
hookCtx := contexts.NewContextHook(tx.ctx, "COMMIT", nil)
ctx, err := tx.db.beforeProcess(hookCtx)
@@ -48,12 +51,10 @@ func (tx *Tx) Commit() error {
}
err = tx.Tx.Commit()
hookCtx.End(ctx, nil, err)
if err := tx.db.afterProcess(hookCtx); err != nil {
return err
}
return nil
return tx.db.afterProcess(hookCtx)
}
// Rollback rollback the transaction
func (tx *Tx) Rollback() error {
hookCtx := contexts.NewContextHook(tx.ctx, "ROLLBACK", nil)
ctx, err := tx.db.beforeProcess(hookCtx)
@@ -62,12 +63,10 @@ func (tx *Tx) Rollback() error {
}
err = tx.Tx.Rollback()
hookCtx.End(ctx, nil, err)
if err := tx.db.afterProcess(hookCtx); err != nil {
return err
}
return nil
return tx.db.afterProcess(hookCtx)
}
// PrepareContext prepare the query
func (tx *Tx) PrepareContext(ctx context.Context, query string) (*Stmt, error) {
names := make(map[string]int)
var i int
@@ -89,19 +88,23 @@ func (tx *Tx) PrepareContext(ctx context.Context, query string) (*Stmt, error) {
return &Stmt{stmt, tx.db, names, query}, nil
}
// Prepare prepare the query
func (tx *Tx) Prepare(query string) (*Stmt, error) {
return tx.PrepareContext(context.Background(), query)
}
// StmtContext creates Stmt with context
func (tx *Tx) StmtContext(ctx context.Context, stmt *Stmt) *Stmt {
stmt.Stmt = tx.Tx.StmtContext(ctx, stmt.Stmt)
return stmt
}
// Stmt creates Stmt
func (tx *Tx) Stmt(stmt *Stmt) *Stmt {
return tx.StmtContext(context.Background(), stmt)
}
// ExecMapContext executes query with args in a map
func (tx *Tx) ExecMapContext(ctx context.Context, query string, mp interface{}) (sql.Result, error) {
query, args, err := MapToSlice(query, mp)
if err != nil {
@@ -110,10 +113,12 @@ func (tx *Tx) ExecMapContext(ctx context.Context, query string, mp interface{})
return tx.ExecContext(ctx, query, args...)
}
// ExecMap executes query with args in a map
func (tx *Tx) ExecMap(query string, mp interface{}) (sql.Result, error) {
return tx.ExecMapContext(context.Background(), query, mp)
}
// ExecStructContext executes query with args in a struct
func (tx *Tx) ExecStructContext(ctx context.Context, query string, st interface{}) (sql.Result, error) {
query, args, err := StructToSlice(query, st)
if err != nil {
@@ -122,6 +127,7 @@ func (tx *Tx) ExecStructContext(ctx context.Context, query string, st interface{
return tx.ExecContext(ctx, query, args...)
}
// ExecContext executes a query with args
func (tx *Tx) ExecContext(ctx context.Context, query string, args ...interface{}) (sql.Result, error) {
hookCtx := contexts.NewContextHook(ctx, query, args)
ctx, err := tx.db.beforeProcess(hookCtx)
@@ -136,10 +142,12 @@ func (tx *Tx) ExecContext(ctx context.Context, query string, args ...interface{}
return res, err
}
// ExecStruct executes query with args in a struct
func (tx *Tx) ExecStruct(query string, st interface{}) (sql.Result, error) {
return tx.ExecStructContext(context.Background(), query, st)
}
// QueryContext query with args
func (tx *Tx) QueryContext(ctx context.Context, query string, args ...interface{}) (*Rows, error) {
hookCtx := contexts.NewContextHook(ctx, query, args)
ctx, err := tx.db.beforeProcess(hookCtx)
@@ -157,10 +165,12 @@ func (tx *Tx) QueryContext(ctx context.Context, query string, args ...interface{
return &Rows{rows, tx.db}, nil
}
// Query query with args
func (tx *Tx) Query(query string, args ...interface{}) (*Rows, error) {
return tx.QueryContext(context.Background(), query, args...)
}
// QueryMapContext query with args in a map
func (tx *Tx) QueryMapContext(ctx context.Context, query string, mp interface{}) (*Rows, error) {
query, args, err := MapToSlice(query, mp)
if err != nil {
@@ -169,10 +179,12 @@ func (tx *Tx) QueryMapContext(ctx context.Context, query string, mp interface{})
return tx.QueryContext(ctx, query, args...)
}
// QueryMap query with args in a map
func (tx *Tx) QueryMap(query string, mp interface{}) (*Rows, error) {
return tx.QueryMapContext(context.Background(), query, mp)
}
// QueryStructContext query with args in struct
func (tx *Tx) QueryStructContext(ctx context.Context, query string, st interface{}) (*Rows, error) {
query, args, err := StructToSlice(query, st)
if err != nil {
@@ -181,19 +193,23 @@ func (tx *Tx) QueryStructContext(ctx context.Context, query string, st interface
return tx.QueryContext(ctx, query, args...)
}
// QueryStruct query with args in struct
func (tx *Tx) QueryStruct(query string, st interface{}) (*Rows, error) {
return tx.QueryStructContext(context.Background(), query, st)
}
// QueryRowContext query one row with args
func (tx *Tx) QueryRowContext(ctx context.Context, query string, args ...interface{}) *Row {
rows, err := tx.QueryContext(ctx, query, args...)
return &Row{rows, err}
}
// QueryRow query one row with args
func (tx *Tx) QueryRow(query string, args ...interface{}) *Row {
return tx.QueryRowContext(context.Background(), query, args...)
}
// QueryRowMapContext query one row with args in a map
func (tx *Tx) QueryRowMapContext(ctx context.Context, query string, mp interface{}) *Row {
query, args, err := MapToSlice(query, mp)
if err != nil {
@@ -202,10 +218,12 @@ func (tx *Tx) QueryRowMapContext(ctx context.Context, query string, mp interface
return tx.QueryRowContext(ctx, query, args...)
}
// QueryRowMap query one row with args in a map
func (tx *Tx) QueryRowMap(query string, mp interface{}) *Row {
return tx.QueryRowMapContext(context.Background(), query, mp)
}
// QueryRowStructContext query one row with args in struct
func (tx *Tx) QueryRowStructContext(ctx context.Context, query string, st interface{}) *Row {
query, args, err := StructToSlice(query, st)
if err != nil {
@@ -214,6 +232,7 @@ func (tx *Tx) QueryRowStructContext(ctx context.Context, query string, st interf
return tx.QueryRowContext(ctx, query, args...)
}
// QueryRowStruct query one row with args in struct
func (tx *Tx) QueryRowStruct(query string, st interface{}) *Row {
return tx.QueryRowStructContext(context.Background(), query, st)
}
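
The Commit and Rollback hunks above replace an if-err-then-return-nil sequence with a direct return of afterProcess's result. A tiny standalone sketch of the before/after shapes (function names are illustrative, not the library's):

package main

import (
	"errors"
	"fmt"
)

func afterProcess() error { return errors.New("hook failed") }

// commitOld shows the pre-change shape: check the error, return it, else return nil.
func commitOld() error {
	if err := afterProcess(); err != nil {
		return err
	}
	return nil
}

// commitNew shows the post-change shape: return the call's result directly.
func commitNew() error {
	return afterProcess()
}

func main() {
	fmt.Println(commitOld(), commitNew())
}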

Some files were not shown because too many files have changed in this diff.