Mirror of https://github.com/go-gitea/gitea.git (synced 2025-11-05 18:32:41 +09:00)

Compare commits: 58 commits
Commits in this comparison:

- d0b8e3c8e1
- 7ff8e863a5
- c65e49d72f
- 50084daa4c
- c7db7438b7
- e11f042a95
- 87782636e6
- b935472cdf
- 8ac48584ec
- e898590c81
- d407857d97
- 8cfd6695da
- f832e8eeea
- 544ef7d394
- 5ff807acde
- 849d316d8d
- 946eb1321c
- bc82bb9cda
- f034804e5d
- c1887bfc9b
- 41a4047e79
- ac84bb7183
- 3be67e9a2b
- ce2ade05e6
- 1e76f7b5b7
- 2265058c31
- ba74fdbda9
- 0600f7972a
- 8007602b40
- 3a79f1190f
- d95489b7ed
- a9e1a37b71
- 5a589ef9ec
- 159bc8842a
- 4b771d393e
- 0c2cbfcb3b
- 8c4bf4c3b4
- 3bcf2e5c18
- ad54f008ac
- c21167e3a2
- aaa539dd2d
- e38134f707
- fa96ddb327
- a3e8450fd5
- 41422f0df0
- f773733252
- cbaf8e8785
- 1bf46836da
- 387a1bc472
- 62daf84596
- 39d209dccc
- c88392e772
- a83cde2f3f
- 332eb2f6d2
- 3ae1d7a59f
- d054c4e7f3
- 5e562e9b30
- c57e908f36
@@ -522,7 +522,7 @@ steps:
     image: plugins/s3:1
     settings:
       acl: public-read
-      bucket: releases
+      bucket: gitea-artifacts
       endpoint: https://storage.gitea.io
       path_style: true
       source: "dist/release/*"
@@ -543,7 +543,7 @@ steps:
     image: plugins/s3:1
     settings:
       acl: public-read
-      bucket: releases
+      bucket: gitea-artifacts
       endpoint: https://storage.gitea.io
       path_style: true
       source: "dist/release/*"
@@ -618,7 +618,7 @@ steps:
     image: plugins/s3:1
     settings:
       acl: public-read
-      bucket: releases
+      bucket: gitea-artifacts
       endpoint: https://storage.gitea.io
       path_style: true
       source: "dist/release/*"
CHANGELOG.md (63 lines changed)
@@ -4,6 +4,69 @@ This changelog goes through all the changes that have been made in each release
 without substantial changes to our git log; to see the highlights of what has
 been added to each release, please refer to the [blog](https://blog.gitea.io).
 
+## [1.14.4](https://github.com/go-gitea/gitea/releases/tag/v1.14.4) - 2021-07-06
+
+* BUGFIXES
+  * Fix relative links in postprocessed images (#16334) (#16340)
+  * Fix list_options GetStartEnd (#16303) (#16305)
+  * Fix API to use author for commits instead of committer (#16276) (#16277)
+  * Handle misencoding of login_source cfg in mssql (#16268) (#16275)
+  * Fixed issues not updated by commits (#16254) (#16261)
+  * Improve efficiency in FindRenderizableReferenceNumeric and getReference (#16251) (#16255)
+  * Use html.Parse rather than html.ParseFragment (#16223) (#16225)
+  * Fix milestone counters on new issue (#16183) (#16224)
+  * reqOrgMembership calls need to be preceded by reqToken (#16198) (#16219)
+
+## [1.14.3](https://github.com/go-gitea/gitea/releases/tag/v1.14.3) - 2021-06-10
+
+* SECURITY
+  * Encrypt migration credentials at rest (#15895) (#16187)
+  * Only check access tokens if they are likely to be tokens (#16164) (#16171)
+  * Add missing SameSite settings for the i_like_gitea cookie (#16037) (#16039)
+  * Fix setting of SameSite on cookies (#15989) (#15991)
+* API
+  * Repository object only count releases as releases (#16184) (#16190)
+  * EditOrg respect RepoAdminChangeTeamAccess option (#16184) (#16190)
+  * Fix overly strict edit pr permissions (#15900) (#16081)
+* BUGFIXES
+  * Run processors on whole of text (#16155) (#16185)
+  * Class `issue-keyword` is being incorrectly stripped off spans (#16163) (#16172)
+  * Fix language switch for install page (#16043) (#16128)
+  * Fix bug on getIssueIDsByRepoID (#16119) (#16124)
+  * Set self-adjusting deadline for connection writing (#16068) (#16123)
+  * Fix http path bug (#16117) (#16120)
+  * Fix data URI scramble (#16098) (#16118)
+  * Merge all deleteBranch as one function and also fix bug when delete branch don't close related PRs (#16067) (#16097)
+  * git migration: don't prompt interactively for clone credentials (#15902) (#16082)
+  * Fix case change in ownernames (#16045) (#16050)
+  * Don't manipulate input params in email notification (#16011) (#16033)
+  * Remove branch URL before IssueRefURL (#15968) (#15970)
+  * Fix layout of milestone view (#15927) (#15940)
+  * GitHub Migration, migrate draft releases too (#15884) (#15888)
+  * Close the gitrepo when deleting the repository (#15876) (#15887)
+  * Upgrade xorm to v1.1.0 (#15869) (#15885)
+  * Fix blame row height alignment (#15863) (#15883)
+  * Fix error message when saving generated LOCAL_ROOT_URL config (#15880) (#15882)
+  * Backport Fix LFS commit finder not working (#15856) (#15874)
+  * Stop calling WriteHeader in Write (#15862) (#15873)
+  * Add timeout to writing to responses (#15831) (#15872)
+  * Return go-get info on subdirs (#15642) (#15871)
+  * Restore PAM user autocreation functionality (#15825) (#15867)
+  * Fix truncate utf8 string (#15828) (#15854)
+  * Fix bound address/port for caddy's certmagic library (#15758) (#15848)
+  * Upgrade unrolled/render to v1.1.1 (#15845) (#15846)
+  * Queue manager FlushAll can loop rapidly - add delay (#15733) (#15840)
+  * Tagger can be empty, as can Commit and Author - tolerate this (#15835) (#15839)
+  * Set autocomplete off on branches selector (#15809) (#15833)
+  * Add missing error to Doctor log (#15813) (#15824)
+  * Move restore repo to internal router and invoke from command to avoid open the same db file or queues files (#15790) (#15816)
+* ENHANCEMENTS
+  * Removable media support to snap package (#16136) (#16138)
+  * Move sans-serif fallback font higher than emoji fonts (#15855) (#15892)
+* DOCKER
+  * Only write config in environment-to-ini if there are changes (#15861) (#15868)
+  * Only offer hostcertificates if they exist (#15849) (#15853)
+
 ## [1.14.2](https://github.com/go-gitea/gitea/releases/tag/v1.14.2) - 2021-05-08
 
 * API
@@ -5,15 +5,12 @@
 package cmd
 
 import (
-    "context"
-    "strings"
+    "errors"
+    "net/http"
 
     "code.gitea.io/gitea/modules/log"
-    "code.gitea.io/gitea/modules/migrations"
-    "code.gitea.io/gitea/modules/migrations/base"
+    "code.gitea.io/gitea/modules/private"
     "code.gitea.io/gitea/modules/setting"
-    "code.gitea.io/gitea/modules/storage"
-    pull_service "code.gitea.io/gitea/services/pull"
 
     "github.com/urfave/cli"
 )
@@ -50,70 +47,18 @@ wiki, issues, labels, releases, release_assets, milestones, pull_requests, comme
 }
 
 func runRestoreRepository(ctx *cli.Context) error {
-    if err := initDB(); err != nil {
-        return err
-    }
-
-    log.Trace("AppPath: %s", setting.AppPath)
-    log.Trace("AppWorkPath: %s", setting.AppWorkPath)
-    log.Trace("Custom path: %s", setting.CustomPath)
-    log.Trace("Log path: %s", setting.LogRootPath)
-    setting.InitDBConfig()
-
-    if err := storage.Init(); err != nil {
-        return err
-    }
-
-    if err := pull_service.Init(); err != nil {
-        return err
-    }
-
-    var opts = base.MigrateOptions{
-        RepoName: ctx.String("repo_name"),
-    }
-
-    if len(ctx.String("units")) == 0 {
-        opts.Wiki = true
-        opts.Issues = true
-        opts.Milestones = true
-        opts.Labels = true
-        opts.Releases = true
-        opts.Comments = true
-        opts.PullRequests = true
-        opts.ReleaseAssets = true
-    } else {
-        units := strings.Split(ctx.String("units"), ",")
-        for _, unit := range units {
-            switch strings.ToLower(unit) {
-            case "wiki":
-                opts.Wiki = true
-            case "issues":
-                opts.Issues = true
-            case "milestones":
-                opts.Milestones = true
-            case "labels":
-                opts.Labels = true
-            case "releases":
-                opts.Releases = true
-            case "release_assets":
-                opts.ReleaseAssets = true
-            case "comments":
-                opts.Comments = true
-            case "pull_requests":
-                opts.PullRequests = true
-            }
-        }
-    }
-
-    if err := migrations.RestoreRepository(
-        context.Background(),
+    setting.NewContext()
+
+    statusCode, errStr := private.RestoreRepo(
         ctx.String("repo_dir"),
         ctx.String("owner_name"),
         ctx.String("repo_name"),
-    ); err != nil {
-        log.Fatal("Failed to restore repository: %v", err)
-        return err
+        ctx.StringSlice("units"),
+    )
+    if statusCode == http.StatusOK {
+        return nil
     }
 
-    return nil
+    log.Fatal("Failed to restore repository: %v", errStr)
+    return errors.New(errStr)
 }
@@ -175,7 +175,7 @@ func setPort(port string) error {
 
         cfg.Section("server").Key("LOCAL_ROOT_URL").SetValue(defaultLocalURL)
         if err := cfg.SaveTo(setting.CustomConf); err != nil {
-            return fmt.Errorf("Error saving generated JWT Secret to custom config: %v", err)
+            return fmt.Errorf("Error saving generated LOCAL_ROOT_URL to custom config: %v", err)
         }
     }
     return nil
@@ -6,6 +6,7 @@ package cmd
 
 import (
     "net/http"
+    "strconv"
     "strings"
 
     "code.gitea.io/gitea/modules/log"
@@ -22,6 +23,15 @@ func runLetsEncrypt(listenAddr, domain, directory, email string, m http.Handler)
     // TODO: these are placeholders until we add options for each in settings with appropriate warning
     enableHTTPChallenge := true
     enableTLSALPNChallenge := true
+    altHTTPPort := 0
+    altTLSALPNPort := 0
+
+    if p, err := strconv.Atoi(setting.PortToRedirect); err == nil {
+        altHTTPPort = p
+    }
+    if p, err := strconv.Atoi(setting.HTTPPort); err == nil {
+        altTLSALPNPort = p
+    }
 
     magic := certmagic.NewDefault()
     magic.Storage = &certmagic.FileStorage{Path: directory}
@@ -30,6 +40,9 @@ func runLetsEncrypt(listenAddr, domain, directory, email string, m http.Handler)
         Agreed:                  setting.LetsEncryptTOS,
         DisableHTTPChallenge:    !enableHTTPChallenge,
         DisableTLSALPNChallenge: !enableTLSALPNChallenge,
+        ListenHost:              setting.HTTPAddr,
+        AltTLSALPNPort:          altTLSALPNPort,
+        AltHTTPPort:             altHTTPPort,
     })
 
     magic.Issuer = myACME
@@ -110,6 +110,8 @@ func runEnvironmentToIni(c *cli.Context) error {
     }
     cfg.NameMapper = ini.SnackCase
 
+    changed := false
+
     prefix := c.String("prefix") + "__"
 
     for _, kv := range os.Environ() {
@@ -143,15 +145,21 @@ func runEnvironmentToIni(c *cli.Context) error {
                 continue
             }
         }
+        oldValue := key.Value()
+        if !changed && oldValue != value {
+            changed = true
+        }
         key.SetValue(value)
     }
     destination := c.String("out")
     if len(destination) == 0 {
         destination = setting.CustomConf
     }
-    err = cfg.SaveTo(destination)
-    if err != nil {
-        return err
+    if destination != setting.CustomConf || changed {
+        err = cfg.SaveTo(destination)
+        if err != nil {
+            return err
+        }
     }
     if c.Bool("clear") {
         for _, kv := range os.Environ() {
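The change above comes down to one guard: remember whether any key's value actually moved, and only rewrite the file when something did (or when writing to a different destination than the config that was loaded). The changelog lists this under DOCKER, where rewriting app.ini on every container start would be a needless write. A minimal sketch of that pattern using gopkg.in/ini.v1, the same library the command uses; the function name and the simplified override shape are illustrative, not the actual environment-to-ini code:

```go
package sketch

import "gopkg.in/ini.v1"

// applyAndSave applies overrides to an already-loaded INI file, tracks whether
// any value actually changed, and skips rewriting the file when nothing
// changed and the destination is the file that was loaded.
func applyAndSave(cfg *ini.File, overrides map[string]map[string]string, destination, loadedFrom string) error {
	changed := false
	for section, keys := range overrides {
		for name, value := range keys {
			key := cfg.Section(section).Key(name)
			if key.Value() != value {
				changed = true
			}
			key.SetValue(value)
		}
	}
	if destination != loadedFrom || changed {
		return cfg.SaveTo(destination)
	}
	return nil
}
```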
@@ -281,6 +281,10 @@ HTTP_PORT = 3000
 ; PORT_TO_REDIRECT.
 REDIRECT_OTHER_PORT = false
 PORT_TO_REDIRECT = 80
+; Timeout for any write to the connection. (Set to 0 to disable all timeouts.)
+PER_WRITE_TIMEOUT = 30s
+; Timeout per Kb written to connections.
+PER_WRITE_PER_KB_TIMEOUT = 30s
 ; Permission for unix socket
 UNIX_SOCKET_PERMISSION = 666
 ; Local (DMZ) URL for Gitea workers (such as SSH update) accessing web service.
@@ -24,9 +24,29 @@ if [ ! -f /data/ssh/ssh_host_ecdsa_key ]; then
     ssh-keygen -t ecdsa -b 256 -f /data/ssh/ssh_host_ecdsa_key -N "" > /dev/null
 fi
 
+if [ -e /data/ssh/ssh_host_ed25519_cert ]; then
+    SSH_ED25519_CERT=${SSH_ED25519_CERT:-"/data/ssh/ssh_host_ed25519_cert"}
+fi
+
+if [ -e /data/ssh/ssh_host_rsa_cert ]; then
+    SSH_RSA_CERT=${SSH_RSA_CERT:-"/data/ssh/ssh_host_rsa_cert"}
+fi
+
+if [ -e /data/ssh/ssh_host_ecdsa_cert ]; then
+    SSH_ECDSA_CERT=${SSH_ECDSA_CERT:-"/data/ssh/ssh_host_ecdsa_cert"}
+fi
+
+if [ -e /data/ssh/ssh_host_dsa_cert ]; then
+    SSH_DSA_CERT=${SSH_DSA_CERT:-"/data/ssh/ssh_host_dsa_cert"}
+fi
+
 if [ -d /etc/ssh ]; then
     SSH_PORT=${SSH_PORT:-"22"} \
     SSH_LISTEN_PORT=${SSH_LISTEN_PORT:-"${SSH_PORT}"} \
+    SSH_ED25519_CERT="${SSH_ED25519_CERT:+"HostCertificate "}${SSH_ED25519_CERT}" \
+    SSH_RSA_CERT="${SSH_RSA_CERT:+"HostCertificate "}${SSH_RSA_CERT}" \
+    SSH_ECDSA_CERT="${SSH_ECDSA_CERT:+"HostCertificate "}${SSH_ECDSA_CERT}" \
+    SSH_DSA_CERT="${SSH_DSA_CERT:+"HostCertificate "}${SSH_DSA_CERT}" \
     envsubst < /etc/templates/sshd_config > /etc/ssh/sshd_config
 
     chmod 0644 /etc/ssh/sshd_config
@@ -8,13 +8,13 @@ ListenAddress ::
 LogLevel INFO
 
 HostKey /data/ssh/ssh_host_ed25519_key
-HostCertificate /data/ssh/ssh_host_ed25519_cert
+${SSH_ED25519_CERT}
 HostKey /data/ssh/ssh_host_rsa_key
-HostCertificate /data/ssh/ssh_host_rsa_cert
+${SSH_RSA_CERT}
 HostKey /data/ssh/ssh_host_ecdsa_key
-HostCertificate /data/ssh/ssh_host_ecdsa_cert
+${SSH_ECDSA_CERT}
 HostKey /data/ssh/ssh_host_dsa_key
-HostCertificate /data/ssh/ssh_host_dsa_cert
+${SSH_DSA_CERT}
 
 AuthorizedKeysFile .ssh/authorized_keys
 AuthorizedPrincipalsFile .ssh/authorized_principals
@@ -31,4 +31,4 @@ update: $(THEME)
 $(THEME): $(THEME)/theme.toml
 $(THEME)/theme.toml:
 	mkdir -p $$(dirname $@)
-	curl -s $(ARCHIVE) | tar xz -C $$(dirname $@)
+	curl -L -s $(ARCHIVE) | tar xz -C $$(dirname $@)
@@ -237,6 +237,9 @@ Values containing `#` or `;` must be quoted using `` ` `` or `"""`.
    most cases you do not need to change the default value. Alter it only if
    your SSH server node is not the same as HTTP node. Do not set this variable
    if `PROTOCOL` is set to `unix`.
+- `PER_WRITE_TIMEOUT`: **30s**: Timeout for any write to the connection. (Set to 0 to
+   disable all timeouts.)
+- `PER_WRITE_PER_KB_TIMEOUT`: **10s**: Timeout per Kb written to connections.
 
 - `DISABLE_SSH`: **false**: Disable SSH feature when it's not available.
 - `START_SSH_SERVER`: **false**: When enabled, use the built-in SSH server.
@@ -260,6 +263,9 @@ Values containing `#` or `;` must be quoted using `` ` `` or `"""`.
 - `SSH_KEY_TEST_PATH`: **/tmp**: Directory to create temporary files in when testing public keys using ssh-keygen, default is the system temporary directory.
 - `SSH_KEYGEN_PATH`: **ssh-keygen**: Path to ssh-keygen, default is 'ssh-keygen' which means the shell is responsible for finding out which one to call.
 - `SSH_EXPOSE_ANONYMOUS`: **false**: Enable exposure of SSH clone URL to anonymous visitors, default is false.
+- `SSH_PER_WRITE_TIMEOUT`: **30s**: Timeout for any write to the SSH connections. (Set to
+   0 to disable all timeouts.)
+- `SSH_PER_WRITE_PER_KB_TIMEOUT`: **10s**: Timeout per Kb written to SSH connections.
 - `MINIMUM_KEY_SIZE_CHECK`: **true**: Indicate whether to check minimum key size with corresponding type.
 
 - `OFFLINE_MODE`: **false**: Disables use of CDN for static files and Gravatar for profile pictures.
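As background for the two new settings documented above, a per-write timeout means the write deadline is pushed forward before every write, scaled by how much data is about to be written, so a slow but progressing client keeps its connection while a stalled one eventually times out. A rough illustration of the idea in plain Go; this is not Gitea's implementation, and the type and field names are invented:

```go
package sketch

import (
	"net"
	"time"
)

// timeoutConn wraps a net.Conn and extends the write deadline before every
// write: a base allowance (PER_WRITE_TIMEOUT) plus an allowance per KiB
// written (PER_WRITE_PER_KB_TIMEOUT).
type timeoutConn struct {
	net.Conn
	perWrite      time.Duration // e.g. 30 * time.Second
	perWritePerKB time.Duration // e.g. 10 * time.Second
}

func (c *timeoutConn) Write(p []byte) (int, error) {
	if c.perWrite > 0 || c.perWritePerKB > 0 {
		deadline := time.Now().Add(c.perWrite + time.Duration(len(p)/1024)*c.perWritePerKB)
		// Error ignored for brevity; a real implementation would handle it.
		_ = c.Conn.SetWriteDeadline(deadline)
	}
	return c.Conn.Write(p)
}
```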
go.mod (4 lines changed)
@@ -122,7 +122,7 @@ require (
 	github.com/unknwon/com v1.0.1
 	github.com/unknwon/i18n v0.0.0-20200823051745-09abd91c7f2c
 	github.com/unknwon/paginater v0.0.0-20200328080006-042474bd0eae
-	github.com/unrolled/render v1.1.0
+	github.com/unrolled/render v1.1.1
 	github.com/urfave/cli v1.22.5
 	github.com/willf/bitset v1.1.11 // indirect
 	github.com/xanzy/go-gitlab v0.44.0
@@ -149,7 +149,7 @@ require (
 	mvdan.cc/xurls/v2 v2.2.0
 	strk.kbt.io/projects/go/libravatar v0.0.0-20191008002943-06d1c002b251
 	xorm.io/builder v0.3.9
-	xorm.io/xorm v1.0.7
+	xorm.io/xorm v1.1.0
 )
 
 replace github.com/hashicorp/go-version => github.com/6543/go-version v1.2.4
go.sum (41 lines changed)
@@ -996,6 +996,8 @@ github.com/quasoft/websspi v1.0.0 h1:5nDgdM5xSur9s+B5w2xQ5kxf5nUGqgFgU4W0aDLZ8Mw
 github.com/quasoft/websspi v1.0.0/go.mod h1:HmVdl939dQ0WIXZhyik+ARdI03M6bQzaSEKcgpFmewk=
 github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
 github.com/rcrowley/go-metrics v0.0.0-20190826022208-cac0b30c2563/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
+github.com/remyoudompheng/bigfft v0.0.0-20200410134404-eec4a21b6bb0 h1:OdAsTTz6OkFY5QxjkYwrChwuRruF69c169dPK26NUlk=
+github.com/remyoudompheng/bigfft v0.0.0-20200410134404-eec4a21b6bb0/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
 github.com/rivo/uniseg v0.1.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
 github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY=
 github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
@@ -1113,10 +1115,8 @@ github.com/unknwon/i18n v0.0.0-20200823051745-09abd91c7f2c h1:679/gJXwrsHC3RATr0
 github.com/unknwon/i18n v0.0.0-20200823051745-09abd91c7f2c/go.mod h1:+5rDk6sDGpl3azws3O+f+GpFSyN9GVr0K8cvQLQM2ZQ=
 github.com/unknwon/paginater v0.0.0-20200328080006-042474bd0eae h1:ihaXiJkaca54IaCSnEXtE/uSZOmPxKZhDfVLrzZLFDs=
 github.com/unknwon/paginater v0.0.0-20200328080006-042474bd0eae/go.mod h1:1fdkY6xxl6ExVs2QFv7R0F5IRZHKA8RahhB9fMC9RvM=
-github.com/unrolled/render v1.0.3 h1:baO+NG1bZSF2WR4zwh+0bMWauWky7DVrTOfvE2w+aFo=
-github.com/unrolled/render v1.0.3/go.mod h1:gN9T0NhL4Bfbwu8ann7Ry/TGHYfosul+J0obPf6NBdM=
-github.com/unrolled/render v1.1.0 h1:gvpR9hHxTt6DcGqRYuVVFcfd8rtK+nyEPUJN06KB57Q=
-github.com/unrolled/render v1.1.0/go.mod h1:gN9T0NhL4Bfbwu8ann7Ry/TGHYfosul+J0obPf6NBdM=
+github.com/unrolled/render v1.1.1 h1:FpzNzkvlJQIlVdVaqeVBGWiCS8gpbmjtrKpDmCn6p64=
+github.com/unrolled/render v1.1.1/go.mod h1:gN9T0NhL4Bfbwu8ann7Ry/TGHYfosul+J0obPf6NBdM=
 github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
 github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
 github.com/urfave/cli v1.22.5 h1:lNq9sAHXK2qfdI8W+GRItjCEkI+2oR4d+MEHy1CKXoU=
@@ -1502,6 +1502,7 @@ golang.org/x/tools v0.0.0-20200928182047-19e03678916f/go.mod h1:z6u4i615ZeAfBE4X
 golang.org/x/tools v0.0.0-20200929161345-d7fc70abf50f/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU=
 golang.org/x/tools v0.0.0-20201022035929-9cf592e881e9/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
 golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/tools v0.0.0-20201124115921-2c860bdd6e78/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
 golang.org/x/tools v0.0.0-20201125231158-b5590deeca9b/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
 golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
 golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
@@ -1668,6 +1669,33 @@ honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt
 honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
 honnef.co/go/tools v0.0.1-2020.1.4 h1:UoveltGrhghAA7ePc+e+QYDHXrBps2PqFZiHkGR/xK8=
 honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
+modernc.org/cc/v3 v3.31.5-0.20210308123301-7a3e9dab9009 h1:u0oCo5b9wyLr++HF3AN9JicGhkUxJhMz51+8TIZH9N0=
+modernc.org/cc/v3 v3.31.5-0.20210308123301-7a3e9dab9009/go.mod h1:0R6jl1aZlIl2avnYfbfHBS1QB6/f+16mihBObaBC878=
+modernc.org/ccgo/v3 v3.9.0 h1:JbcEIqjw4Agf+0g3Tc85YvfYqkkFOv6xBwS4zkfqSoA=
+modernc.org/ccgo/v3 v3.9.0/go.mod h1:nQbgkn8mwzPdp4mm6BT6+p85ugQ7FrGgIcYaE7nSrpY=
+modernc.org/httpfs v1.0.6 h1:AAgIpFZRXuYnkjftxTAZwMIiwEqAfk8aVB2/oA6nAeM=
+modernc.org/httpfs v1.0.6/go.mod h1:7dosgurJGp0sPaRanU53W4xZYKh14wfzX420oZADeHM=
+modernc.org/libc v1.7.13-0.20210308123627-12f642a52bb8/go.mod h1:U1eq8YWr/Kc1RWCMFUWEdkTg8OTcfLw2kY8EDwl039w=
+modernc.org/libc v1.8.0 h1:Pp4uv9g0csgBMpGPABKtkieF6O5MGhfGo6ZiOdlYfR8=
+modernc.org/libc v1.8.0/go.mod h1:U1eq8YWr/Kc1RWCMFUWEdkTg8OTcfLw2kY8EDwl039w=
+modernc.org/mathutil v1.1.1/go.mod h1:mZW8CKdRPY1v87qxC/wUdX5O1qDzXMP5TH3wjfpga6E=
+modernc.org/mathutil v1.2.2 h1:+yFk8hBprV+4c0U9GjFtL+dV3N8hOJ8JCituQcMShFY=
+modernc.org/mathutil v1.2.2/go.mod h1:mZW8CKdRPY1v87qxC/wUdX5O1qDzXMP5TH3wjfpga6E=
+modernc.org/memory v1.0.4 h1:utMBrFcpnQDdNsmM6asmyH/FM9TqLPS7XF7otpJmrwM=
+modernc.org/memory v1.0.4/go.mod h1:nV2OApxradM3/OVbs2/0OsP6nPfakXpi50C7dcoHXlc=
+modernc.org/opt v0.1.1 h1:/0RX92k9vwVeDXj+Xn23DKp2VJubL7k8qNffND6qn3A=
+modernc.org/opt v0.1.1/go.mod h1:WdSiB5evDcignE70guQKxYUl14mgWtbClRi5wmkkTX0=
+modernc.org/sqlite v1.10.1-0.20210314190707-798bbeb9bb84 h1:rgEUzE849tFlHSoeCrKyS9cZAljC+DY7MdMHKq6R6sY=
+modernc.org/sqlite v1.10.1-0.20210314190707-798bbeb9bb84/go.mod h1:PGzq6qlhyYjL6uVbSgS6WoF7ZopTW/sI7+7p+mb4ZVU=
+modernc.org/strutil v1.1.0 h1:+1/yCzZxY2pZwwrsbH+4T7BQMoLQ9QiBshRC9eicYsc=
+modernc.org/strutil v1.1.0/go.mod h1:lstksw84oURvj9y3tn8lGvRxyRC1S2+g5uuIzNfIOBs=
+modernc.org/tcl v1.5.0 h1:euZSUNfE0Fd4W8VqXI1Ly1v7fqDJoBuAV88Ea+SnaSs=
+modernc.org/tcl v1.5.0/go.mod h1:gb57hj4pO8fRrK54zveIfFXBaMHK3SKJNWcmRw1cRzc=
+modernc.org/token v1.0.0 h1:a0jaWiNMDhDUtqOj09wvjWWAqd3q7WpBulmL9H2egsk=
+modernc.org/token v1.0.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=
+modernc.org/z v1.0.1-0.20210308123920-1f282aa71362/go.mod h1:8/SRk5C/HgiQWCgXdfpb+1RvhORdkz5sw72d3jjtyqA=
+modernc.org/z v1.0.1 h1:WyIDpEpAIx4Hel6q/Pcgj/VhaQV5XPJ2I6ryIYbjnpc=
+modernc.org/z v1.0.1/go.mod h1:8/SRk5C/HgiQWCgXdfpb+1RvhORdkz5sw72d3jjtyqA=
 mvdan.cc/xurls/v2 v2.2.0 h1:NSZPykBXJFCetGZykLAxaL6SIpvbVy/UFEniIfHAa8A=
 mvdan.cc/xurls/v2 v2.2.0/go.mod h1:EV1RMtya9D6G5DMYPGD8zTQzaHet6Jh8gFlRgGRJeO8=
 rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
@@ -1678,8 +1706,9 @@ sourcegraph.com/sourcegraph/appdash v0.0.0-20190731080439-ebfcffb1b5c0/go.mod h1
 strk.kbt.io/projects/go/libravatar v0.0.0-20191008002943-06d1c002b251 h1:mUcz5b3FJbP5Cvdq7Khzn6J9OCUQJaBwgBkCR+MOwSs=
 strk.kbt.io/projects/go/libravatar v0.0.0-20191008002943-06d1c002b251/go.mod h1:FJGmPh3vz9jSos1L/F91iAgnC/aejc0wIIrF2ZwJxdY=
 xorm.io/builder v0.3.7/go.mod h1:aUW0S9eb9VCaPohFCH3j7czOx1PMW3i1HrSzbLYGBSE=
+xorm.io/builder v0.3.8/go.mod h1:aUW0S9eb9VCaPohFCH3j7czOx1PMW3i1HrSzbLYGBSE=
 xorm.io/builder v0.3.9 h1:Sd65/LdWyO7LR8+Cbd+e7mm3sK/7U9k0jS3999IDHMc=
 xorm.io/builder v0.3.9/go.mod h1:aUW0S9eb9VCaPohFCH3j7czOx1PMW3i1HrSzbLYGBSE=
 xorm.io/xorm v1.0.6/go.mod h1:uF9EtbhODq5kNWxMbnBEj8hRRZnlcNSz2t2N7HW/+A4=
-xorm.io/xorm v1.0.7 h1:26yBTDVI+CfQpVz2Y88fISh+aiJXIPP4eNoTJlwzsC4=
-xorm.io/xorm v1.0.7/go.mod h1:uF9EtbhODq5kNWxMbnBEj8hRRZnlcNSz2t2N7HW/+A4=
+xorm.io/xorm v1.1.0 h1:mkEsQXLauZajiOld2cB2PkFcUZKePepPgs1bC1dw8RA=
+xorm.io/xorm v1.1.0/go.mod h1:EDzNHMuCVZNszkIRSLL2nI0zX+nQE8RstAVranlSfqI=
@@ -223,7 +223,7 @@ func TestAPIViewRepo(t *testing.T) {
 	DecodeJSON(t, resp, &repo)
 	assert.EqualValues(t, 1, repo.ID)
 	assert.EqualValues(t, "repo1", repo.Name)
-	assert.EqualValues(t, 2, repo.Releases)
+	assert.EqualValues(t, 1, repo.Releases)
 	assert.EqualValues(t, 1, repo.OpenIssues)
 	assert.EqualValues(t, 3, repo.OpenPulls)
 
@@ -144,7 +144,9 @@ func TestAPITeamSearch(t *testing.T) {
 	var results TeamSearchResults
 
 	session := loginUser(t, user.Name)
+	csrf := GetCSRF(t, session, "/"+org.Name)
 	req := NewRequestf(t, "GET", "/api/v1/orgs/%s/teams/search?q=%s", org.Name, "_team")
+	req.Header.Add("X-Csrf-Token", csrf)
 	resp := session.MakeRequest(t, req, http.StatusOK)
 	DecodeJSON(t, resp, &results)
 	assert.NotEmpty(t, results.Data)
@@ -154,7 +156,9 @@ func TestAPITeamSearch(t *testing.T) {
 	// no access if not organization member
 	user5 := models.AssertExistsAndLoadBean(t, &models.User{ID: 5}).(*models.User)
 	session = loginUser(t, user5.Name)
+	csrf = GetCSRF(t, session, "/"+org.Name)
 	req = NewRequestf(t, "GET", "/api/v1/orgs/%s/teams/search?q=%s", org.Name, "team")
+	req.Header.Add("X-Csrf-Token", csrf)
 	resp = session.MakeRequest(t, req, http.StatusForbidden)
 
 }
integrations/git_smart_http_test.go (new file, 69 lines)
@@ -0,0 +1,69 @@
// Copyright 2021 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.

package integrations

import (
	"io/ioutil"
	"net/http"
	"net/url"
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestGitSmartHTTP(t *testing.T) {
	onGiteaRun(t, testGitSmartHTTP)
}

func testGitSmartHTTP(t *testing.T, u *url.URL) {
	var kases = []struct {
		p    string
		code int
	}{
		{
			p:    "user2/repo1/info/refs",
			code: 200,
		},
		{
			p:    "user2/repo1/HEAD",
			code: 200,
		},
		{
			p:    "user2/repo1/objects/info/alternates",
			code: 404,
		},
		{
			p:    "user2/repo1/objects/info/http-alternates",
			code: 404,
		},
		{
			p:    "user2/repo1/../../custom/conf/app.ini",
			code: 404,
		},
		{
			p:    "user2/repo1/objects/info/../../../../custom/conf/app.ini",
			code: 404,
		},
		{
			p:    `user2/repo1/objects/info/..\..\..\..\custom\conf\app.ini`,
			code: 400,
		},
	}

	for _, kase := range kases {
		t.Run(kase.p, func(t *testing.T) {
			p := u.String() + kase.p
			req, err := http.NewRequest("GET", p, nil)
			assert.NoError(t, err)
			req.SetBasicAuth("user2", userPassword)
			resp, err := http.DefaultClient.Do(req)
			assert.NoError(t, err)
			defer resp.Body.Close()
			assert.EqualValues(t, kase.code, resp.StatusCode)
			_, err = ioutil.ReadAll(resp.Body)
			assert.NoError(t, err)
		})
	}
}
integrations/goget_test.go (new file, 35 lines)
@@ -0,0 +1,35 @@
// Copyright 2021 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.

package integrations

import (
	"fmt"
	"net/http"
	"testing"

	"code.gitea.io/gitea/modules/setting"
	"github.com/stretchr/testify/assert"
)

func TestGoGet(t *testing.T) {
	defer prepareTestEnv(t)()

	req := NewRequest(t, "GET", "/blah/glah/plah?go-get=1")
	resp := MakeRequest(t, req, http.StatusOK)

	expected := fmt.Sprintf(`<!doctype html>
<html>
	<head>
		<meta name="go-import" content="%[1]s:%[2]s/blah/glah git %[3]sblah/glah.git">
		<meta name="go-source" content="%[1]s:%[2]s/blah/glah _ %[3]sblah/glah/src/branch/master{/dir} %[3]sblah/glah/src/branch/master{/dir}/{file}#L{line}">
	</head>
	<body>
		go get --insecure %[1]s:%[2]s/blah/glah
	</body>
</html>
`, setting.Domain, setting.HTTPPort, setting.AppURL)

	assert.Equal(t, expected, resp.Body.String())
}
@@ -141,6 +141,12 @@ func (milestone *Milestone) checkForConsistency(t *testing.T) {
 	actual := getCount(t, x.Where("is_closed=?", true), &Issue{MilestoneID: milestone.ID})
 	assert.EqualValues(t, milestone.NumClosedIssues, actual,
 		"Unexpected number of closed issues for milestone %+v", milestone)
+
+	completeness := 0
+	if milestone.NumIssues > 0 {
+		completeness = milestone.NumClosedIssues * 100 / milestone.NumIssues
+	}
+	assert.Equal(t, completeness, milestone.Completeness)
 }
 
 func (label *Label) checkForConsistency(t *testing.T) {
@@ -648,8 +648,10 @@ func (issue *Issue) doChangeStatus(e *xorm.Session, doer *User, isMergePull bool
 	}
 
 	// Update issue count of milestone
-	if err := updateMilestoneClosedNum(e, issue.MilestoneID); err != nil {
-		return nil, err
+	if issue.MilestoneID > 0 {
+		if err := updateMilestoneCounters(e, issue.MilestoneID); err != nil {
+			return nil, err
+		}
 	}
 
 	if err := issue.updateClosedNum(e); err != nil {
@@ -912,7 +914,7 @@ func newIssue(e *xorm.Session, doer *User, opts NewIssueOptions) (err error) {
 	opts.Issue.Index = inserted.Index
 
 	if opts.Issue.MilestoneID > 0 {
-		if _, err = e.Exec("UPDATE `milestone` SET num_issues=num_issues+1 WHERE id=?", opts.Issue.MilestoneID); err != nil {
+		if err := updateMilestoneCounters(e, opts.Issue.MilestoneID); err != nil {
 			return err
 		}
 
@@ -1086,7 +1088,7 @@ func getIssuesByIDs(e Engine, issueIDs []int64) ([]*Issue, error) {
 
 func getIssueIDsByRepoID(e Engine, repoID int64) ([]int64, error) {
 	ids := make([]int64, 0, 10)
-	err := e.Table("issue").Where("repo_id = ?", repoID).Find(&ids)
+	err := e.Table("issue").Cols("id").Where("repo_id = ?", repoID).Find(&ids)
 	return ids, err
 }
 
@@ -129,8 +129,12 @@ func GetMilestoneByRepoIDANDName(repoID int64, name string) (*Milestone, error)
 
 // GetMilestoneByID returns the milestone via id .
 func GetMilestoneByID(id int64) (*Milestone, error) {
+	return getMilestoneByID(x, id)
+}
+
+func getMilestoneByID(e Engine, id int64) (*Milestone, error) {
 	var m Milestone
-	has, err := x.ID(id).Get(&m)
+	has, err := e.ID(id).Get(&m)
 	if err != nil {
 		return nil, err
 	} else if !has {
@@ -155,10 +159,6 @@ func UpdateMilestone(m *Milestone, oldIsClosed bool) error {
 		return err
 	}
 
-	if err := updateMilestoneCompleteness(sess, m.ID); err != nil {
-		return err
-	}
-
 	// if IsClosed changed, update milestone numbers of repository
 	if oldIsClosed != m.IsClosed {
 		if err := updateRepoMilestoneNum(sess, m.RepoID); err != nil {
@@ -171,23 +171,31 @@ func UpdateMilestone(m *Milestone, oldIsClosed bool) error {
 
 func updateMilestone(e Engine, m *Milestone) error {
 	m.Name = strings.TrimSpace(m.Name)
-	_, err := e.ID(m.ID).AllCols().
+	_, err := e.ID(m.ID).AllCols().Update(m)
+	if err != nil {
+		return err
+	}
+	return updateMilestoneCounters(e, m.ID)
+}
+
+// updateMilestoneCounters calculates NumIssues, NumClosesIssues and Completeness
+func updateMilestoneCounters(e Engine, id int64) error {
+	_, err := e.ID(id).
 		SetExpr("num_issues", builder.Select("count(*)").From("issue").Where(
-			builder.Eq{"milestone_id": m.ID},
+			builder.Eq{"milestone_id": id},
 		)).
 		SetExpr("num_closed_issues", builder.Select("count(*)").From("issue").Where(
 			builder.Eq{
-				"milestone_id": m.ID,
+				"milestone_id": id,
 				"is_closed":    true,
 			},
 		)).
-		Update(m)
-	return err
-}
-
-func updateMilestoneCompleteness(e Engine, milestoneID int64) error {
-	_, err := e.Exec("UPDATE `milestone` SET completeness=100*num_closed_issues/(CASE WHEN num_issues > 0 THEN num_issues ELSE 1 END) WHERE id=?",
-		milestoneID,
+		Update(&Milestone{})
+	if err != nil {
+		return err
+	}
+	_, err = e.Exec("UPDATE `milestone` SET completeness=100*num_closed_issues/(CASE WHEN num_issues > 0 THEN num_issues ELSE 1 END) WHERE id=?",
+		id,
 	)
 	return err
 }
@@ -256,25 +264,15 @@ func changeMilestoneAssign(e *xorm.Session, doer *User, issue *Issue, oldMilesto
 	}
 
 	if oldMilestoneID > 0 {
-		if err := updateMilestoneTotalNum(e, oldMilestoneID); err != nil {
+		if err := updateMilestoneCounters(e, oldMilestoneID); err != nil {
 			return err
 		}
-		if issue.IsClosed {
-			if err := updateMilestoneClosedNum(e, oldMilestoneID); err != nil {
-				return err
-			}
-		}
 	}
 
 	if issue.MilestoneID > 0 {
-		if err := updateMilestoneTotalNum(e, issue.MilestoneID); err != nil {
+		if err := updateMilestoneCounters(e, issue.MilestoneID); err != nil {
 			return err
 		}
-		if issue.IsClosed {
-			if err := updateMilestoneClosedNum(e, issue.MilestoneID); err != nil {
-				return err
-			}
-		}
 	}
 
 	if oldMilestoneID > 0 || issue.MilestoneID > 0 {
@@ -558,29 +556,6 @@ func updateRepoMilestoneNum(e Engine, repoID int64) error {
 	return err
 }
 
-func updateMilestoneTotalNum(e Engine, milestoneID int64) (err error) {
-	if _, err = e.Exec("UPDATE `milestone` SET num_issues=(SELECT count(*) FROM issue WHERE milestone_id=?) WHERE id=?",
-		milestoneID,
-		milestoneID,
-	); err != nil {
-		return
-	}
-
-	return updateMilestoneCompleteness(e, milestoneID)
-}
-
-func updateMilestoneClosedNum(e Engine, milestoneID int64) (err error) {
-	if _, err = e.Exec("UPDATE `milestone` SET num_closed_issues=(SELECT count(*) FROM issue WHERE milestone_id=? AND is_closed=?) WHERE id=?",
-		milestoneID,
-		true,
-		milestoneID,
-	); err != nil {
-		return
-	}
-
-	return updateMilestoneCompleteness(e, milestoneID)
-}
-
 // _____ _ _ _____ _
 // |_ _| __ __ _ ___| | _____ __| |_ _(_)_ __ ___ ___ ___
 // | || '__/ _` |/ __| |/ / _ \/ _` | | | | | '_ ` _ \ / _ \/ __|
@@ -215,7 +215,7 @@ func TestChangeMilestoneStatus(t *testing.T) {
 	CheckConsistencyFor(t, &Repository{ID: milestone.RepoID}, &Milestone{})
 }
 
-func TestUpdateMilestoneClosedNum(t *testing.T) {
+func TestUpdateMilestoneCounters(t *testing.T) {
 	assert.NoError(t, PrepareTestDatabase())
 	issue := AssertExistsAndLoadBean(t, &Issue{MilestoneID: 1},
 		"is_closed=0").(*Issue)
@@ -224,14 +224,14 @@ func TestUpdateMilestoneClosedNum(t *testing.T) {
 	issue.ClosedUnix = timeutil.TimeStampNow()
 	_, err := x.ID(issue.ID).Cols("is_closed", "closed_unix").Update(issue)
 	assert.NoError(t, err)
-	assert.NoError(t, updateMilestoneClosedNum(x, issue.MilestoneID))
+	assert.NoError(t, updateMilestoneCounters(x, issue.MilestoneID))
 	CheckConsistencyFor(t, &Milestone{})
 
 	issue.IsClosed = false
 	issue.ClosedUnix = 0
 	_, err = x.ID(issue.ID).Cols("is_closed", "closed_unix").Update(issue)
 	assert.NoError(t, err)
-	assert.NoError(t, updateMilestoneClosedNum(x, issue.MilestoneID))
+	assert.NoError(t, updateMilestoneCounters(x, issue.MilestoneID))
 	CheckConsistencyFor(t, &Milestone{})
 }
 
@@ -36,6 +36,14 @@ func TestIssue_ReplaceLabels(t *testing.T) {
 	testSuccess(1, []int64{})
 }
 
+func Test_GetIssueIDsByRepoID(t *testing.T) {
+	assert.NoError(t, PrepareTestDatabase())
+
+	ids, err := GetIssueIDsByRepoID(1)
+	assert.NoError(t, err)
+	assert.Len(t, ids, 5)
+}
+
 func TestIssueAPIURL(t *testing.T) {
 	assert.NoError(t, PrepareTestDatabase())
 	issue := AssertExistsAndLoadBean(t, &Issue{ID: 1}).(*Issue)
@@ -41,7 +41,7 @@ func (opts *ListOptions) setEnginePagination(e Engine) Engine {
 func (opts *ListOptions) GetStartEnd() (start, end int) {
 	opts.setDefaultValues()
 	start = (opts.Page - 1) * opts.PageSize
-	end = start + opts.Page
+	end = start + opts.PageSize
 	return
 }
 
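The one-word fix above matters more than it looks: with `end = start + opts.Page`, page 2 with a page size of 10 would produce the range [10, 12) instead of [10, 20), silently truncating every page after the first. A tiny standalone illustration of the corrected arithmetic, written in plain Go rather than against the Gitea type:

```go
package main

import "fmt"

// getStartEnd mirrors the corrected logic: start and end are the bounds of
// the requested page, so page 2 with page size 10 covers items 10..19.
func getStartEnd(page, pageSize int) (start, end int) {
	start = (page - 1) * pageSize
	end = start + pageSize // the old code used "start + page" here
	return
}

func main() {
	fmt.Println(getStartEnd(2, 10)) // 10 20
}
```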
@@ -21,6 +21,7 @@ import (
 	"code.gitea.io/gitea/modules/setting"
 	"code.gitea.io/gitea/modules/timeutil"
 	"code.gitea.io/gitea/modules/util"
+	gouuid "github.com/google/uuid"
 	jsoniter "github.com/json-iterator/go"
 
 	"xorm.io/xorm"
@@ -68,6 +69,17 @@ var (
 	_ convert.Conversion = &SSPIConfig{}
 )
 
+// jsonUnmarshalIgnoreErroneousBOM - due to a bug in xorm (see https://gitea.com/xorm/xorm/pulls/1957) - it's
+// possible that a Blob may gain an unwanted prefix of 0xff 0xfe.
+func jsonUnmarshalIgnoreErroneousBOM(bs []byte, v interface{}) error {
+	json := jsoniter.ConfigCompatibleWithStandardLibrary
+	err := json.Unmarshal(bs, &v)
+	if err != nil && len(bs) > 2 && bs[0] == 0xff && bs[1] == 0xfe {
+		err = json.Unmarshal(bs[2:], &v)
+	}
+	return err
+}
+
 // LDAPConfig holds configuration for LDAP login source.
 type LDAPConfig struct {
 	*ldap.Source
@@ -75,8 +87,7 @@ type LDAPConfig struct {
 
 // FromDB fills up a LDAPConfig from serialized format.
 func (cfg *LDAPConfig) FromDB(bs []byte) error {
-	json := jsoniter.ConfigCompatibleWithStandardLibrary
-	return json.Unmarshal(bs, &cfg)
+	return jsonUnmarshalIgnoreErroneousBOM(bs, &cfg)
 }
 
 // ToDB exports a LDAPConfig to a serialized format.
@@ -103,8 +114,7 @@ type SMTPConfig struct {
 
 // FromDB fills up an SMTPConfig from serialized format.
 func (cfg *SMTPConfig) FromDB(bs []byte) error {
-	json := jsoniter.ConfigCompatibleWithStandardLibrary
-	return json.Unmarshal(bs, cfg)
+	return jsonUnmarshalIgnoreErroneousBOM(bs, cfg)
 }
 
 // ToDB exports an SMTPConfig to a serialized format.
@@ -116,12 +126,12 @@ func (cfg *SMTPConfig) ToDB() ([]byte, error) {
 // PAMConfig holds configuration for the PAM login source.
 type PAMConfig struct {
 	ServiceName string // pam service (e.g. system-auth)
+	EmailDomain string
 }
 
 // FromDB fills up a PAMConfig from serialized format.
 func (cfg *PAMConfig) FromDB(bs []byte) error {
-	json := jsoniter.ConfigCompatibleWithStandardLibrary
-	return json.Unmarshal(bs, &cfg)
+	return jsonUnmarshalIgnoreErroneousBOM(bs, cfg)
 }
 
 // ToDB exports a PAMConfig to a serialized format.
@@ -142,8 +152,7 @@ type OAuth2Config struct {
 
 // FromDB fills up an OAuth2Config from serialized format.
 func (cfg *OAuth2Config) FromDB(bs []byte) error {
-	json := jsoniter.ConfigCompatibleWithStandardLibrary
-	return json.Unmarshal(bs, cfg)
+	return jsonUnmarshalIgnoreErroneousBOM(bs, cfg)
 }
 
 // ToDB exports an SMTPConfig to a serialized format.
@@ -163,8 +172,7 @@ type SSPIConfig struct {
 
 // FromDB fills up an SSPIConfig from serialized format.
 func (cfg *SSPIConfig) FromDB(bs []byte) error {
-	json := jsoniter.ConfigCompatibleWithStandardLibrary
-	return json.Unmarshal(bs, cfg)
+	return jsonUnmarshalIgnoreErroneousBOM(bs, cfg)
 }
 
 // ToDB exports an SSPIConfig to a serialized format.
@@ -696,15 +704,26 @@ func LoginViaPAM(user *User, login, password string, sourceID int64, cfg *PAMCon
 
 		// Allow PAM sources with `@` in their name, like from Active Directory
 		username := pamLogin
+		email := pamLogin
 		idx := strings.Index(pamLogin, "@")
 		if idx > -1 {
 			username = pamLogin[:idx]
 		}
+		if ValidateEmail(email) != nil {
+			if cfg.EmailDomain != "" {
+				email = fmt.Sprintf("%s@%s", username, cfg.EmailDomain)
+			} else {
+				email = fmt.Sprintf("%s@%s", username, setting.Service.NoReplyAddress)
+			}
+			if ValidateEmail(email) != nil {
+				email = gouuid.New().String() + "@localhost"
+			}
+		}
 
 		user = &User{
 			LowerName:   strings.ToLower(username),
 			Name:        username,
-			Email:       pamLogin,
+			Email:       email,
 			Passwd:      password,
 			LoginType:   LoginPAM,
 			LoginSource: sourceID,
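The helper added in the diff above exists because, per its comment, the referenced xorm bug can leave a spurious UTF-16 BOM (0xff 0xfe) at the front of a stored blob, which makes the JSON decoder fail; the fallback is simply to retry with the first two bytes stripped. A self-contained sketch of that behaviour follows; it uses encoding/json to stay dependency-free (Gitea's helper uses the jsoniter configuration shown in the diff) and the function name is illustrative:

```go
package sketch

import (
	"bytes"
	"encoding/json"
)

// unmarshalIgnoreBOM decodes JSON normally and, if that fails and the payload
// starts with a 0xff 0xfe BOM, retries with the two prefix bytes removed.
func unmarshalIgnoreBOM(bs []byte, v interface{}) error {
	err := json.Unmarshal(bs, v)
	if err != nil && bytes.HasPrefix(bs, []byte{0xff, 0xfe}) {
		err = json.Unmarshal(bs[2:], v)
	}
	return err
}
```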
@@ -88,7 +88,7 @@ func fixPublisherIDforTagReleases(x *xorm.Engine) error {
 repo = new(Repository)
 has, err := sess.ID(release.RepoID).Get(repo)
 if err != nil {
-log.Error("Error whilst loading repository[%d] for release[%d] with tag name %s", release.RepoID, release.ID, release.TagName)
+log.Error("Error whilst loading repository[%d] for release[%d] with tag name %s. Error: %v", release.RepoID, release.ID, release.TagName, err)
 return err
 } else if !has {
 log.Warn("Release[%d] is orphaned and refers to non-existing repository %d", release.ID, release.RepoID)
@@ -105,13 +105,13 @@ func fixPublisherIDforTagReleases(x *xorm.Engine) error {
 }

 if _, err := sess.ID(release.RepoID).Get(repo); err != nil {
-log.Error("Error whilst loading repository[%d] for release[%d] with tag name %s", release.RepoID, release.ID, release.TagName)
+log.Error("Error whilst loading repository[%d] for release[%d] with tag name %s. Error: %v", release.RepoID, release.ID, release.TagName, err)
 return err
 }
 }
 gitRepo, err = git.OpenRepository(repoPath(repo.OwnerName, repo.Name))
 if err != nil {
-log.Error("Error whilst opening git repo for %-v", repo)
+log.Error("Error whilst opening git repo for [%d]%s/%s. Error: %v", repo.ID, repo.OwnerName, repo.Name, err)
 return err
 }
 }
@@ -119,18 +119,36 @@ func fixPublisherIDforTagReleases(x *xorm.Engine) error {
 commit, err := gitRepo.GetTagCommit(release.TagName)
 if err != nil {
 if git.IsErrNotExist(err) {
-log.Warn("Unable to find commit %s for Tag: %s in %-v. Cannot update publisher ID.", err.(git.ErrNotExist).ID, release.TagName, repo)
+log.Warn("Unable to find commit %s for Tag: %s in [%d]%s/%s. Cannot update publisher ID.", err.(git.ErrNotExist).ID, release.TagName, repo.ID, repo.OwnerName, repo.Name)
 continue
 }
-log.Error("Error whilst getting commit for Tag: %s in %-v.", release.TagName, repo)
+log.Error("Error whilst getting commit for Tag: %s in [%d]%s/%s. Error: %v", release.TagName, repo.ID, repo.OwnerName, repo.Name, err)
 return fmt.Errorf("GetTagCommit: %v", err)
 }

+if commit.Author.Email == "" {
+log.Warn("Tag: %s in Repo[%d]%s/%s does not have a tagger.", release.TagName, repo.ID, repo.OwnerName, repo.Name)
+commit, err = gitRepo.GetCommit(commit.ID.String())
+if err != nil {
+if git.IsErrNotExist(err) {
+log.Warn("Unable to find commit %s for Tag: %s in [%d]%s/%s. Cannot update publisher ID.", err.(git.ErrNotExist).ID, release.TagName, repo.ID, repo.OwnerName, repo.Name)
+continue
+}
+log.Error("Error whilst getting commit for Tag: %s in [%d]%s/%s. Error: %v", release.TagName, repo.ID, repo.OwnerName, repo.Name, err)
+return fmt.Errorf("GetCommit: %v", err)
+}
+}
+
+if commit.Author.Email == "" {
+log.Warn("Tag: %s in Repo[%d]%s/%s does not have a Tagger and its underlying commit does not have an Author either!", release.TagName, repo.ID, repo.OwnerName, repo.Name)
+continue
+}
+
 if user == nil || !strings.EqualFold(user.Email, commit.Author.Email) {
 user = new(User)
 _, err = sess.Where("email=?", commit.Author.Email).Get(user)
 if err != nil {
-log.Error("Error whilst getting commit author by email: %s for Tag: %s in %-v.", commit.Author.Email, release.TagName, repo)
+log.Error("Error whilst getting commit author by email: %s for Tag: %s in [%d]%s/%s. Error: %v", commit.Author.Email, release.TagName, repo.ID, repo.OwnerName, repo.Name, err)
 return err
 }

@@ -143,7 +161,7 @@ func fixPublisherIDforTagReleases(x *xorm.Engine) error {

 release.PublisherID = user.ID
 if _, err := sess.ID(release.ID).Cols("publisher_id").Update(release); err != nil {
-log.Error("Error whilst updating publisher[%d] for release[%d] with tag name %s", release.PublisherID, release.ID, release.TagName)
+log.Error("Error whilst updating publisher[%d] for release[%d] with tag name %s. Error: %v", release.PublisherID, release.ID, release.TagName, err)
 return err
 }
 }
@@ -1349,6 +1349,26 @@ func UpdateRepository(repo *Repository, visibilityChanged bool) (err error) {
 return sess.Commit()
 }

+// UpdateRepositoryOwnerNames updates repository owner_names (this should only be used when the ownerName has changed case)
+func UpdateRepositoryOwnerNames(ownerID int64, ownerName string) error {
+if ownerID == 0 {
+return nil
+}
+sess := x.NewSession()
+defer sess.Close()
+if err := sess.Begin(); err != nil {
+return err
+}
+
+if _, err := sess.Where("owner_id = ?", ownerID).Cols("owner_name").Update(&Repository{
+OwnerName: ownerName,
+}); err != nil {
+return err
+}
+
+return sess.Commit()
+}
+
 // UpdateRepositoryUpdatedTime updates a repository's updated time
 func UpdateRepositoryUpdatedTime(repoID int64, updateTime time.Time) error {
 _, err := x.Exec("UPDATE repository SET updated_unix = ? WHERE id = ?", updateTime.Unix(), repoID)
@@ -28,8 +28,7 @@ type UnitConfig struct{}

 // FromDB fills up a UnitConfig from serialized format.
 func (cfg *UnitConfig) FromDB(bs []byte) error {
-json := jsoniter.ConfigCompatibleWithStandardLibrary
-return json.Unmarshal(bs, &cfg)
+return jsonUnmarshalIgnoreErroneousBOM(bs, &cfg)
 }

 // ToDB exports a UnitConfig to a serialized format.
@@ -45,8 +44,7 @@ type ExternalWikiConfig struct {

 // FromDB fills up a ExternalWikiConfig from serialized format.
 func (cfg *ExternalWikiConfig) FromDB(bs []byte) error {
-json := jsoniter.ConfigCompatibleWithStandardLibrary
-return json.Unmarshal(bs, &cfg)
+return jsonUnmarshalIgnoreErroneousBOM(bs, &cfg)
 }

 // ToDB exports a ExternalWikiConfig to a serialized format.
@@ -64,8 +62,7 @@ type ExternalTrackerConfig struct {

 // FromDB fills up a ExternalTrackerConfig from serialized format.
 func (cfg *ExternalTrackerConfig) FromDB(bs []byte) error {
-json := jsoniter.ConfigCompatibleWithStandardLibrary
-return json.Unmarshal(bs, &cfg)
+return jsonUnmarshalIgnoreErroneousBOM(bs, &cfg)
 }

 // ToDB exports a ExternalTrackerConfig to a serialized format.
@@ -83,8 +80,7 @@ type IssuesConfig struct {

 // FromDB fills up a IssuesConfig from serialized format.
 func (cfg *IssuesConfig) FromDB(bs []byte) error {
-json := jsoniter.ConfigCompatibleWithStandardLibrary
-return json.Unmarshal(bs, &cfg)
+return jsonUnmarshalIgnoreErroneousBOM(bs, &cfg)
 }

 // ToDB exports a IssuesConfig to a serialized format.
@@ -106,8 +102,7 @@ type PullRequestsConfig struct {

 // FromDB fills up a PullRequestsConfig from serialized format.
 func (cfg *PullRequestsConfig) FromDB(bs []byte) error {
-json := jsoniter.ConfigCompatibleWithStandardLibrary
-return json.Unmarshal(bs, &cfg)
+return jsonUnmarshalIgnoreErroneousBOM(bs, &cfg)
 }

 // ToDB exports a PullRequestsConfig to a serialized format.
@@ -8,8 +8,11 @@ import (
 "fmt"

 migration "code.gitea.io/gitea/modules/migrations/base"
+"code.gitea.io/gitea/modules/secret"
+"code.gitea.io/gitea/modules/setting"
 "code.gitea.io/gitea/modules/structs"
 "code.gitea.io/gitea/modules/timeutil"
+"code.gitea.io/gitea/modules/util"
 jsoniter "github.com/json-iterator/go"

 "xorm.io/builder"
@@ -110,6 +113,24 @@ func (task *Task) MigrateConfig() (*migration.MigrateOptions, error) {
 if err != nil {
 return nil, err
 }
+
+// decrypt credentials
+if opts.CloneAddrEncrypted != "" {
+if opts.CloneAddr, err = secret.DecryptSecret(setting.SecretKey, opts.CloneAddrEncrypted); err != nil {
+return nil, err
+}
+}
+if opts.AuthPasswordEncrypted != "" {
+if opts.AuthPassword, err = secret.DecryptSecret(setting.SecretKey, opts.AuthPasswordEncrypted); err != nil {
+return nil, err
+}
+}
+if opts.AuthTokenEncrypted != "" {
+if opts.AuthToken, err = secret.DecryptSecret(setting.SecretKey, opts.AuthTokenEncrypted); err != nil {
+return nil, err
+}
+}
+
 return &opts, nil
 }
 return nil, fmt.Errorf("Task type is %s, not Migrate Repo", task.Type.Name())
@@ -205,12 +226,31 @@ func createTask(e Engine, task *Task) error {
 func FinishMigrateTask(task *Task) error {
 task.Status = structs.TaskStatusFinished
 task.EndTime = timeutil.TimeStampNow()
+
+// delete credentials when we're done, they're a liability.
+conf, err := task.MigrateConfig()
+if err != nil {
+return err
+}
+conf.AuthPassword = ""
+conf.AuthToken = ""
+conf.CloneAddr = util.SanitizeURLCredentials(conf.CloneAddr, true)
+conf.AuthPasswordEncrypted = ""
+conf.AuthTokenEncrypted = ""
+conf.CloneAddrEncrypted = ""
+json := jsoniter.ConfigCompatibleWithStandardLibrary
+confBytes, err := json.Marshal(conf)
+if err != nil {
+return err
+}
+task.PayloadContent = string(confBytes)
+
 sess := x.NewSession()
 defer sess.Close()
 if err := sess.Begin(); err != nil {
 return err
 }
-if _, err := sess.ID(task.ID).Cols("status", "end_time").Update(task); err != nil {
+if _, err := sess.ID(task.ID).Cols("status", "end_time", "payload_content").Update(task); err != nil {
 return err
 }

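The FinishMigrateTask change above blanks stored passwords and tokens and sanitizes the clone address once a migration finishes. util.SanitizeURLCredentials is Gitea's own helper; a minimal sketch of the same idea using only the standard library (names here are illustrative):

package main

import (
	"fmt"
	"net/url"
)

// scrubCredentials removes any user:password part from a URL so it can be
// stored or logged safely; unparseable input is returned unchanged.
func scrubCredentials(raw string) string {
	u, err := url.Parse(raw)
	if err != nil {
		return raw
	}
	u.User = nil
	return u.String()
}

func main() {
	fmt.Println(scrubCredentials("https://user:secret@example.com/org/repo.git"))
	// https://example.com/org/repo.git
}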
@@ -57,9 +57,15 @@ func GetAccessTokenBySHA(token string) (*AccessToken, error) {
 if token == "" {
 return nil, ErrAccessTokenEmpty{}
 }
-if len(token) < 8 {
+// A token is defined as being SHA1 sum these are 40 hexadecimal bytes long
+if len(token) != 40 {
 return nil, ErrAccessTokenNotExist{token}
 }
+for _, x := range []byte(token) {
+if x < '0' || (x > '9' && x < 'a') || x > 'f' {
+return nil, ErrAccessTokenNotExist{token}
+}
+}
 var tokens []AccessToken
 lastEight := token[len(token)-8:]
 err := x.Table(&AccessToken{}).Where("token_last_eight = ?", lastEight).Find(&tokens)
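The lookup above now rejects anything that is not exactly 40 lowercase hex characters before touching the database. An equivalent check can be expressed with a regular expression; a small sketch with an illustrative helper name:

package main

import (
	"fmt"
	"regexp"
)

// sha1Hex matches a full 40-character lowercase hexadecimal SHA-1 string.
var sha1Hex = regexp.MustCompile(`^[0-9a-f]{40}$`)

func looksLikeAccessToken(token string) bool {
	return sha1Hex.MatchString(token)
}

func main() {
	fmt.Println(looksLikeAccessToken("0123456789abcdef0123456789abcdef01234567")) // true
	fmt.Println(looksLikeAccessToken("not-a-token"))                              // false
}

The byte loop used in the diff avoids the regexp machinery on a hot path; the regexp form is shown only because it is easier to read.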
@@ -21,6 +21,7 @@ import (
 "strings"
 "time"
 "unicode"
+"unicode/utf8"

 "code.gitea.io/gitea/modules/git"
 "code.gitea.io/gitea/modules/log"
@@ -213,19 +214,19 @@ func EllipsisString(str string, length int) string {
 if length <= 3 {
 return "..."
 }
-if len(str) <= length {
+if utf8.RuneCountInString(str) <= length {
 return str
 }
-return str[:length-3] + "..."
+return string([]rune(str)[:length-3]) + "..."
 }

 // TruncateString returns a truncated string with given limit,
 // it returns input string if length is not reached limit.
 func TruncateString(str string, limit int) string {
-if len(str) < limit {
+if utf8.RuneCountInString(str) < limit {
 return str
 }
-return str[:limit]
+return string([]rune(str)[:limit])
 }

 // StringsToInt64s converts a slice of string to a slice of int64.
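The switch from len(str) and byte slicing to utf8.RuneCountInString and []rune slicing only matters for multi-byte UTF-8 input, which is exactly what the new test cases below exercise. A tiny standalone illustration of the difference:

package main

import (
	"fmt"
	"unicode/utf8"
)

func main() {
	s := "测试文本" // 4 runes, 12 bytes
	fmt.Println(len(s))                    // 12 (bytes)
	fmt.Println(utf8.RuneCountInString(s)) // 4  (runes)
	fmt.Println(string([]rune(s)[:2]))     // 测试
	fmt.Println(s[:2])                     // cuts the first rune in half: invalid UTF-8
}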
@@ -170,6 +170,10 @@ func TestEllipsisString(t *testing.T) {
 assert.Equal(t, "fo...", EllipsisString("foobar", 5))
 assert.Equal(t, "foobar", EllipsisString("foobar", 6))
 assert.Equal(t, "foobar", EllipsisString("foobar", 10))
+assert.Equal(t, "测...", EllipsisString("测试文本一二三四", 4))
+assert.Equal(t, "测试...", EllipsisString("测试文本一二三四", 5))
+assert.Equal(t, "测试文...", EllipsisString("测试文本一二三四", 6))
+assert.Equal(t, "测试文本一二三四", EllipsisString("测试文本一二三四", 10))
 }

 func TestTruncateString(t *testing.T) {
@@ -181,6 +185,10 @@ func TestTruncateString(t *testing.T) {
 assert.Equal(t, "fooba", TruncateString("foobar", 5))
 assert.Equal(t, "foobar", TruncateString("foobar", 6))
 assert.Equal(t, "foobar", TruncateString("foobar", 7))
+assert.Equal(t, "测试文本", TruncateString("测试文本一二三四", 4))
+assert.Equal(t, "测试文本一", TruncateString("测试文本一二三四", 5))
+assert.Equal(t, "测试文本一二", TruncateString("测试文本一二三四", 6))
+assert.Equal(t, "测试文本一二三", TruncateString("测试文本一二三四", 7))
 }

 func TestStringsToInt64s(t *testing.T) {
@@ -49,7 +49,7 @@ func (r *Response) Write(bs []byte) (int, error) {
 return size, err
 }
 if r.status == 0 {
-r.WriteHeader(200)
+r.status = http.StatusOK
 }
 return size, nil
 }
@@ -155,8 +155,8 @@ func ToCommit(repo *models.Repository, commit *git.Commit, userCache map[string]
 URL: repo.APIURL() + "/git/commits/" + commit.ID.String(),
 Author: &api.CommitUser{
 Identity: api.Identity{
-Name: commit.Committer.Name,
-Email: commit.Committer.Email,
+Name: commit.Author.Name,
+Email: commit.Author.Email,
 },
 Date: commit.Author.When.Format(time.RFC3339),
 },
@@ -89,7 +89,7 @@ func innerToRepo(repo *models.Repository, mode models.AccessMode, isParent bool)
 return nil
 }

-numReleases, _ := models.GetReleaseCountByRepoID(repo.ID, models.FindReleasesOptions{IncludeDrafts: false, IncludeTags: true})
+numReleases, _ := models.GetReleaseCountByRepoID(repo.ID, models.FindReleasesOptions{IncludeDrafts: false, IncludeTags: false})

 mirrorInterval := ""
 if repo.IsMirror {
@@ -23,7 +23,7 @@ func checkDBVersion(logger log.Logger, autofix bool) error {

 err = models.NewEngine(context.Background(), migrations.Migrate)
 if err != nil {
-logger.Critical("Error: %v during migration")
+logger.Critical("Error: %v during migration", err)
 }
 return err
 }
@@ -6,6 +6,7 @@
 package emoji

 import (
+"io"
 "sort"
 "strings"
 "sync"
@@ -145,6 +146,8 @@ func (n *rememberSecondWriteWriter) Write(p []byte) (int, error) {
 if n.writecount == 2 {
 n.idx = n.pos
 n.end = n.pos + len(p)
+n.pos += len(p)
+return len(p), io.EOF
 }
 n.pos += len(p)
 return len(p), nil
@@ -155,6 +158,8 @@ func (n *rememberSecondWriteWriter) WriteString(s string) (int, error) {
 if n.writecount == 2 {
 n.idx = n.pos
 n.end = n.pos + len(s)
+n.pos += len(s)
+return len(s), io.EOF
 }
 n.pos += len(s)
 return len(s), nil
@@ -51,6 +51,7 @@ type AuthenticationForm struct {
 TLS bool
 SkipVerify bool
 PAMServiceName string
+PAMEmailDomain string
 Oauth2Provider string
 Oauth2Key string
 Oauth2Secret string
@@ -149,17 +149,18 @@ headerLoop:
 // constant hextable to help quickly convert between 20byte and 40byte hashes
 const hextable = "0123456789abcdef"

-// To40ByteSHA converts a 20-byte SHA in a 40-byte slice into a 40-byte sha in place
-// without allocations. This is at least 100x quicker that hex.EncodeToString
-// NB This requires that sha is a 40-byte slice
-func To40ByteSHA(sha []byte) []byte {
+// To40ByteSHA converts a 20-byte SHA into a 40-byte sha. Input and output can be the
+// same 40 byte slice to support in place conversion without allocations.
+// This is at least 100x quicker that hex.EncodeToString
+// NB This requires that out is a 40-byte slice
+func To40ByteSHA(sha, out []byte) []byte {
 for i := 19; i >= 0; i-- {
 v := sha[i]
 vhi, vlo := v>>4, v&0x0f
 shi, slo := hextable[vhi], hextable[vlo]
-sha[i*2], sha[i*2+1] = shi, slo
+out[i*2], out[i*2+1] = shi, slo
 }
-return sha
+return out
 }

 // ParseTreeLineSkipMode reads an entry from a tree in a cat-file --batch stream
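The new two-argument signature lets callers either convert in place (passing the same 40-byte buffer as input and output, as the rev-list code further below does) or keep the 20-byte SHA intact by supplying a separate buffer. The descending loop is what makes the in-place case safe: each write lands at or above the index just read. A minimal sketch of both call styles against a helper with the same contract (the helper is a stand-in, not the Gitea function itself):

package main

import "fmt"

const hextable = "0123456789abcdef"

// to40ByteSHA writes the 40-char hex form of the first 20 bytes of sha into
// out (which must be at least 40 bytes long) and returns out.
func to40ByteSHA(sha, out []byte) []byte {
	for i := 19; i >= 0; i-- {
		v := sha[i]
		out[i*2], out[i*2+1] = hextable[v>>4], hextable[v&0x0f]
	}
	return out
}

func main() {
	sha20 := make([]byte, 20) // all-zero SHA for the example
	out := make([]byte, 40)
	fmt.Println(string(to40ByteSHA(sha20, out))) // separate output buffer

	buf := make([]byte, 40) // in place: the 20-byte SHA sits at the front
	copy(buf, sha20)
	fmt.Println(string(to40ByteSHA(buf, buf)))
}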
@@ -124,12 +124,18 @@ func (c *Command) RunInDirTimeoutEnvFullPipelineFunc(env []string, timeout time.

 cmd := exec.CommandContext(ctx, c.name, c.args...)
 if env == nil {
-cmd.Env = append(os.Environ(), fmt.Sprintf("LC_ALL=%s", DefaultLocale))
+cmd.Env = os.Environ()
 } else {
 cmd.Env = env
-cmd.Env = append(cmd.Env, fmt.Sprintf("LC_ALL=%s", DefaultLocale))
 }

+cmd.Env = append(
+cmd.Env,
+fmt.Sprintf("LC_ALL=%s", DefaultLocale),
+// avoid prompting for credentials interactively, supported since git v2.3
+"GIT_TERMINAL_PROMPT=0",
+)
+
 // TODO: verify if this is still needed in golang 1.15
 if goVersionLessThan115 {
 cmd.Env = append(cmd.Env, "GODEBUG=asyncpreemptoff=1")
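Exporting GIT_TERMINAL_PROMPT=0 makes git fail fast instead of hanging on an interactive credential prompt, which matters on a server where no one is at the terminal. The same effect in a plain exec call, as a short sketch (the remote URL is purely illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("git", "ls-remote", "https://example.com/private/repo.git")
	cmd.Env = append(os.Environ(),
		"LC_ALL=C",
		// supported since git 2.3: never prompt on the terminal for credentials
		"GIT_TERMINAL_PROMPT=0",
	)
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out), err) // fails quickly instead of waiting for input
}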
@@ -303,7 +303,7 @@ revListLoop:
 commits[0] = string(commitID)
 }
 }
-treeID = To40ByteSHA(treeID)
+treeID = To40ByteSHA(treeID, treeID)
 _, err = batchStdinWriter.Write(treeID)
 if err != nil {
 return nil, err
@@ -17,7 +17,9 @@ import (
 // If used as part of a cat-file --batch stream you need to limit the reader to the correct size
 func CommitFromReader(gitRepo *Repository, sha SHA1, reader io.Reader) (*Commit, error) {
 commit := &Commit{
 ID: sha,
+Author: &Signature{},
+Committer: &Signature{},
 }

 payloadSB := new(strings.Builder)
@@ -43,8 +43,6 @@ func FindLFSFile(repo *git.Repository, hash git.SHA1) ([]*LFSResult, error) {

 basePath := repo.Path

-hashStr := hash.String()
-
 // Use rev-list to provide us with all commits in order
 revListReader, revListWriter := io.Pipe()
 defer func() {
@@ -74,7 +72,7 @@ func FindLFSFile(repo *git.Repository, hash git.SHA1) ([]*LFSResult, error) {

 fnameBuf := make([]byte, 4096)
 modeBuf := make([]byte, 40)
-workingShaBuf := make([]byte, 40)
+workingShaBuf := make([]byte, 20)

 for scan.Scan() {
 // Get the next commit ID
@@ -132,8 +130,7 @@ func FindLFSFile(repo *git.Repository, hash git.SHA1) ([]*LFSResult, error) {
 return nil, err
 }
 n += int64(count)
-sha := git.To40ByteSHA(sha20byte)
-if bytes.Equal(sha, []byte(hashStr)) {
+if bytes.Equal(sha20byte, hash[:]) {
 result := LFSResult{
 Name: curPath + string(fname),
 SHA: curCommit.ID.String(),
@@ -143,7 +140,9 @@ func FindLFSFile(repo *git.Repository, hash git.SHA1) ([]*LFSResult, error) {
 }
 resultsMap[curCommit.ID.String()+":"+curPath+string(fname)] = &result
 } else if string(mode) == git.EntryModeTree.String() {
-trees = append(trees, sha)
+sha40Byte := make([]byte, 40)
+git.To40ByteSHA(sha20byte, sha40Byte)
+trees = append(trees, sha40Byte)
 paths = append(paths, curPath+string(fname)+"/")
 }
 }
@@ -7,23 +7,18 @@ package git
 import (
 "path/filepath"
 "testing"
-"time"

 "github.com/stretchr/testify/assert"
 )

 func TestGetLatestCommitTime(t *testing.T) {
-lct, err := GetLatestCommitTime(".")
+bareRepo1Path := filepath.Join(testReposDir, "repo1_bare")
+lct, err := GetLatestCommitTime(bareRepo1Path)
 assert.NoError(t, err)
-// Time is in the past
-now := time.Now()
-assert.True(t, lct.Unix() < now.Unix(), "%d not smaller than %d", lct, now)
-// Time is after Mon Oct 23 03:52:09 2017 +0300
+// Time is Sun Jul 21 22:43:13 2019 +0200
 // which is the time of commit
-// d47b98c44c9a6472e44ab80efe65235e11c6da2a
-refTime, err := time.Parse("Mon Jan 02 15:04:05 2006 -0700", "Mon Oct 23 03:52:09 2017 +0300")
-assert.NoError(t, err)
-assert.True(t, lct.Unix() > refTime.Unix(), "%d not greater than %d", lct, refTime)
+// feaf4ba6bc635fec442f46ddd4512416ec43c2c2 (refs/heads/master)
+assert.EqualValues(t, 1563741793, lct.Unix())
 }

 func TestRepoIsEmpty(t *testing.T) {
@@ -35,6 +35,7 @@ func (tag *Tag) Commit() (*Commit, error) {
 // \n\n separate headers from message
 func parseTagData(data []byte) (*Tag, error) {
 tag := new(Tag)
+tag.Tagger = &Signature{}
 // we now have the contents of the commit object. Let's investigate...
 nextline := 0
 l:
@@ -17,6 +17,7 @@ import (
 "time"

 "code.gitea.io/gitea/modules/log"
+"code.gitea.io/gitea/modules/setting"
 )

 var (
@@ -26,6 +27,10 @@ var (
 DefaultWriteTimeOut time.Duration
 // DefaultMaxHeaderBytes default max header bytes
 DefaultMaxHeaderBytes int
+// PerWriteWriteTimeout timeout for writes
+PerWriteWriteTimeout = 30 * time.Second
+// PerWriteWriteTimeoutKbTime is a timeout taking account of how much there is to be written
+PerWriteWriteTimeoutKbTime = 10 * time.Second
 )

 func init() {
@@ -37,14 +42,16 @@ type ServeFunction = func(net.Listener) error

 // Server represents our graceful server
 type Server struct {
 network string
 address string
 listener net.Listener
 wg sync.WaitGroup
 state state
 lock *sync.RWMutex
 BeforeBegin func(network, address string)
 OnShutdown func()
+PerWriteTimeout time.Duration
+PerWritePerKbTimeout time.Duration
 }

 // NewServer creates a server on network at provided address
@@ -55,11 +62,13 @@ func NewServer(network, address, name string) *Server {
 log.Info("Starting new %s server: %s:%s on PID: %d", name, network, address, os.Getpid())
 }
 srv := &Server{
 wg: sync.WaitGroup{},
 state: stateInit,
 lock: &sync.RWMutex{},
 network: network,
 address: address,
+PerWriteTimeout: setting.PerWriteTimeout,
+PerWritePerKbTimeout: setting.PerWritePerKbTimeout,
 }

 srv.BeforeBegin = func(network, addr string) {
@@ -221,9 +230,11 @@ func (wl *wrappedListener) Accept() (net.Conn, error) {
 closed := int32(0)

 c = wrappedConn{
 Conn: c,
 server: wl.server,
 closed: &closed,
+perWriteTimeout: wl.server.PerWriteTimeout,
+perWritePerKbTimeout: wl.server.PerWritePerKbTimeout,
 }

 wl.server.wg.Add(1)
@@ -246,8 +257,25 @@ func (wl *wrappedListener) File() (*os.File, error) {

 type wrappedConn struct {
 net.Conn
 server *Server
 closed *int32
+deadline time.Time
+perWriteTimeout time.Duration
+perWritePerKbTimeout time.Duration
+}
+
+func (w wrappedConn) Write(p []byte) (n int, err error) {
+if w.perWriteTimeout > 0 {
+minTimeout := time.Duration(len(p)/1024) * w.perWritePerKbTimeout
+minDeadline := time.Now().Add(minTimeout).Add(w.perWriteTimeout)
+
+w.deadline = w.deadline.Add(minTimeout)
+if minDeadline.After(w.deadline) {
+w.deadline = minDeadline
+}
+_ = w.Conn.SetWriteDeadline(w.deadline)
+}
+return w.Conn.Write(p)
 }

 func (w wrappedConn) Close() error {
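The wrappedConn change gives every Write its own deadline: a fixed base budget plus extra time per KiB being written, so large responses get proportionally more slack while a stalled client is still cut off. A stripped-down version of the same idea for any net.Conn; the type and field names here mirror the diff but the wrapper itself is illustrative, not Gitea's exported API:

package main

import (
	"net"
	"time"
)

type timedConn struct {
	net.Conn
	deadline             time.Time
	perWriteTimeout      time.Duration // base budget per Write
	perWritePerKbTimeout time.Duration // extra budget per KiB in this Write
}

func (c *timedConn) Write(p []byte) (int, error) {
	if c.perWriteTimeout > 0 {
		minTimeout := time.Duration(len(p)/1024) * c.perWritePerKbTimeout
		minDeadline := time.Now().Add(minTimeout).Add(c.perWriteTimeout)
		c.deadline = c.deadline.Add(minTimeout)
		if minDeadline.After(c.deadline) {
			c.deadline = minDeadline
		}
		_ = c.Conn.SetWriteDeadline(c.deadline)
	}
	return c.Conn.Write(p)
}

func main() {
	a, b := net.Pipe()
	defer b.Close()
	go func() { buf := make([]byte, 8); _, _ = b.Read(buf) }()
	c := &timedConn{Conn: a, perWriteTimeout: 5 * time.Second, perWritePerKbTimeout: time.Second}
	_, _ = c.Write([]byte("hello"))
	_ = c.Close()
}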
@@ -87,6 +87,7 @@ func isLinkStr(link string) bool {
 return validLinksPattern.MatchString(link)
 }

+// FIXME: This function is not concurrent safe
 func getIssueFullPattern() *regexp.Regexp {
 if issueFullPattern == nil {
 issueFullPattern = regexp.MustCompile(regexp.QuoteMeta(setting.AppURL) +
@@ -333,40 +334,37 @@ func (ctx *postProcessCtx) postProcess(rawHTML []byte) ([]byte, error) {
 _, _ = res.WriteString("</body></html>")

 // parse the HTML
-nodes, err := html.ParseFragment(res, nil)
+node, err := html.Parse(res)
 if err != nil {
 return nil, &postProcessError{"invalid HTML", err}
 }

-for _, node := range nodes {
-ctx.visitNode(node, true)
+if node.Type == html.DocumentNode {
+node = node.FirstChild
 }

-newNodes := make([]*html.Node, 0, len(nodes))
+ctx.visitNode(node, true)

-for _, node := range nodes {
-if node.Data == "html" {
-node = node.FirstChild
-for node != nil && node.Data != "body" {
-node = node.NextSibling
-}
-}
-if node == nil {
-continue
-}
+nodes := make([]*html.Node, 0, 5)
+if node.Data == "html" {
+node = node.FirstChild
+for node != nil && node.Data != "body" {
+node = node.NextSibling
 }
+}
+if node != nil {
 if node.Data == "body" {
 child := node.FirstChild
 for child != nil {
-newNodes = append(newNodes, child)
+nodes = append(nodes, child)
 child = child.NextSibling
 }
 } else {
-newNodes = append(newNodes, node)
+nodes = append(nodes, node)
 }
 }

-nodes = newNodes
-
 // Create buffer in which the data will be placed again. We know that the
 // length will be at least that of res; to spare a few alloc+copy, we
 // reuse res, resetting its length to 0.
@@ -403,24 +401,20 @@ func (ctx *postProcessCtx) visitNode(node *html.Node, visitText bool) {
 }
 case html.ElementNode:
 if node.Data == "img" {
-attrs := node.Attr
-for idx, attr := range attrs {
+for i, attr := range node.Attr {
 if attr.Key != "src" {
 continue
 }
-link := []byte(attr.Val)
-if len(link) > 0 && !IsLink(link) {
+if len(attr.Val) > 0 && !isLinkStr(attr.Val) && !strings.HasPrefix(attr.Val, "data:image/") {
 prefix := ctx.urlPrefix
 if ctx.isWikiMarkdown {
 prefix = util.URLJoin(prefix, "wiki", "raw")
 }
 prefix = strings.Replace(prefix, "/src/", "/media/", 1)

-lnk := string(link)
-lnk = util.URLJoin(prefix, lnk)
-link = []byte(lnk)
+attr.Val = util.URLJoin(prefix, attr.Val)
 }
-node.Attr[idx].Val = string(link)
+node.Attr[i] = attr
 }
 } else if node.Data == "a" {
 visitText = false
@@ -610,26 +604,38 @@ func replaceContentList(node *html.Node, i, j int, newNodes []*html.Node) {
 }

 func mentionProcessor(ctx *postProcessCtx, node *html.Node) {
-// We replace only the first mention; other mentions will be addressed later
-found, loc := references.FindFirstMentionBytes([]byte(node.Data))
-if !found {
-return
-}
-mention := node.Data[loc.Start:loc.End]
-var teams string
-teams, ok := ctx.metas["teams"]
-// FIXME: util.URLJoin may not be necessary here:
-// - setting.AppURL is defined to have a terminal '/' so unless mention[1:]
-// is an AppSubURL link we can probably fallback to concatenation.
-// team mention should follow @orgName/teamName style
-if ok && strings.Contains(mention, "/") {
-mentionOrgAndTeam := strings.Split(mention, "/")
-if mentionOrgAndTeam[0][1:] == ctx.metas["org"] && strings.Contains(teams, ","+strings.ToLower(mentionOrgAndTeam[1])+",") {
-replaceContent(node, loc.Start, loc.End, createLink(util.URLJoin(setting.AppURL, "org", ctx.metas["org"], "teams", mentionOrgAndTeam[1]), mention, "mention"))
+start := 0
+next := node.NextSibling
+for node != nil && node != next && start < len(node.Data) {
+// We replace only the first mention; other mentions will be addressed later
+found, loc := references.FindFirstMentionBytes([]byte(node.Data[start:]))
+if !found {
+return
 }
-return
+loc.Start += start
+loc.End += start
+mention := node.Data[loc.Start:loc.End]
+var teams string
+teams, ok := ctx.metas["teams"]
+// FIXME: util.URLJoin may not be necessary here:
+// - setting.AppURL is defined to have a terminal '/' so unless mention[1:]
+// is an AppSubURL link we can probably fallback to concatenation.
+// team mention should follow @orgName/teamName style
+if ok && strings.Contains(mention, "/") {
+mentionOrgAndTeam := strings.Split(mention, "/")
+if mentionOrgAndTeam[0][1:] == ctx.metas["org"] && strings.Contains(teams, ","+strings.ToLower(mentionOrgAndTeam[1])+",") {
+replaceContent(node, loc.Start, loc.End, createLink(util.URLJoin(setting.AppURL, "org", ctx.metas["org"], "teams", mentionOrgAndTeam[1]), mention, "mention"))
+node = node.NextSibling.NextSibling
+start = 0
+continue
+}
+start = loc.End
+continue
+}
+replaceContent(node, loc.Start, loc.End, createLink(util.URLJoin(setting.AppURL, mention[1:]), mention, "mention"))
+node = node.NextSibling.NextSibling
+start = 0
 }
-replaceContent(node, loc.Start, loc.End, createLink(util.URLJoin(setting.AppURL, mention[1:]), mention, "mention"))
 }

 func shortLinkProcessor(ctx *postProcessCtx, node *html.Node) {
@@ -637,188 +643,195 @@ func shortLinkProcessor(ctx *postProcessCtx, node *html.Node) {
 }

 func shortLinkProcessorFull(ctx *postProcessCtx, node *html.Node, noLink bool) {
-m := shortLinkPattern.FindStringSubmatchIndex(node.Data)
-if m == nil {
-return
-}
+next := node.NextSibling
+for node != nil && node != next {
+m := shortLinkPattern.FindStringSubmatchIndex(node.Data)
+if m == nil {
+return
+}

 content := node.Data[m[2]:m[3]]
 tail := node.Data[m[4]:m[5]]
 props := make(map[string]string)

 // MediaWiki uses [[link|text]], while GitHub uses [[text|link]]
 // It makes page handling terrible, but we prefer GitHub syntax
 // And fall back to MediaWiki only when it is obvious from the look
 // Of text and link contents
 sl := strings.Split(content, "|")
 for _, v := range sl {
 if equalPos := strings.IndexByte(v, '='); equalPos == -1 {
 // There is no equal in this argument; this is a mandatory arg
 if props["name"] == "" {
 if isLinkStr(v) {
 // If we clearly see it is a link, we save it so

 // But first we need to ensure, that if both mandatory args provided
 // look like links, we stick to GitHub syntax
 if props["link"] != "" {
 props["name"] = props["link"]
+}
+
+props["link"] = strings.TrimSpace(v)
+} else {
+props["name"] = v
 }

-props["link"] = strings.TrimSpace(v)
 } else {
-props["name"] = v
+props["link"] = strings.TrimSpace(v)
 }
-} else {
-props["link"] = strings.TrimSpace(v)
-}
-} else {
-// There is an equal; optional argument.
-
+// There is an equal; optional argument.
 sep := strings.IndexByte(v, '=')
 key, val := v[:sep], html.UnescapeString(v[sep+1:])

 // When parsing HTML, x/net/html will change all quotes which are
 // not used for syntax into UTF-8 quotes. So checking val[0] won't
 // be enough, since that only checks a single byte.
 if len(val) > 1 {
 if (strings.HasPrefix(val, "“") && strings.HasSuffix(val, "”")) ||
 (strings.HasPrefix(val, "‘") && strings.HasSuffix(val, "’")) {
 const lenQuote = len("‘")
 val = val[lenQuote : len(val)-lenQuote]
 } else if (strings.HasPrefix(val, "\"") && strings.HasSuffix(val, "\"")) ||
 (strings.HasPrefix(val, "'") && strings.HasSuffix(val, "'")) {
 val = val[1 : len(val)-1]
 } else if strings.HasPrefix(val, "'") && strings.HasSuffix(val, "’") {
 const lenQuote = len("‘")
 val = val[1 : len(val)-lenQuote]
+}
 }
+props[key] = val
 }
-props[key] = val
 }
-}

 var name, link string
 if props["link"] != "" {
 link = props["link"]
 } else if props["name"] != "" {
 link = props["name"]
 }
 if props["title"] != "" {
 name = props["title"]
 } else if props["name"] != "" {
 name = props["name"]
-} else {
-name = link
-}
-
-name += tail
-image := false
-switch ext := filepath.Ext(link); ext {
-// fast path: empty string, ignore
-case "":
-break
-case ".jpg", ".jpeg", ".png", ".tif", ".tiff", ".webp", ".gif", ".bmp", ".ico", ".svg":
-image = true
-}
-
-childNode := &html.Node{}
-linkNode := &html.Node{
-FirstChild: childNode,
-LastChild: childNode,
-Type: html.ElementNode,
-Data: "a",
-DataAtom: atom.A,
-}
-childNode.Parent = linkNode
-absoluteLink := isLinkStr(link)
-if !absoluteLink {
-if image {
-link = strings.ReplaceAll(link, " ", "+")
 } else {
-link = strings.ReplaceAll(link, " ", "-")
-}
-if !strings.Contains(link, "/") {
-link = url.PathEscape(link)
-}
-}
-urlPrefix := ctx.urlPrefix
-if image {
-if !absoluteLink {
-if IsSameDomain(urlPrefix) {
-urlPrefix = strings.Replace(urlPrefix, "/src/", "/raw/", 1)
-}
-if ctx.isWikiMarkdown {
-link = util.URLJoin("wiki", "raw", link)
-}
-link = util.URLJoin(urlPrefix, link)
-}
-title := props["title"]
-if title == "" {
-title = props["alt"]
-}
-if title == "" {
-title = path.Base(name)
-}
-alt := props["alt"]
-if alt == "" {
-alt = name
+name = link
 }

-// make the childNode an image - if we can, we also place the alt
-childNode.Type = html.ElementNode
-childNode.Data = "img"
-childNode.DataAtom = atom.Img
-childNode.Attr = []html.Attribute{
-{Key: "src", Val: link},
-{Key: "title", Val: title},
-{Key: "alt", Val: alt},
+name += tail
+image := false
+switch ext := filepath.Ext(link); ext {
+// fast path: empty string, ignore
+case "":
+// leave image as false
+case ".jpg", ".jpeg", ".png", ".tif", ".tiff", ".webp", ".gif", ".bmp", ".ico", ".svg":
+image = true
 }
-if alt == "" {
-childNode.Attr = childNode.Attr[:2]
+
+childNode := &html.Node{}
+linkNode := &html.Node{
+FirstChild: childNode,
+LastChild: childNode,
+Type: html.ElementNode,
+Data: "a",
+DataAtom: atom.A,
 }
-} else {
+childNode.Parent = linkNode
+absoluteLink := isLinkStr(link)
 if !absoluteLink {
-if ctx.isWikiMarkdown {
-link = util.URLJoin("wiki", link)
+if image {
+link = strings.ReplaceAll(link, " ", "+")
+} else {
+link = strings.ReplaceAll(link, " ", "-")
+}
+if !strings.Contains(link, "/") {
+link = url.PathEscape(link)
 }
-link = util.URLJoin(urlPrefix, link)
 }
-childNode.Type = html.TextNode
-childNode.Data = name
+urlPrefix := ctx.urlPrefix
+if image {
+if !absoluteLink {
+if IsSameDomain(urlPrefix) {
+urlPrefix = strings.Replace(urlPrefix, "/src/", "/raw/", 1)
+}
+if ctx.isWikiMarkdown {
+link = util.URLJoin("wiki", "raw", link)
+}
+link = util.URLJoin(urlPrefix, link)
+}
+title := props["title"]
+if title == "" {
+title = props["alt"]
+}
+if title == "" {
+title = path.Base(name)
+}
+alt := props["alt"]
+if alt == "" {
+alt = name
+}
+
+// make the childNode an image - if we can, we also place the alt
+childNode.Type = html.ElementNode
+childNode.Data = "img"
+childNode.DataAtom = atom.Img
+childNode.Attr = []html.Attribute{
+{Key: "src", Val: link},
+{Key: "title", Val: title},
+{Key: "alt", Val: alt},
+}
+if alt == "" {
+childNode.Attr = childNode.Attr[:2]
+}
+} else {
+if !absoluteLink {
+if ctx.isWikiMarkdown {
+link = util.URLJoin("wiki", link)
+}
+link = util.URLJoin(urlPrefix, link)
+}
+childNode.Type = html.TextNode
+childNode.Data = name
+}
+if noLink {
+linkNode = childNode
+} else {
+linkNode.Attr = []html.Attribute{{Key: "href", Val: link}}
+}
+replaceContent(node, m[0], m[1], linkNode)
+node = node.NextSibling.NextSibling
 }
-if noLink {
-linkNode = childNode
-} else {
-linkNode.Attr = []html.Attribute{{Key: "href", Val: link}}
-}
-replaceContent(node, m[0], m[1], linkNode)
 }

 func fullIssuePatternProcessor(ctx *postProcessCtx, node *html.Node) {
 if ctx.metas == nil {
 return
 }
-m := getIssueFullPattern().FindStringSubmatchIndex(node.Data)
-if m == nil {
-return
-}
-link := node.Data[m[0]:m[1]]
-id := "#" + node.Data[m[2]:m[3]]
+next := node.NextSibling
+for node != nil && node != next {
+m := getIssueFullPattern().FindStringSubmatchIndex(node.Data)
+if m == nil {
+return
+}
+link := node.Data[m[0]:m[1]]
+id := "#" + node.Data[m[2]:m[3]]

 // extract repo and org name from matched link like
 // http://localhost:3000/gituser/myrepo/issues/1
 linkParts := strings.Split(path.Clean(link), "/")
 matchOrg := linkParts[len(linkParts)-4]
 matchRepo := linkParts[len(linkParts)-3]

 if matchOrg == ctx.metas["user"] && matchRepo == ctx.metas["repo"] {
 // TODO if m[4]:m[5] is not nil, then link is to a comment,
 // and we should indicate that in the text somehow
 replaceContent(node, m[0], m[1], createLink(link, id, "ref-issue"))
-} else {
-orgRepoID := matchOrg + "/" + matchRepo + id
-replaceContent(node, m[0], m[1], createLink(link, orgRepoID, "ref-issue"))
+} else {
+orgRepoID := matchOrg + "/" + matchRepo + id
+replaceContent(node, m[0], m[1], createLink(link, orgRepoID, "ref-issue"))
+}
+node = node.NextSibling.NextSibling
 }
 }

@@ -826,70 +839,74 @@ func issueIndexPatternProcessor(ctx *postProcessCtx, node *html.Node) {
|
|||||||
if ctx.metas == nil {
|
if ctx.metas == nil {
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
var (
|
var (
|
||||||
found bool
|
found bool
|
||||||
ref *references.RenderizableReference
|
ref *references.RenderizableReference
|
||||||
)
|
)
|
||||||
|
|
||||||
_, exttrack := ctx.metas["format"]
|
next := node.NextSibling
|
||||||
alphanum := ctx.metas["style"] == IssueNameStyleAlphanumeric
|
for node != nil && node != next {
|
||||||
|
_, exttrack := ctx.metas["format"]
|
||||||
|
alphanum := ctx.metas["style"] == IssueNameStyleAlphanumeric
|
||||||
|
|
||||||
// Repos with external issue trackers might still need to reference local PRs
|
// Repos with external issue trackers might still need to reference local PRs
|
||||||
// We need to concern with the first one that shows up in the text, whichever it is
|
// We need to concern with the first one that shows up in the text, whichever it is
|
||||||
found, ref = references.FindRenderizableReferenceNumeric(node.Data, exttrack && alphanum)
|
found, ref = references.FindRenderizableReferenceNumeric(node.Data, exttrack && alphanum)
|
||||||
if exttrack && alphanum {
|
if exttrack && alphanum {
|
 			if found2, ref2 := references.FindRenderizableReferenceAlphanumeric(node.Data); found2 {
 				if !found || ref2.RefLocation.Start < ref.RefLocation.Start {
 					found = true
 					ref = ref2
+				}
 			}
 		}
-	}
-	if !found {
-		return
-	}
+		if !found {
+			return
+		}

-	var link *html.Node
-	reftext := node.Data[ref.RefLocation.Start:ref.RefLocation.End]
-	if exttrack && !ref.IsPull {
-		ctx.metas["index"] = ref.Issue
-		link = createLink(com.Expand(ctx.metas["format"], ctx.metas), reftext, "ref-issue")
-	} else {
-		// Path determines the type of link that will be rendered. It's unknown at this point whether
-		// the linked item is actually a PR or an issue. Luckily it's of no real consequence because
-		// Gitea will redirect on click as appropriate.
-		path := "issues"
-		if ref.IsPull {
-			path = "pulls"
 		}
-		if ref.Owner == "" {
-			link = createLink(util.URLJoin(setting.AppURL, ctx.metas["user"], ctx.metas["repo"], path, ref.Issue), reftext, "ref-issue")
+		var link *html.Node
+		reftext := node.Data[ref.RefLocation.Start:ref.RefLocation.End]
+		if exttrack && !ref.IsPull {
+			ctx.metas["index"] = ref.Issue
+			link = createLink(com.Expand(ctx.metas["format"], ctx.metas), reftext, "ref-issue")
 		} else {
-			link = createLink(util.URLJoin(setting.AppURL, ref.Owner, ref.Name, path, ref.Issue), reftext, "ref-issue")
+			// Path determines the type of link that will be rendered. It's unknown at this point whether
+			// the linked item is actually a PR or an issue. Luckily it's of no real consequence because
+			// Gitea will redirect on click as appropriate.
+			path := "issues"
+			if ref.IsPull {
+				path = "pulls"
+			}
+			if ref.Owner == "" {
+				link = createLink(util.URLJoin(setting.AppURL, ctx.metas["user"], ctx.metas["repo"], path, ref.Issue), reftext, "ref-issue")
+			} else {
+				link = createLink(util.URLJoin(setting.AppURL, ref.Owner, ref.Name, path, ref.Issue), reftext, "ref-issue")
+			}
 		}
-	}

 		if ref.Action == references.XRefActionNone {
 			replaceContent(node, ref.RefLocation.Start, ref.RefLocation.End, link)
-			return
-		}
+			node = node.NextSibling.NextSibling
+			continue
+		}

 		// Decorate action keywords if actionable
 		var keyword *html.Node
 		if references.IsXrefActionable(ref, exttrack, alphanum) {
 			keyword = createKeyword(node.Data[ref.ActionLocation.Start:ref.ActionLocation.End])
 		} else {
 			keyword = &html.Node{
+				Type: html.TextNode,
+				Data: node.Data[ref.ActionLocation.Start:ref.ActionLocation.End],
+			}
+		}
+		spaces := &html.Node{
 			Type: html.TextNode,
-			Data: node.Data[ref.ActionLocation.Start:ref.ActionLocation.End],
+			Data: node.Data[ref.ActionLocation.End:ref.RefLocation.Start],
 		}
+		replaceContentList(node, ref.ActionLocation.Start, ref.RefLocation.End, []*html.Node{keyword, spaces, link})
+		node = node.NextSibling.NextSibling.NextSibling.NextSibling
 	}
-	spaces := &html.Node{
-		Type: html.TextNode,
-		Data: node.Data[ref.ActionLocation.End:ref.RefLocation.Start],
-	}
-	replaceContentList(node, ref.ActionLocation.Start, ref.RefLocation.End, []*html.Node{keyword, spaces, link})
 }

 // fullSha1PatternProcessor renders SHA containing URLs
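The hunk above, and the processor hunks that follow, change each markup post-processor from handling a single match per text node to looping over matches and over the text nodes that replaceContent splits off. The stand-alone sketch below is not Gitea's actual code; the node plumbing is simplified and the `#123` pattern is made up, but it shows the core idea of splitting the text node at a match and then advancing past the inserted element so the remaining text is scanned too.

```go
package main

import (
	"fmt"
	"strings"

	"golang.org/x/net/html"
)

// replaceContent mimics what the processors rely on: split the text node
// around [start, end), insert newNode between the two halves, and leave the
// trailing text as a fresh sibling that can be scanned on the next iteration.
func replaceContent(node *html.Node, start, end int, newNode *html.Node) {
	after := &html.Node{Type: html.TextNode, Data: node.Data[end:]}
	node.Data = node.Data[:start]
	node.Parent.InsertBefore(after, node.NextSibling)
	node.Parent.InsertBefore(newNode, after)
}

// linkRefs walks successive text nodes, turning every "#123" into a link.
// Capturing `next` before the loop bounds the walk to nodes created here.
func linkRefs(node *html.Node) {
	next := node.NextSibling
	for node != nil && node != next {
		idx := strings.Index(node.Data, "#123")
		if idx < 0 {
			return
		}
		link := &html.Node{Type: html.ElementNode, Data: "a"}
		link.AppendChild(&html.Node{Type: html.TextNode, Data: node.Data[idx : idx+4]})
		replaceContent(node, idx, idx+4, link)
		node = node.NextSibling.NextSibling // skip the inserted link, continue on the tail text
	}
}

func main() {
	p := &html.Node{Type: html.ElementNode, Data: "p"}
	text := &html.Node{Type: html.TextNode, Data: "fixes #123, duplicates #123"}
	p.AppendChild(text)
	linkRefs(text)
	var b strings.Builder
	_ = html.Render(&b, p)
	fmt.Println(b.String()) // <p>fixes <a>#123</a>, duplicates <a>#123</a></p>
}
```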
@@ -897,87 +914,112 @@ func fullSha1PatternProcessor(ctx *postProcessCtx, node *html.Node) {
|
|||||||
if ctx.metas == nil {
|
if ctx.metas == nil {
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
m := anySHA1Pattern.FindStringSubmatchIndex(node.Data)
|
|
||||||
if m == nil {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
urlFull := node.Data[m[0]:m[1]]
|
next := node.NextSibling
|
||||||
text := base.ShortSha(node.Data[m[2]:m[3]])
|
for node != nil && node != next {
|
||||||
|
m := anySHA1Pattern.FindStringSubmatchIndex(node.Data)
|
||||||
// 3rd capture group matches a optional path
|
if m == nil {
|
||||||
subpath := ""
|
return
|
||||||
if m[5] > 0 {
|
|
||||||
subpath = node.Data[m[4]:m[5]]
|
|
||||||
}
|
|
||||||
|
|
||||||
// 4th capture group matches a optional url hash
|
|
||||||
hash := ""
|
|
||||||
if m[7] > 0 {
|
|
||||||
hash = node.Data[m[6]:m[7]][1:]
|
|
||||||
}
|
|
||||||
|
|
||||||
start := m[0]
|
|
||||||
end := m[1]
|
|
||||||
|
|
||||||
// If url ends in '.', it's very likely that it is not part of the
|
|
||||||
// actual url but used to finish a sentence.
|
|
||||||
if strings.HasSuffix(urlFull, ".") {
|
|
||||||
end--
|
|
||||||
urlFull = urlFull[:len(urlFull)-1]
|
|
||||||
if hash != "" {
|
|
||||||
hash = hash[:len(hash)-1]
|
|
||||||
} else if subpath != "" {
|
|
||||||
subpath = subpath[:len(subpath)-1]
|
|
||||||
}
|
}
|
||||||
}
|
|
||||||
|
|
||||||
if subpath != "" {
|
urlFull := node.Data[m[0]:m[1]]
|
||||||
text += subpath
|
text := base.ShortSha(node.Data[m[2]:m[3]])
|
||||||
}
|
|
||||||
|
|
||||||
if hash != "" {
|
// 3rd capture group matches a optional path
|
||||||
text += " (" + hash + ")"
|
subpath := ""
|
||||||
}
|
if m[5] > 0 {
|
||||||
|
subpath = node.Data[m[4]:m[5]]
|
||||||
|
}
|
||||||
|
|
||||||
replaceContent(node, start, end, createCodeLink(urlFull, text, "commit"))
|
// 4th capture group matches a optional url hash
|
||||||
|
hash := ""
|
||||||
|
if m[7] > 0 {
|
||||||
|
hash = node.Data[m[6]:m[7]][1:]
|
||||||
|
}
|
||||||
|
|
||||||
|
start := m[0]
|
||||||
|
end := m[1]
|
||||||
|
|
||||||
|
// If url ends in '.', it's very likely that it is not part of the
|
||||||
|
// actual url but used to finish a sentence.
|
||||||
|
if strings.HasSuffix(urlFull, ".") {
|
||||||
|
end--
|
||||||
|
urlFull = urlFull[:len(urlFull)-1]
|
||||||
|
if hash != "" {
|
||||||
|
hash = hash[:len(hash)-1]
|
||||||
|
} else if subpath != "" {
|
||||||
|
subpath = subpath[:len(subpath)-1]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if subpath != "" {
|
||||||
|
text += subpath
|
||||||
|
}
|
||||||
|
|
||||||
|
if hash != "" {
|
||||||
|
text += " (" + hash + ")"
|
||||||
|
}
|
||||||
|
|
||||||
|
replaceContent(node, start, end, createCodeLink(urlFull, text, "commit"))
|
||||||
|
node = node.NextSibling.NextSibling
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// emojiShortCodeProcessor for rendering text like :smile: into emoji
|
// emojiShortCodeProcessor for rendering text like :smile: into emoji
|
||||||
func emojiShortCodeProcessor(ctx *postProcessCtx, node *html.Node) {
|
func emojiShortCodeProcessor(ctx *postProcessCtx, node *html.Node) {
|
||||||
|
start := 0
|
||||||
m := EmojiShortCodeRegex.FindStringSubmatchIndex(node.Data)
|
next := node.NextSibling
|
||||||
if m == nil {
|
for node != nil && node != next && start < len(node.Data) {
|
||||||
return
|
m := EmojiShortCodeRegex.FindStringSubmatchIndex(node.Data[start:])
|
||||||
}
|
if m == nil {
|
||||||
|
|
||||||
alias := node.Data[m[0]:m[1]]
|
|
||||||
alias = strings.ReplaceAll(alias, ":", "")
|
|
||||||
converted := emoji.FromAlias(alias)
|
|
||||||
if converted == nil {
|
|
||||||
// check if this is a custom reaction
|
|
||||||
s := strings.Join(setting.UI.Reactions, " ") + "gitea"
|
|
||||||
if strings.Contains(s, alias) {
|
|
||||||
replaceContent(node, m[0], m[1], createCustomEmoji(alias, "emoji"))
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
return
|
m[0] += start
|
||||||
}
|
m[1] += start
|
||||||
|
|
||||||
replaceContent(node, m[0], m[1], createEmoji(converted.Emoji, "emoji", converted.Description))
|
start = m[1]
|
||||||
|
|
||||||
|
alias := node.Data[m[0]:m[1]]
|
||||||
|
alias = strings.ReplaceAll(alias, ":", "")
|
||||||
|
converted := emoji.FromAlias(alias)
|
||||||
|
if converted == nil {
|
||||||
|
// check if this is a custom reaction
|
||||||
|
s := strings.Join(setting.UI.Reactions, " ") + "gitea"
|
||||||
|
if strings.Contains(s, alias) {
|
||||||
|
replaceContent(node, m[0], m[1], createCustomEmoji(alias, "emoji"))
|
||||||
|
node = node.NextSibling.NextSibling
|
||||||
|
start = 0
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
replaceContent(node, m[0], m[1], createEmoji(converted.Emoji, "emoji", converted.Description))
|
||||||
|
node = node.NextSibling.NextSibling
|
||||||
|
start = 0
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// emoji processor to match emoji and add emoji class
|
// emoji processor to match emoji and add emoji class
|
||||||
func emojiProcessor(ctx *postProcessCtx, node *html.Node) {
|
func emojiProcessor(ctx *postProcessCtx, node *html.Node) {
|
||||||
m := emoji.FindEmojiSubmatchIndex(node.Data)
|
start := 0
|
||||||
if m == nil {
|
next := node.NextSibling
|
||||||
return
|
for node != nil && node != next && start < len(node.Data) {
|
||||||
}
|
m := emoji.FindEmojiSubmatchIndex(node.Data[start:])
|
||||||
|
if m == nil {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
m[0] += start
|
||||||
|
m[1] += start
|
||||||
|
|
||||||
codepoint := node.Data[m[0]:m[1]]
|
codepoint := node.Data[m[0]:m[1]]
|
||||||
val := emoji.FromCode(codepoint)
|
start = m[1]
|
||||||
if val != nil {
|
val := emoji.FromCode(codepoint)
|
||||||
replaceContent(node, m[0], m[1], createEmoji(codepoint, "emoji", val.Description))
|
if val != nil {
|
||||||
|
replaceContent(node, m[0], m[1], createEmoji(codepoint, "emoji", val.Description))
|
||||||
|
node = node.NextSibling.NextSibling
|
||||||
|
start = 0
|
||||||
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -987,49 +1029,69 @@ func sha1CurrentPatternProcessor(ctx *postProcessCtx, node *html.Node) {
|
|||||||
if ctx.metas == nil || ctx.metas["user"] == "" || ctx.metas["repo"] == "" || ctx.metas["repoPath"] == "" {
|
if ctx.metas == nil || ctx.metas["user"] == "" || ctx.metas["repo"] == "" || ctx.metas["repoPath"] == "" {
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
m := sha1CurrentPattern.FindStringSubmatchIndex(node.Data)
|
|
||||||
if m == nil {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
hash := node.Data[m[2]:m[3]]
|
|
||||||
// The regex does not lie, it matches the hash pattern.
|
|
||||||
// However, a regex cannot know if a hash actually exists or not.
|
|
||||||
// We could assume that a SHA1 hash should probably contain alphas AND numerics
|
|
||||||
// but that is not always the case.
|
|
||||||
// Although unlikely, deadbeef and 1234567 are valid short forms of SHA1 hash
|
|
||||||
// as used by git and github for linking and thus we have to do similar.
|
|
||||||
// Because of this, we check to make sure that a matched hash is actually
|
|
||||||
// a commit in the repository before making it a link.
|
|
||||||
if _, err := git.NewCommand("rev-parse", "--verify", hash).RunInDirBytes(ctx.metas["repoPath"]); err != nil {
|
|
||||||
if !strings.Contains(err.Error(), "fatal: Needed a single revision") {
|
|
||||||
log.Debug("sha1CurrentPatternProcessor git rev-parse: %v", err)
|
|
||||||
}
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
replaceContent(node, m[2], m[3],
|
start := 0
|
||||||
createCodeLink(util.URLJoin(setting.AppURL, ctx.metas["user"], ctx.metas["repo"], "commit", hash), base.ShortSha(hash), "commit"))
|
next := node.NextSibling
|
||||||
|
for node != nil && node != next && start < len(node.Data) {
|
||||||
|
m := sha1CurrentPattern.FindStringSubmatchIndex(node.Data[start:])
|
||||||
|
if m == nil {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
m[2] += start
|
||||||
|
m[3] += start
|
||||||
|
|
||||||
|
hash := node.Data[m[2]:m[3]]
|
||||||
|
// The regex does not lie, it matches the hash pattern.
|
||||||
|
// However, a regex cannot know if a hash actually exists or not.
|
||||||
|
// We could assume that a SHA1 hash should probably contain alphas AND numerics
|
||||||
|
// but that is not always the case.
|
||||||
|
// Although unlikely, deadbeef and 1234567 are valid short forms of SHA1 hash
|
||||||
|
// as used by git and github for linking and thus we have to do similar.
|
||||||
|
// Because of this, we check to make sure that a matched hash is actually
|
||||||
|
// a commit in the repository before making it a link.
|
||||||
|
if _, err := git.NewCommand("rev-parse", "--verify", hash).RunInDirBytes(ctx.metas["repoPath"]); err != nil {
|
||||||
|
if !strings.Contains(err.Error(), "fatal: Needed a single revision") {
|
||||||
|
log.Debug("sha1CurrentPatternProcessor git rev-parse: %v", err)
|
||||||
|
}
|
||||||
|
start = m[3]
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
replaceContent(node, m[2], m[3],
|
||||||
|
createCodeLink(util.URLJoin(setting.AppURL, ctx.metas["user"], ctx.metas["repo"], "commit", hash), base.ShortSha(hash), "commit"))
|
||||||
|
start = 0
|
||||||
|
node = node.NextSibling.NextSibling
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// emailAddressProcessor replaces raw email addresses with a mailto: link.
|
// emailAddressProcessor replaces raw email addresses with a mailto: link.
|
||||||
func emailAddressProcessor(ctx *postProcessCtx, node *html.Node) {
|
func emailAddressProcessor(ctx *postProcessCtx, node *html.Node) {
|
||||||
m := emailRegex.FindStringSubmatchIndex(node.Data)
|
next := node.NextSibling
|
||||||
if m == nil {
|
for node != nil && node != next {
|
||||||
return
|
m := emailRegex.FindStringSubmatchIndex(node.Data)
|
||||||
|
if m == nil {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
mail := node.Data[m[2]:m[3]]
|
||||||
|
replaceContent(node, m[2], m[3], createLink("mailto:"+mail, mail, "mailto"))
|
||||||
|
node = node.NextSibling.NextSibling
|
||||||
}
|
}
|
||||||
mail := node.Data[m[2]:m[3]]
|
|
||||||
replaceContent(node, m[2], m[3], createLink("mailto:"+mail, mail, "mailto"))
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// linkProcessor creates links for any HTTP or HTTPS URL not captured by
|
// linkProcessor creates links for any HTTP or HTTPS URL not captured by
|
||||||
// markdown.
|
// markdown.
|
||||||
func linkProcessor(ctx *postProcessCtx, node *html.Node) {
|
func linkProcessor(ctx *postProcessCtx, node *html.Node) {
|
||||||
m := common.LinkRegex.FindStringIndex(node.Data)
|
next := node.NextSibling
|
||||||
if m == nil {
|
for node != nil && node != next {
|
||||||
return
|
m := common.LinkRegex.FindStringIndex(node.Data)
|
||||||
|
if m == nil {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
uri := node.Data[m[0]:m[1]]
|
||||||
|
replaceContent(node, m[0], m[1], createLink(uri, uri, "link"))
|
||||||
|
node = node.NextSibling.NextSibling
|
||||||
}
|
}
|
||||||
uri := node.Data[m[0]:m[1]]
|
|
||||||
replaceContent(node, m[0], m[1], createLink(uri, uri, "link"))
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func genDefaultLinkProcessor(defaultLink string) processor {
|
func genDefaultLinkProcessor(defaultLink string) processor {
|
||||||
@@ -1053,12 +1115,17 @@ func genDefaultLinkProcessor(defaultLink string) processor {
|
|||||||
|
|
||||||
// descriptionLinkProcessor creates links for DescriptionHTML
|
// descriptionLinkProcessor creates links for DescriptionHTML
|
||||||
func descriptionLinkProcessor(ctx *postProcessCtx, node *html.Node) {
|
func descriptionLinkProcessor(ctx *postProcessCtx, node *html.Node) {
|
||||||
m := common.LinkRegex.FindStringIndex(node.Data)
|
next := node.NextSibling
|
||||||
if m == nil {
|
for node != nil && node != next {
|
||||||
return
|
m := common.LinkRegex.FindStringIndex(node.Data)
|
||||||
|
if m == nil {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
uri := node.Data[m[0]:m[1]]
|
||||||
|
replaceContent(node, m[0], m[1], createDescriptionLink(uri, uri))
|
||||||
|
node = node.NextSibling.NextSibling
|
||||||
}
|
}
|
||||||
uri := node.Data[m[0]:m[1]]
|
|
||||||
replaceContent(node, m[0], m[1], createDescriptionLink(uri, uri))
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func createDescriptionLink(href, content string) *html.Node {
|
func createDescriptionLink(href, content string) *html.Node {
|
||||||
|
|||||||
@@ -384,6 +384,32 @@ func TestRender_ShortLinks(t *testing.T) {
 		`<p><a href="https://example.org" rel="nofollow">[[foobar]]</a></p>`)
 }

+func TestRender_RelativeImages(t *testing.T) {
+	setting.AppURL = AppURL
+	setting.AppSubURL = AppSubURL
+	tree := util.URLJoin(AppSubURL, "src", "master")
+
+	test := func(input, expected, expectedWiki string) {
+		buffer := markdown.RenderString(input, tree, localMetas)
+		assert.Equal(t, strings.TrimSpace(expected), strings.TrimSpace(buffer))
+		buffer = markdown.RenderWiki([]byte(input), setting.AppSubURL, localMetas)
+		assert.Equal(t, strings.TrimSpace(expectedWiki), strings.TrimSpace(buffer))
+	}
+
+	rawwiki := util.URLJoin(AppSubURL, "wiki", "raw")
+	mediatree := util.URLJoin(AppSubURL, "media", "master")
+
+	test(
+		`<img src="Link">`,
+		`<img src="`+util.URLJoin(mediatree, "Link")+`"/>`,
+		`<img src="`+util.URLJoin(rawwiki, "Link")+`"/>`)
+
+	test(
+		`<img src="./icon.png">`,
+		`<img src="`+util.URLJoin(mediatree, "icon.png")+`"/>`,
+		`<img src="`+util.URLJoin(rawwiki, "icon.png")+`"/>`)
+}
+
 func Test_ParseClusterFuzz(t *testing.T) {
 	setting.AppURL = AppURL
 	setting.AppSubURL = AppSubURL
@@ -408,3 +434,36 @@ func Test_ParseClusterFuzz(t *testing.T) {

 	assert.NotContains(t, string(val), "<html")
 }
+
+func TestIssue16020(t *testing.T) {
+	setting.AppURL = AppURL
+	setting.AppSubURL = AppSubURL
+
+	var localMetas = map[string]string{
+		"user": "go-gitea",
+		"repo": "gitea",
+	}
+
+	data := `<img src="data:image/png;base64,i//V"/>`
+
+	// func PostProcess(rawHTML []byte, urlPrefix string, metas map[string]string, isWikiMarkdown bool) ([]byte, error)
+	res, err := PostProcess([]byte(data), "https://example.com", localMetas, false)
+	assert.NoError(t, err)
+	assert.Equal(t, data, string(res))
+}
+
+func BenchmarkEmojiPostprocess(b *testing.B) {
+	data := "🥰 "
+	for len(data) < 1<<16 {
+		data += data
+	}
+	b.ResetTimer()
+	for i := 0; i < b.N; i++ {
+		_, err := PostProcess(
+			[]byte(data),
+			"https://example.com",
+			localMetas,
+			false)
+		assert.NoError(b, err)
+	}
+}
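TestRender_RelativeImages above expects `./icon.png` to resolve against the media tree for regular rendering and against the wiki raw path for wiki rendering. The following rough illustration uses only the standard library (the base URL below is made up; Gitea itself does this with util.URLJoin):

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// join approximates util.URLJoin for this example: join and clean path
// segments, which is what folds "./icon.png" into the tree prefix.
func join(base string, elems ...string) string {
	u, err := url.Parse(base)
	if err != nil {
		return ""
	}
	u.Path = path.Join(append([]string{u.Path}, elems...)...)
	return u.String()
}

func main() {
	appSubURL := "https://example.com/gitea" // made-up AppSubURL, for illustration only
	mediaTree := join(appSubURL, "media", "master")
	rawWiki := join(appSubURL, "wiki", "raw")
	fmt.Println(join(mediaTree, "./icon.png")) // https://example.com/gitea/media/master/icon.png
	fmt.Println(join(rawWiki, "./icon.png"))   // https://example.com/gitea/wiki/raw/icon.png
}
```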
@@ -50,9 +50,6 @@ func ReplaceSanitizer() {
 		sanitizer.policy.AllowURLSchemes(setting.Markdown.CustomURLSchemes...)
 	}

-	// Allow keyword markup
-	sanitizer.policy.AllowAttrs("class").Matching(regexp.MustCompile(`^` + keywordClass + `$`)).OnElements("span")
-
 	// Allow classes for anchors
 	sanitizer.policy.AllowAttrs("class").Matching(regexp.MustCompile(`ref-issue`)).OnElements("a")

@@ -68,8 +65,8 @@ func ReplaceSanitizer() {
 	// Allow classes for emojis
 	sanitizer.policy.AllowAttrs("class").Matching(regexp.MustCompile(`emoji`)).OnElements("img")

-	// Allow icons, emojis, and chroma syntax on span
-	sanitizer.policy.AllowAttrs("class").Matching(regexp.MustCompile(`^((icon(\s+[\p{L}\p{N}_-]+)+)|(emoji))$|^([a-z][a-z0-9]{0,2})$`)).OnElements("span")
+	// Allow icons, emojis, chroma syntax and keyword markup on span
+	sanitizer.policy.AllowAttrs("class").Matching(regexp.MustCompile(`^((icon(\s+[\p{L}\p{N}_-]+)+)|(emoji))$|^([a-z][a-z0-9]{0,2})$|^` + keywordClass + `$`)).OnElements("span")

 	// Allow generally safe attributes
 	generalSafeAttrs := []string{"abbr", "accept", "accept-charset",
@@ -11,10 +11,13 @@ import "code.gitea.io/gitea/modules/structs"
 // this is for internal usage by migrations module and func who interact with it
 type MigrateOptions struct {
 	// required: true
 	CloneAddr    string `json:"clone_addr" binding:"Required"`
-	AuthUsername string `json:"auth_username"`
-	AuthPassword string `json:"auth_password"`
-	AuthToken    string `json:"auth_token"`
+	CloneAddrEncrypted    string `json:"clone_addr_encrypted,omitempty"`
+	AuthUsername          string `json:"auth_username"`
+	AuthPassword          string `json:"auth_password,omitempty"`
+	AuthPasswordEncrypted string `json:"auth_password_encrypted,omitempty"`
+	AuthToken             string `json:"auth_token,omitempty"`
+	AuthTokenEncrypted    string `json:"auth_token_encrypted,omitempty"`
 	// required: true
 	UID int `json:"uid" binding:"Required"`
 	// required: true
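The new `*Encrypted` fields plus `omitempty` on the plaintext credential fields mean that once the migration service encrypts a credential and blanks the original (as CreateMigrateTask does later in this diff), the secret no longer appears in the persisted task payload at all. A minimal illustration of that behaviour, with the field set trimmed and the ciphertext faked:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// A cut-down copy of the fields above, just to show the effect of ",omitempty".
type migrateOptions struct {
	CloneAddr             string `json:"clone_addr"`
	AuthPassword          string `json:"auth_password,omitempty"`
	AuthPasswordEncrypted string `json:"auth_password_encrypted,omitempty"`
}

func main() {
	opts := migrateOptions{
		CloneAddr:    "https://example.com/org/repo.git",
		AuthPassword: "hunter2",
	}
	// stand-in for secret.EncryptSecret(setting.SecretKey, opts.AuthPassword)
	opts.AuthPasswordEncrypted = "(encrypted)"
	opts.AuthPassword = ""

	out, _ := json.Marshal(&opts)
	// {"clone_addr":"https://example.com/org/repo.git","auth_password_encrypted":"(encrypted)"}
	fmt.Println(string(out))
}
```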
@@ -13,6 +13,7 @@ import (
 	"os"
 	"path/filepath"
 	"strconv"
+	"strings"
 	"time"

 	"code.gitea.io/gitea/models"
@@ -563,8 +564,42 @@ func DumpRepository(ctx context.Context, baseDir, ownerName string, opts base.Mi
 	return nil
 }

+func updateOptionsUnits(opts *base.MigrateOptions, units []string) {
+	if len(units) == 0 {
+		opts.Wiki = true
+		opts.Issues = true
+		opts.Milestones = true
+		opts.Labels = true
+		opts.Releases = true
+		opts.Comments = true
+		opts.PullRequests = true
+		opts.ReleaseAssets = true
+	} else {
+		for _, unit := range units {
+			switch strings.ToLower(unit) {
+			case "wiki":
+				opts.Wiki = true
+			case "issues":
+				opts.Issues = true
+			case "milestones":
+				opts.Milestones = true
+			case "labels":
+				opts.Labels = true
+			case "releases":
+				opts.Releases = true
+			case "release_assets":
+				opts.ReleaseAssets = true
+			case "comments":
+				opts.Comments = true
+			case "pull_requests":
+				opts.PullRequests = true
+			}
+		}
+	}
+}
+
 // RestoreRepository restore a repository from the disk directory
-func RestoreRepository(ctx context.Context, baseDir string, ownerName, repoName string) error {
+func RestoreRepository(ctx context.Context, baseDir string, ownerName, repoName string, units []string) error {
 	doer, err := models.GetAdminUser()
 	if err != nil {
 		return err
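updateOptionsUnits treats an empty list as "everything" and otherwise switches on lower-cased unit names. The sketch below mirrors that logic on a trimmed-down struct purely to make the behaviour visible; the real function operates on base.MigrateOptions and supports the full unit list shown above.

```go
package main

import (
	"fmt"
	"strings"
)

// opts is a stand-in for base.MigrateOptions, reduced to three units.
type opts struct{ Wiki, Issues, Labels bool }

// selectUnits mirrors updateOptionsUnits: empty slice means everything,
// otherwise only the named units are enabled (case-insensitively).
func selectUnits(o *opts, units []string) {
	if len(units) == 0 {
		o.Wiki, o.Issues, o.Labels = true, true, true
		return
	}
	for _, u := range units {
		switch strings.ToLower(u) {
		case "wiki":
			o.Wiki = true
		case "issues":
			o.Issues = true
		case "labels":
			o.Labels = true
		}
	}
}

func main() {
	var all, some opts
	selectUnits(&all, nil)                         // restore everything
	selectUnits(&some, []string{"wiki", "Labels"}) // restore only wiki + labels
	fmt.Printf("%+v\n%+v\n", all, some)
}
```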
@@ -580,17 +615,12 @@ func RestoreRepository(ctx context.Context, baseDir string, ownerName, repoName
 	}
 	tp, _ := strconv.Atoi(opts["service_type"])

-	if err = migrateRepository(downloader, uploader, base.MigrateOptions{
-		Wiki:           true,
-		Issues:         true,
-		Milestones:     true,
-		Labels:         true,
-		Releases:       true,
-		Comments:       true,
-		PullRequests:   true,
-		ReleaseAssets:  true,
+	var migrateOpts = base.MigrateOptions{
 		GitServiceType: structs.GitServiceType(tp),
-	}); err != nil {
+	}
+	updateOptionsUnits(&migrateOpts, units)
+
+	if err = migrateRepository(downloader, uploader, migrateOpts); err != nil {
 		if err1 := uploader.Rollback(); err1 != nil {
 			log.Error("rollback failed: %v", err1)
 		}
@@ -248,14 +248,16 @@ func (g *GiteaLocalUploader) CreateReleases(releases ...*base.Release) error {
 			rel.OriginalAuthorID = release.PublisherID
 		}

-		// calc NumCommits
-		commit, err := g.gitRepo.GetCommit(rel.TagName)
-		if err != nil {
-			return fmt.Errorf("GetCommit: %v", err)
-		}
-		rel.NumCommits, err = commit.CommitsCount()
-		if err != nil {
-			return fmt.Errorf("CommitsCount: %v", err)
+		// calc NumCommits if no draft
+		if !release.Draft {
+			commit, err := g.gitRepo.GetCommit(rel.TagName)
+			if err != nil {
+				return fmt.Errorf("GetCommit: %v", err)
+			}
+			rel.NumCommits, err = commit.CommitsCount()
+			if err != nil {
+				return fmt.Errorf("CommitsCount: %v", err)
+			}
 		}

 		for _, asset := range release.Assets {
@@ -268,9 +270,10 @@ func (g *GiteaLocalUploader) CreateReleases(releases ...*base.Release) error {
 			}

 			// download attachment
-			err = func() error {
+			err := func() error {
 				// asset.DownloadURL maybe a local file
 				var rc io.ReadCloser
+				var err error
 				if asset.DownloadURL == nil {
 					rc, err = asset.DownloadFunc()
 					if err != nil {
@@ -849,6 +852,7 @@ func (g *GiteaLocalUploader) CreateReviews(reviews ...*base.Review) error {
 // Rollback when migrating failed, this will rollback all the changes.
 func (g *GiteaLocalUploader) Rollback() error {
 	if g.repo != nil && g.repo.ID > 0 {
+		g.gitRepo.Close()
 		if err := models.DeleteRepository(g.doer, g.repo.OwnerID, g.repo.ID); err != nil {
 			return err
 		}
@@ -264,34 +264,29 @@ func (g *GithubDownloaderV3) GetLabels() ([]*base.Label, error) {
 }

 func (g *GithubDownloaderV3) convertGithubRelease(rel *github.RepositoryRelease) *base.Release {
-	var (
-		name string
-		desc string
-	)
-	if rel.Body != nil {
-		desc = *rel.Body
-	}
-	if rel.Name != nil {
-		name = *rel.Name
-	}
-
-	var email string
-	if rel.Author.Email != nil {
-		email = *rel.Author.Email
-	}
-
 	r := &base.Release{
 		TagName:         *rel.TagName,
 		TargetCommitish: *rel.TargetCommitish,
-		Name:            name,
-		Body:            desc,
 		Draft:           *rel.Draft,
 		Prerelease:      *rel.Prerelease,
 		Created:         rel.CreatedAt.Time,
 		PublisherID:     *rel.Author.ID,
 		PublisherName:   *rel.Author.Login,
-		PublisherEmail:  email,
-		Published:       rel.PublishedAt.Time,
+	}
+
+	if rel.Body != nil {
+		r.Body = *rel.Body
+	}
+	if rel.Name != nil {
+		r.Name = *rel.Name
+	}
+
+	if rel.Author.Email != nil {
+		r.PublisherEmail = *rel.Author.Email
+	}
+
+	if rel.PublishedAt != nil {
+		r.Published = rel.PublishedAt.Time
 	}

 	for _, asset := range rel.Assets {
@@ -306,18 +301,17 @@ func (g *GithubDownloaderV3) convertGithubRelease(rel *github.RepositoryRelease)
 			Updated:       asset.UpdatedAt.Time,
 			DownloadFunc: func() (io.ReadCloser, error) {
 				g.sleep()
-				asset, redir, err := g.client.Repositories.DownloadReleaseAsset(g.ctx, g.repoOwner, g.repoName, assetID, nil)
+				asset, redirectURL, err := g.client.Repositories.DownloadReleaseAsset(g.ctx, g.repoOwner, g.repoName, assetID, nil)
 				if err != nil {
 					return nil, err
 				}
-				err = g.RefreshRate()
-				if err != nil {
+				if err := g.RefreshRate(); err != nil {
 					log.Error("g.client.RateLimits: %s", err)
 				}
 				if asset == nil {
-					if redir != "" {
+					if redirectURL != "" {
 						g.sleep()
-						req, err := http.NewRequestWithContext(g.ctx, "GET", redir, nil)
+						req, err := http.NewRequestWithContext(g.ctx, "GET", redirectURL, nil)
 						if err != nil {
 							return nil, err
 						}
@@ -54,7 +54,6 @@ func (m *mailNotifier) NotifyNewIssue(issue *models.Issue, mentions []*models.Us

 func (m *mailNotifier) NotifyIssueChangeStatus(doer *models.User, issue *models.Issue, actionComment *models.Comment, isClosed bool) {
 	var actionType models.ActionType
-	issue.Content = ""
 	if issue.IsPull {
 		if isClosed {
 			actionType = models.ActionClosePullRequest
@@ -120,7 +119,6 @@ func (m *mailNotifier) NotifyMergePullRequest(pr *models.PullRequest, doer *mode
 		log.Error("pr.LoadIssue: %v", err)
 		return
 	}
-	pr.Issue.Content = ""
 	if err := mailer.MailParticipants(pr.Issue, doer, models.ActionMergePullRequest, nil); err != nil {
 		log.Error("MailParticipants: %v", err)
 	}
@@ -147,7 +145,6 @@ func (m *mailNotifier) NotifyPullRequestPushCommits(doer *models.User, pr *model
 	if err := comment.LoadPushCommits(); err != nil {
 		log.Error("comment.LoadPushCommits: %v", err)
 	}
-	comment.Content = ""

 	m.NotifyCreateIssueComment(doer, comment.Issue.Repo, comment.Issue, comment, nil)
 }
modules/private/restore_repo.go (new file, 60 lines)
@@ -0,0 +1,60 @@
+// Copyright 2020 The Gitea Authors. All rights reserved.
+// Use of this source code is governed by a MIT-style
+// license that can be found in the LICENSE file.
+
+package private
+
+import (
+	"fmt"
+	"io/ioutil"
+	"net/http"
+	"time"
+
+	"code.gitea.io/gitea/modules/setting"
+	jsoniter "github.com/json-iterator/go"
+)
+
+// RestoreParams structure holds a data for restore repository
+type RestoreParams struct {
+	RepoDir   string
+	OwnerName string
+	RepoName  string
+	Units     []string
+}
+
+// RestoreRepo calls the internal RestoreRepo function
+func RestoreRepo(repoDir, ownerName, repoName string, units []string) (int, string) {
+	reqURL := setting.LocalURL + "api/internal/restore_repo"
+
+	req := newInternalRequest(reqURL, "POST")
+	req.SetTimeout(3*time.Second, 0) // since the request will spend much time, don't timeout
+	req = req.Header("Content-Type", "application/json")
+	json := jsoniter.ConfigCompatibleWithStandardLibrary
+	jsonBytes, _ := json.Marshal(RestoreParams{
+		RepoDir:   repoDir,
+		OwnerName: ownerName,
+		RepoName:  repoName,
+		Units:     units,
+	})
+	req.Body(jsonBytes)
+	resp, err := req.Response()
+	if err != nil {
+		return http.StatusInternalServerError, fmt.Sprintf("Unable to contact gitea: %v, could you confirm it's running?", err.Error())
+	}
+	defer resp.Body.Close()
+
+	if resp.StatusCode != 200 {
+		var ret = struct {
+			Err string `json:"err"`
+		}{}
+		body, err := ioutil.ReadAll(resp.Body)
+		if err != nil {
+			return http.StatusInternalServerError, fmt.Sprintf("Response body error: %v", err.Error())
+		}
+		if err := json.Unmarshal(body, &ret); err != nil {
+			return http.StatusInternalServerError, fmt.Sprintf("Response body Unmarshal error: %v", err.Error())
+		}
+	}
+
+	return http.StatusOK, fmt.Sprintf("Restore repo %s/%s successfully", ownerName, repoName)
+}
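A sketch of how the new private.RestoreRepo helper might be invoked, for example from a CLI command; the directory, owner/repo names and unit selection below are placeholders, and only the function signature comes from the file above. Note that it reports an HTTP status code plus a human-readable message rather than returning an error.

```go
package main

import (
	"fmt"
	"net/http"
	"os"

	"code.gitea.io/gitea/modules/private"
)

func main() {
	// hypothetical selection; an empty slice restores every unit
	units := []string{"wiki", "issues"}
	status, msg := private.RestoreRepo("/tmp/repo-dump", "someowner", "somerepo", units)
	if status != http.StatusOK {
		fmt.Fprintln(os.Stderr, msg)
		os.Exit(1)
	}
	fmt.Println(msg)
}
```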
@@ -198,17 +198,20 @@ func (m *Manager) FlushAll(baseCtx context.Context, timeout time.Duration) error
 					wg.Done()
 				}(mq)
 			} else {
-				log.Debug("Queue: %s is non-empty but is not flushable - adding 100 millisecond wait", mq.Name)
-				go func() {
-					<-time.After(100 * time.Millisecond)
-					wg.Done()
-				}()
+				log.Debug("Queue: %s is non-empty but is not flushable", mq.Name)
+				wg.Done()
 			}
-
 		}
 		if allEmpty {
+			log.Debug("All queues are empty")
 			break
 		}
+		// Ensure there are always at least 100ms between loops but not more if we've actually been doing some flushign
+		// but don't delay cancellation here.
+		select {
+		case <-ctx.Done():
+		case <-time.After(100 * time.Millisecond):
+		}
 		wg.Wait()
 	}
 	return nil
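The select on ctx.Done() versus time.After added above is a common Go idiom: pace the loop at roughly 100ms per pass without letting the timer delay cancellation. A self-contained illustration, with timings chosen arbitrarily:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// pacedLoop sleeps up to 100ms between passes, but returns immediately
// once the context is cancelled instead of waiting out the timer.
func pacedLoop(ctx context.Context, iterations int) {
	for i := 0; i < iterations; i++ {
		fmt.Println("flush pass", i)
		select {
		case <-ctx.Done():
			fmt.Println("cancelled:", ctx.Err())
			return
		case <-time.After(100 * time.Millisecond):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 250*time.Millisecond)
	defer cancel()
	pacedLoop(ctx, 10) // prints ~3 passes, then stops when the context times out
}
```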
||||||
|
|||||||
@@ -5,6 +5,7 @@
|
|||||||
package references
|
package references
|
||||||
|
|
||||||
import (
|
import (
|
||||||
|
"bytes"
|
||||||
"net/url"
|
"net/url"
|
||||||
"regexp"
|
"regexp"
|
||||||
"strconv"
|
"strconv"
|
||||||
@@ -14,6 +15,8 @@ import (
|
|||||||
"code.gitea.io/gitea/modules/log"
|
"code.gitea.io/gitea/modules/log"
|
||||||
"code.gitea.io/gitea/modules/markup/mdstripper"
|
"code.gitea.io/gitea/modules/markup/mdstripper"
|
||||||
"code.gitea.io/gitea/modules/setting"
|
"code.gitea.io/gitea/modules/setting"
|
||||||
|
|
||||||
|
"github.com/yuin/goldmark/util"
|
||||||
)
|
)
|
||||||
|
|
||||||
var (
|
var (
|
||||||
@@ -321,7 +324,7 @@ func FindRenderizableReferenceNumeric(content string, prOnly bool) (bool, *Rende
|
|||||||
return false, nil
|
return false, nil
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
r := getCrossReference([]byte(content), match[2], match[3], false, prOnly)
|
r := getCrossReference(util.StringToReadOnlyBytes(content), match[2], match[3], false, prOnly)
|
||||||
if r == nil {
|
if r == nil {
|
||||||
return false, nil
|
return false, nil
|
||||||
}
|
}
|
||||||
@@ -465,17 +468,16 @@ func findAllIssueReferencesBytes(content []byte, links []string) []*rawReference
|
|||||||
}
|
}
|
||||||
|
|
||||||
func getCrossReference(content []byte, start, end int, fromLink bool, prOnly bool) *rawReference {
|
func getCrossReference(content []byte, start, end int, fromLink bool, prOnly bool) *rawReference {
|
||||||
refid := string(content[start:end])
|
sep := bytes.IndexAny(content[start:end], "#!")
|
||||||
sep := strings.IndexAny(refid, "#!")
|
|
||||||
if sep < 0 {
|
if sep < 0 {
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
isPull := refid[sep] == '!'
|
isPull := content[start+sep] == '!'
|
||||||
if prOnly && !isPull {
|
if prOnly && !isPull {
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
repo := refid[:sep]
|
repo := string(content[start : start+sep])
|
||||||
issue := refid[sep+1:]
|
issue := string(content[start+sep+1 : end])
|
||||||
index, err := strconv.ParseInt(issue, 10, 64)
|
index, err := strconv.ParseInt(issue, 10, 64)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil
|
return nil
|
||||||
|
|||||||
@@ -117,6 +117,8 @@ var (
|
|||||||
GracefulRestartable bool
|
GracefulRestartable bool
|
||||||
GracefulHammerTime time.Duration
|
GracefulHammerTime time.Duration
|
||||||
StartupTimeout time.Duration
|
StartupTimeout time.Duration
|
||||||
|
PerWriteTimeout = 30 * time.Second
|
||||||
|
PerWritePerKbTimeout = 10 * time.Second
|
||||||
StaticURLPrefix string
|
StaticURLPrefix string
|
||||||
AbsoluteAssetURL string
|
AbsoluteAssetURL string
|
||||||
|
|
||||||
@@ -147,18 +149,22 @@ var (
|
|||||||
TrustedUserCAKeys []string `ini:"SSH_TRUSTED_USER_CA_KEYS"`
|
TrustedUserCAKeys []string `ini:"SSH_TRUSTED_USER_CA_KEYS"`
|
||||||
TrustedUserCAKeysFile string `ini:"SSH_TRUSTED_USER_CA_KEYS_FILENAME"`
|
TrustedUserCAKeysFile string `ini:"SSH_TRUSTED_USER_CA_KEYS_FILENAME"`
|
||||||
TrustedUserCAKeysParsed []gossh.PublicKey `ini:"-"`
|
TrustedUserCAKeysParsed []gossh.PublicKey `ini:"-"`
|
||||||
|
PerWriteTimeout time.Duration `ini:"SSH_PER_WRITE_TIMEOUT"`
|
||||||
|
PerWritePerKbTimeout time.Duration `ini:"SSH_PER_WRITE_PER_KB_TIMEOUT"`
|
||||||
}{
|
}{
|
||||||
Disabled: false,
|
Disabled: false,
|
||||||
StartBuiltinServer: false,
|
StartBuiltinServer: false,
|
||||||
Domain: "",
|
Domain: "",
|
||||||
Port: 22,
|
Port: 22,
|
||||||
ServerCiphers: []string{"aes128-ctr", "aes192-ctr", "aes256-ctr", "aes128-gcm@openssh.com", "arcfour256", "arcfour128"},
|
ServerCiphers: []string{"aes128-ctr", "aes192-ctr", "aes256-ctr", "aes128-gcm@openssh.com", "arcfour256", "arcfour128"},
|
||||||
ServerKeyExchanges: []string{"diffie-hellman-group1-sha1", "diffie-hellman-group14-sha1", "ecdh-sha2-nistp256", "ecdh-sha2-nistp384", "ecdh-sha2-nistp521", "curve25519-sha256@libssh.org"},
|
ServerKeyExchanges: []string{"diffie-hellman-group1-sha1", "diffie-hellman-group14-sha1", "ecdh-sha2-nistp256", "ecdh-sha2-nistp384", "ecdh-sha2-nistp521", "curve25519-sha256@libssh.org"},
|
||||||
ServerMACs: []string{"hmac-sha2-256-etm@openssh.com", "hmac-sha2-256", "hmac-sha1", "hmac-sha1-96"},
|
ServerMACs: []string{"hmac-sha2-256-etm@openssh.com", "hmac-sha2-256", "hmac-sha1", "hmac-sha1-96"},
|
||||||
KeygenPath: "ssh-keygen",
|
KeygenPath: "ssh-keygen",
|
||||||
MinimumKeySizeCheck: true,
|
MinimumKeySizeCheck: true,
|
||||||
MinimumKeySizes: map[string]int{"ed25519": 256, "ed25519-sk": 256, "ecdsa": 256, "ecdsa-sk": 256, "rsa": 2048},
|
MinimumKeySizes: map[string]int{"ed25519": 256, "ed25519-sk": 256, "ecdsa": 256, "ecdsa-sk": 256, "rsa": 2048},
|
||||||
ServerHostKeys: []string{"ssh/gitea.rsa", "ssh/gogs.rsa"},
|
ServerHostKeys: []string{"ssh/gitea.rsa", "ssh/gogs.rsa"},
|
||||||
|
PerWriteTimeout: PerWriteTimeout,
|
||||||
|
PerWritePerKbTimeout: PerWritePerKbTimeout,
|
||||||
}
|
}
|
||||||
|
|
||||||
// Security settings
|
// Security settings
|
||||||
@@ -607,6 +613,8 @@ func NewContext() {
|
|||||||
GracefulRestartable = sec.Key("ALLOW_GRACEFUL_RESTARTS").MustBool(true)
|
GracefulRestartable = sec.Key("ALLOW_GRACEFUL_RESTARTS").MustBool(true)
|
||||||
GracefulHammerTime = sec.Key("GRACEFUL_HAMMER_TIME").MustDuration(60 * time.Second)
|
GracefulHammerTime = sec.Key("GRACEFUL_HAMMER_TIME").MustDuration(60 * time.Second)
|
||||||
StartupTimeout = sec.Key("STARTUP_TIMEOUT").MustDuration(0 * time.Second)
|
StartupTimeout = sec.Key("STARTUP_TIMEOUT").MustDuration(0 * time.Second)
|
||||||
|
PerWriteTimeout = sec.Key("PER_WRITE_TIMEOUT").MustDuration(PerWriteTimeout)
|
||||||
|
PerWritePerKbTimeout = sec.Key("PER_WRITE_PER_KB_TIMEOUT").MustDuration(PerWritePerKbTimeout)
|
||||||
|
|
||||||
defaultAppURL := string(Protocol) + "://" + Domain
|
defaultAppURL := string(Protocol) + "://" + Domain
|
||||||
if (Protocol == HTTP && HTTPPort != "80") || (Protocol == HTTPS && HTTPPort != "443") {
|
if (Protocol == HTTP && HTTPPort != "80") || (Protocol == HTTPS && HTTPPort != "443") {
|
||||||
@@ -772,6 +780,8 @@ func NewContext() {
|
|||||||
}
|
}
|
||||||
|
|
||||||
SSH.ExposeAnonymous = sec.Key("SSH_EXPOSE_ANONYMOUS").MustBool(false)
|
SSH.ExposeAnonymous = sec.Key("SSH_EXPOSE_ANONYMOUS").MustBool(false)
|
||||||
|
SSH.PerWriteTimeout = sec.Key("SSH_PER_WRITE_TIMEOUT").MustDuration(PerWriteTimeout)
|
||||||
|
SSH.PerWritePerKbTimeout = sec.Key("SSH_PER_WRITE_PER_KB_TIMEOUT").MustDuration(PerWritePerKbTimeout)
|
||||||
|
|
||||||
if err = Cfg.Section("oauth2").MapTo(&OAuth2); err != nil {
|
if err = Cfg.Section("oauth2").MapTo(&OAuth2); err != nil {
|
||||||
log.Fatal("Failed to OAuth2 settings: %v", err)
|
log.Fatal("Failed to OAuth2 settings: %v", err)
|
||||||
|
|||||||
@@ -7,12 +7,15 @@ package ssh
|
|||||||
import (
|
import (
|
||||||
"code.gitea.io/gitea/modules/graceful"
|
"code.gitea.io/gitea/modules/graceful"
|
||||||
"code.gitea.io/gitea/modules/log"
|
"code.gitea.io/gitea/modules/log"
|
||||||
|
"code.gitea.io/gitea/modules/setting"
|
||||||
|
|
||||||
"github.com/gliderlabs/ssh"
|
"github.com/gliderlabs/ssh"
|
||||||
)
|
)
|
||||||
|
|
||||||
func listen(server *ssh.Server) {
|
func listen(server *ssh.Server) {
|
||||||
gracefulServer := graceful.NewServer("tcp", server.Addr, "SSH")
|
gracefulServer := graceful.NewServer("tcp", server.Addr, "SSH")
|
||||||
|
gracefulServer.PerWriteTimeout = setting.SSH.PerWriteTimeout
|
||||||
|
gracefulServer.PerWritePerKbTimeout = setting.SSH.PerWritePerKbTimeout
|
||||||
|
|
||||||
err := gracefulServer.ListenAndServe(server.Serve)
|
err := gracefulServer.ListenAndServe(server.Serve)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
|
|||||||
@@ -31,6 +31,8 @@ type CreateOrgOption struct {
|
|||||||
RepoAdminChangeTeamAccess bool `json:"repo_admin_change_team_access"`
|
RepoAdminChangeTeamAccess bool `json:"repo_admin_change_team_access"`
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// TODO: make EditOrgOption fields optional after https://gitea.com/go-chi/binding/pulls/5 got merged
|
||||||
|
|
||||||
// EditOrgOption options for editing an organization
|
// EditOrgOption options for editing an organization
|
||||||
type EditOrgOption struct {
|
type EditOrgOption struct {
|
||||||
FullName string `json:"full_name"`
|
FullName string `json:"full_name"`
|
||||||
@@ -40,5 +42,5 @@ type EditOrgOption struct {
|
|||||||
// possible values are `public`, `limited` or `private`
|
// possible values are `public`, `limited` or `private`
|
||||||
// enum: public,limited,private
|
// enum: public,limited,private
|
||||||
Visibility string `json:"visibility" binding:"In(,public,limited,private)"`
|
Visibility string `json:"visibility" binding:"In(,public,limited,private)"`
|
||||||
RepoAdminChangeTeamAccess bool `json:"repo_admin_change_team_access"`
|
RepoAdminChangeTeamAccess *bool `json:"repo_admin_change_team_access"`
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -13,8 +13,11 @@ import (
|
|||||||
"code.gitea.io/gitea/modules/migrations/base"
|
"code.gitea.io/gitea/modules/migrations/base"
|
||||||
"code.gitea.io/gitea/modules/queue"
|
"code.gitea.io/gitea/modules/queue"
|
||||||
repo_module "code.gitea.io/gitea/modules/repository"
|
repo_module "code.gitea.io/gitea/modules/repository"
|
||||||
|
"code.gitea.io/gitea/modules/secret"
|
||||||
|
"code.gitea.io/gitea/modules/setting"
|
||||||
"code.gitea.io/gitea/modules/structs"
|
"code.gitea.io/gitea/modules/structs"
|
||||||
"code.gitea.io/gitea/modules/timeutil"
|
"code.gitea.io/gitea/modules/timeutil"
|
||||||
|
"code.gitea.io/gitea/modules/util"
|
||||||
jsoniter "github.com/json-iterator/go"
|
jsoniter "github.com/json-iterator/go"
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -65,6 +68,24 @@ func MigrateRepository(doer, u *models.User, opts base.MigrateOptions) error {
|
|||||||
|
|
||||||
// CreateMigrateTask creates a migrate task
|
// CreateMigrateTask creates a migrate task
|
||||||
func CreateMigrateTask(doer, u *models.User, opts base.MigrateOptions) (*models.Task, error) {
|
func CreateMigrateTask(doer, u *models.User, opts base.MigrateOptions) (*models.Task, error) {
|
||||||
|
// encrypt credentials for persistence
|
||||||
|
var err error
|
||||||
|
opts.CloneAddrEncrypted, err = secret.EncryptSecret(setting.SecretKey, opts.CloneAddr)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
opts.CloneAddr = util.SanitizeURLCredentials(opts.CloneAddr, true)
|
||||||
|
opts.AuthPasswordEncrypted, err = secret.EncryptSecret(setting.SecretKey, opts.AuthPassword)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
opts.AuthPassword = ""
|
||||||
|
opts.AuthTokenEncrypted, err = secret.EncryptSecret(setting.SecretKey, opts.AuthToken)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
opts.AuthToken = ""
|
||||||
|
|
||||||
json := jsoniter.ConfigCompatibleWithStandardLibrary
|
json := jsoniter.ConfigCompatibleWithStandardLibrary
|
||||||
bs, err := json.Marshal(&opts)
|
bs, err := json.Marshal(&opts)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
|
|||||||
@@ -149,7 +149,7 @@ func SetCookie(resp http.ResponseWriter, name string, value string, others ...in
|
|||||||
if len(others) > 2 {
|
if len(others) > 2 {
|
||||||
if v, ok := others[2].(string); ok && len(v) > 0 {
|
if v, ok := others[2].(string); ok && len(v) > 0 {
|
||||||
cookie.Domain = v
|
cookie.Domain = v
|
||||||
} else if v, ok := others[1].(func(*http.Cookie)); ok {
|
} else if v, ok := others[2].(func(*http.Cookie)); ok {
|
||||||
v(&cookie)
|
v(&cookie)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -170,7 +170,7 @@ func SetCookie(resp http.ResponseWriter, name string, value string, others ...in
|
|||||||
if len(others) > 4 {
|
if len(others) > 4 {
|
||||||
if v, ok := others[4].(bool); ok && v {
|
if v, ok := others[4].(bool); ok && v {
|
||||||
cookie.HttpOnly = true
|
cookie.HttpOnly = true
|
||||||
} else if v, ok := others[1].(func(*http.Cookie)); ok {
|
} else if v, ok := others[4].(func(*http.Cookie)); ok {
|
||||||
v(&cookie)
|
v(&cookie)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -179,7 +179,7 @@ func SetCookie(resp http.ResponseWriter, name string, value string, others ...in
|
|||||||
if v, ok := others[5].(time.Time); ok {
|
if v, ok := others[5].(time.Time); ok {
|
||||||
cookie.Expires = v
|
cookie.Expires = v
|
||||||
cookie.RawExpires = v.Format(time.UnixDate)
|
cookie.RawExpires = v.Format(time.UnixDate)
|
||||||
} else if v, ok := others[1].(func(*http.Cookie)); ok {
|
} else if v, ok := others[5].(func(*http.Cookie)); ok {
|
||||||
v(&cookie)
|
v(&cookie)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -2281,6 +2281,7 @@ auths.allowed_domains_helper = Leave empty to allow all domains. Separate multip
|
|||||||
auths.enable_tls = Enable TLS Encryption
|
auths.enable_tls = Enable TLS Encryption
|
||||||
auths.skip_tls_verify = Skip TLS Verify
|
auths.skip_tls_verify = Skip TLS Verify
|
||||||
auths.pam_service_name = PAM Service Name
|
auths.pam_service_name = PAM Service Name
|
||||||
|
auths.pam_email_domain = PAM Email Domain (optional)
|
||||||
auths.oauth2_provider = OAuth2 Provider
|
auths.oauth2_provider = OAuth2 Provider
|
||||||
auths.oauth2_icon_url = Icon URL
|
auths.oauth2_icon_url = Icon URL
|
||||||
auths.oauth2_clientID = Client ID (Key)
|
auths.oauth2_clientID = Client ID (Key)
|
||||||
|
|||||||
@@ -239,6 +239,7 @@ func NewAuthSourcePost(ctx *context.Context) {
|
|||||||
case models.LoginPAM:
|
case models.LoginPAM:
|
||||||
config = &models.PAMConfig{
|
config = &models.PAMConfig{
|
||||||
ServiceName: form.PAMServiceName,
|
ServiceName: form.PAMServiceName,
|
||||||
|
EmailDomain: form.PAMEmailDomain,
|
||||||
}
|
}
|
||||||
case models.LoginOAuth2:
|
case models.LoginOAuth2:
|
||||||
config = parseOAuth2Config(form)
|
config = parseOAuth2Config(form)
|
||||||
@@ -346,6 +347,7 @@ func EditAuthSourcePost(ctx *context.Context) {
|
|||||||
case models.LoginPAM:
|
case models.LoginPAM:
|
||||||
config = &models.PAMConfig{
|
config = &models.PAMConfig{
|
||||||
ServiceName: form.PAMServiceName,
|
ServiceName: form.PAMServiceName,
|
||||||
|
EmailDomain: form.PAMEmailDomain,
|
||||||
}
|
}
|
||||||
case models.LoginOAuth2:
|
case models.LoginOAuth2:
|
||||||
config = parseOAuth2Config(form)
|
config = parseOAuth2Config(form)
|
||||||
|
|||||||
@@ -46,6 +46,10 @@ func DeleteRepo(ctx *context.Context) {
|
|||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
|
if ctx.Repo != nil && ctx.Repo.GitRepo != nil && ctx.Repo.Repository != nil && ctx.Repo.Repository.ID == repo.ID {
|
||||||
|
ctx.Repo.GitRepo.Close()
|
||||||
|
}
|
||||||
|
|
||||||
if err := repo_service.DeleteRepository(ctx.User, repo); err != nil {
|
if err := repo_service.DeleteRepository(ctx.User, repo); err != nil {
|
||||||
ctx.ServerError("DeleteRepository", err)
|
ctx.ServerError("DeleteRepository", err)
|
||||||
return
|
return
|
||||||
|
|||||||
@@ -557,6 +557,7 @@ func Routes() *web.Route {
|
|||||||
Gclifetime: setting.SessionConfig.Gclifetime,
|
Gclifetime: setting.SessionConfig.Gclifetime,
|
||||||
Maxlifetime: setting.SessionConfig.Maxlifetime,
|
Maxlifetime: setting.SessionConfig.Maxlifetime,
|
||||||
Secure: setting.SessionConfig.Secure,
|
Secure: setting.SessionConfig.Secure,
|
||||||
|
SameSite: setting.SessionConfig.SameSite,
|
||||||
Domain: setting.SessionConfig.Domain,
|
Domain: setting.SessionConfig.Domain,
|
||||||
}))
|
}))
|
||||||
m.Use(securityHeaders())
|
m.Use(securityHeaders())
|
||||||
@@ -892,7 +893,7 @@ func Routes() *web.Route {
|
|||||||
Post(reqToken(), mustNotBeArchived, bind(api.CreatePullRequestOption{}), repo.CreatePullRequest)
|
Post(reqToken(), mustNotBeArchived, bind(api.CreatePullRequestOption{}), repo.CreatePullRequest)
|
||||||
m.Group("/{index}", func() {
|
m.Group("/{index}", func() {
|
||||||
m.Combo("").Get(repo.GetPullRequest).
|
m.Combo("").Get(repo.GetPullRequest).
|
||||||
Patch(reqToken(), reqRepoWriter(models.UnitTypePullRequests), bind(api.EditPullRequestOption{}), repo.EditPullRequest)
|
Patch(reqToken(), bind(api.EditPullRequestOption{}), repo.EditPullRequest)
|
||||||
m.Get(".diff", repo.DownloadPullDiff)
|
m.Get(".diff", repo.DownloadPullDiff)
|
||||||
m.Get(".patch", repo.DownloadPullPatch)
|
m.Get(".patch", repo.DownloadPullPatch)
|
||||||
m.Post("/update", reqToken(), repo.UpdatePullRequest)
|
m.Post("/update", reqToken(), repo.UpdatePullRequest)
|
||||||
@@ -985,10 +986,10 @@ func Routes() *web.Route {
|
|||||||
Delete(reqToken(), reqOrgMembership(), org.ConcealMember)
|
Delete(reqToken(), reqOrgMembership(), org.ConcealMember)
|
||||||
})
|
})
|
||||||
m.Group("/teams", func() {
|
m.Group("/teams", func() {
|
||||||
m.Combo("", reqToken()).Get(org.ListTeams).
|
m.Get("", org.ListTeams)
|
||||||
Post(reqOrgOwnership(), bind(api.CreateTeamOption{}), org.CreateTeam)
|
m.Post("", reqOrgOwnership(), bind(api.CreateTeamOption{}), org.CreateTeam)
|
||||||
m.Get("/search", org.SearchTeam)
|
m.Get("/search", org.SearchTeam)
|
||||||
}, reqOrgMembership())
|
}, reqToken(), reqOrgMembership())
|
||||||
m.Group("/labels", func() {
|
m.Group("/labels", func() {
|
||||||
m.Get("", org.ListLabels)
|
m.Get("", org.ListLabels)
|
||||||
m.Post("", reqToken(), reqOrgOwnership(), bind(api.CreateLabelOption{}), org.CreateLabel)
|
m.Post("", reqToken(), reqOrgOwnership(), bind(api.CreateLabelOption{}), org.CreateLabel)
|
||||||
|
|||||||
@@ -264,7 +264,13 @@ func Edit(ctx *context.APIContext) {
|
|||||||
if form.Visibility != "" {
|
if form.Visibility != "" {
|
||||||
org.Visibility = api.VisibilityModes[form.Visibility]
|
org.Visibility = api.VisibilityModes[form.Visibility]
|
||||||
}
|
}
|
||||||
if err := models.UpdateUserCols(org, "full_name", "description", "website", "location", "visibility"); err != nil {
|
if form.RepoAdminChangeTeamAccess != nil {
|
||||||
|
org.RepoAdminChangeTeamAccess = *form.RepoAdminChangeTeamAccess
|
||||||
|
}
|
||||||
|
if err := models.UpdateUserCols(org,
|
||||||
|
"full_name", "description", "website", "location",
|
||||||
|
"visibility", "repo_admin_change_team_access",
|
||||||
|
); err != nil {
|
||||||
ctx.Error(http.StatusInternalServerError, "EditOrganization", err)
|
ctx.Error(http.StatusInternalServerError, "EditOrganization", err)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -6,6 +6,7 @@
|
|||||||
package repo
|
package repo
|
||||||
|
|
||||||
import (
|
import (
|
||||||
|
"errors"
|
||||||
"fmt"
|
"fmt"
|
||||||
"net/http"
|
"net/http"
|
||||||
|
|
||||||
@@ -13,7 +14,6 @@ import (
|
|||||||
"code.gitea.io/gitea/modules/context"
|
"code.gitea.io/gitea/modules/context"
|
||||||
"code.gitea.io/gitea/modules/convert"
|
"code.gitea.io/gitea/modules/convert"
|
||||||
"code.gitea.io/gitea/modules/git"
|
"code.gitea.io/gitea/modules/git"
|
||||||
"code.gitea.io/gitea/modules/log"
|
|
||||||
repo_module "code.gitea.io/gitea/modules/repository"
|
repo_module "code.gitea.io/gitea/modules/repository"
|
||||||
api "code.gitea.io/gitea/modules/structs"
|
api "code.gitea.io/gitea/modules/structs"
|
||||||
"code.gitea.io/gitea/modules/web"
|
"code.gitea.io/gitea/modules/web"
|
||||||
@@ -117,62 +117,20 @@ func DeleteBranch(ctx *context.APIContext) {
|
|||||||
|
|
||||||
branchName := ctx.Params("*")
|
branchName := ctx.Params("*")
|
||||||
|
|
||||||
if ctx.Repo.Repository.DefaultBranch == branchName {
|
if err := repo_service.DeleteBranch(ctx.User, ctx.Repo.Repository, ctx.Repo.GitRepo, branchName); err != nil {
|
||||||
ctx.Error(http.StatusForbidden, "DefaultBranch", fmt.Errorf("can not delete default branch"))
|
switch {
|
||||||
return
|
case git.IsErrBranchNotExist(err):
|
||||||
}
|
|
||||||
|
|
||||||
isProtected, err := ctx.Repo.Repository.IsProtectedBranch(branchName, ctx.User)
|
|
||||||
if err != nil {
|
|
||||||
ctx.InternalServerError(err)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
if isProtected {
|
|
||||||
ctx.Error(http.StatusForbidden, "IsProtectedBranch", fmt.Errorf("branch protected"))
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
branch, err := repo_module.GetBranch(ctx.Repo.Repository, branchName)
|
|
||||||
if err != nil {
|
|
||||||
if git.IsErrBranchNotExist(err) {
|
|
||||||
ctx.NotFound(err)
|
ctx.NotFound(err)
|
||||||
} else {
|
case errors.Is(err, repo_service.ErrBranchIsDefault):
|
||||||
ctx.Error(http.StatusInternalServerError, "GetBranch", err)
|
ctx.Error(http.StatusForbidden, "DefaultBranch", fmt.Errorf("can not delete default branch"))
|
||||||
|
case errors.Is(err, repo_service.ErrBranchIsProtected):
|
||||||
|
ctx.Error(http.StatusForbidden, "IsProtectedBranch", fmt.Errorf("branch protected"))
|
||||||
|
default:
|
||||||
|
ctx.Error(http.StatusInternalServerError, "DeleteBranch", err)
|
||||||
}
|
}
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
c, err := branch.GetCommit()
|
|
||||||
if err != nil {
|
|
||||||
ctx.Error(http.StatusInternalServerError, "GetCommit", err)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
if err := ctx.Repo.GitRepo.DeleteBranch(branchName, git.DeleteBranchOptions{
|
|
||||||
Force: true,
|
|
||||||
}); err != nil {
|
|
||||||
ctx.Error(http.StatusInternalServerError, "DeleteBranch", err)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// Don't return error below this
|
|
||||||
if err := repo_service.PushUpdate(
|
|
||||||
&repo_module.PushUpdateOptions{
|
|
||||||
RefFullName: git.BranchPrefix + branchName,
|
|
||||||
OldCommitID: c.ID.String(),
|
|
||||||
NewCommitID: git.EmptySHA,
|
|
||||||
PusherID: ctx.User.ID,
|
|
||||||
PusherName: ctx.User.Name,
|
|
||||||
RepoUserName: ctx.Repo.Owner.Name,
|
|
||||||
RepoName: ctx.Repo.Repository.Name,
|
|
||||||
}); err != nil {
|
|
||||||
log.Error("Update: %v", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
if err := ctx.Repo.Repository.AddDeletedBranch(branchName, c.ID.String(), ctx.User.ID); err != nil {
|
|
||||||
log.Warn("AddDeletedBranch: %v", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
ctx.Status(http.StatusNoContent)
|
ctx.Status(http.StatusNoContent)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@@ -885,6 +885,10 @@ func Delete(ctx *context.APIContext) {
|
|||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
|
if ctx.Repo.GitRepo != nil {
|
||||||
|
ctx.Repo.GitRepo.Close()
|
||||||
|
}
|
||||||
|
|
||||||
if err := repo_service.DeleteRepository(ctx.User, repo); err != nil {
|
if err := repo_service.DeleteRepository(ctx.User, repo); err != nil {
|
||||||
ctx.Error(http.StatusInternalServerError, "DeleteRepository", err)
|
ctx.Error(http.StatusInternalServerError, "DeleteRepository", err)
|
||||||
return
|
return
|
||||||
|
|||||||
@@ -17,7 +17,7 @@ func GetUserByParamsName(ctx *context.APIContext, name string) *models.User {
|
|||||||
user, err := models.GetUserByName(username)
|
user, err := models.GetUserByName(username)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
if models.IsErrUserNotExist(err) {
|
if models.IsErrUserNotExist(err) {
|
||||||
if redirectUserID, err := models.LookupUserRedirect(username); err == nil {
|
if redirectUserID, err2 := models.LookupUserRedirect(username); err2 == nil {
|
||||||
context.RedirectToUser(ctx.Context, username, redirectUserID)
|
context.RedirectToUser(ctx.Context, username, redirectUserID)
|
||||||
} else {
|
} else {
|
||||||
ctx.NotFound("GetUserByName", err)
|
ctx.NotFound("GetUserByName", err)
|
||||||
|
|||||||
@@ -55,7 +55,7 @@ func parseTime(value string) (int64, error) {
|
|||||||
// prepareQueryArg unescape and trim a query arg
|
// prepareQueryArg unescape and trim a query arg
|
||||||
func prepareQueryArg(ctx *context.APIContext, name string) (value string, err error) {
|
func prepareQueryArg(ctx *context.APIContext, name string) (value string, err error) {
|
||||||
value, err = url.PathUnescape(ctx.Query(name))
|
value, err = url.PathUnescape(ctx.Query(name))
|
||||||
value = strings.Trim(value, " ")
|
value = strings.TrimSpace(value)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@@ -22,6 +22,7 @@ import (
 	"code.gitea.io/gitea/modules/log"
 	"code.gitea.io/gitea/modules/setting"
 	"code.gitea.io/gitea/modules/templates"
+	"code.gitea.io/gitea/modules/translation"
 	"code.gitea.io/gitea/modules/user"
 	"code.gitea.io/gitea/modules/util"
 	"code.gitea.io/gitea/modules/web"
@@ -61,6 +62,8 @@ func InstallInit(next http.Handler) http.Handler {
 			"DbOptions":     setting.SupportedDatabases,
 			"i18n":          locale,
 			"Language":      locale.Language(),
+			"Lang":          locale.Language(),
+			"AllLangs":      translation.AllLangs(),
 			"CurrentURL":    setting.AppSubURL + req.URL.RequestURI(),
 			"PageStartTime": startTime,
 			"TmplLoadTimes": func() string {
@@ -69,6 +72,12 @@ func InstallInit(next http.Handler) http.Handler {
 				"PasswordHashAlgorithms": models.AvailableHashAlgorithms,
 			},
 		}
+		for _, lang := range translation.AllLangs() {
+			if lang.Lang == locale.Language() {
+				ctx.Data["LangName"] = lang.Name
+				break
+			}
+		}
 		ctx.Req = context.WithContext(req, &ctx)
 		next.ServeHTTP(resp, ctx.Req)
 	})
@@ -51,6 +51,7 @@ func SettingsPost(ctx *context.Context) {
 	}

 	org := ctx.Org.Organization
+	nameChanged := org.Name != form.Name

 	// Check if organization name has been changed.
 	if org.LowerName != strings.ToLower(form.Name) {
@@ -74,7 +75,9 @@ func SettingsPost(ctx *context.Context) {
 		// reset ctx.org.OrgLink with new name
 		ctx.Org.OrgLink = setting.AppSubURL + "/org/" + form.Name
 		log.Trace("Organization name changed: %s -> %s", org.Name, form.Name)
+		nameChanged = false
 	}

 	// In case it's just a case change.
 	org.Name = form.Name
 	org.LowerName = strings.ToLower(form.Name)
@@ -104,11 +107,17 @@ func SettingsPost(ctx *context.Context) {
 			return
 		}
 		for _, repo := range org.Repos {
+			repo.OwnerName = org.Name
 			if err := models.UpdateRepository(repo, true); err != nil {
 				ctx.ServerError("UpdateRepository", err)
 				return
 			}
 		}
+	} else if nameChanged {
+		if err := models.UpdateRepositoryOwnerNames(org.ID, org.Name); err != nil {
+			ctx.ServerError("UpdateRepository", err)
+			return
+		}
 	}

 	log.Trace("Organization setting updated: %s", org.Name)
@@ -69,6 +69,7 @@ func Routes() *web.Route {
 	r.Post("/manager/add-logger", bind(private.LoggerOptions{}), AddLogger)
 	r.Post("/manager/remove-logger/{group}/{name}", RemoveLogger)
 	r.Post("/mail/send", SendEmail)
+	r.Post("/restore_repo", RestoreRepo)

 	return r
 }
routers/private/restore_repo.go (new file, 51 lines)
@@ -0,0 +1,51 @@
+// Copyright 2021 The Gitea Authors. All rights reserved.
+// Use of this source code is governed by a MIT-style
+// license that can be found in the LICENSE file.
+
+package private
+
+import (
+	"io/ioutil"
+
+	myCtx "code.gitea.io/gitea/modules/context"
+	"code.gitea.io/gitea/modules/migrations"
+	jsoniter "github.com/json-iterator/go"
+)
+
+// RestoreRepo restore a repository from data
+func RestoreRepo(ctx *myCtx.PrivateContext) {
+	json := jsoniter.ConfigCompatibleWithStandardLibrary
+	bs, err := ioutil.ReadAll(ctx.Req.Body)
+	if err != nil {
+		ctx.JSON(500, map[string]string{
+			"err": err.Error(),
+		})
+		return
+	}
+	var params = struct {
+		RepoDir   string
+		OwnerName string
+		RepoName  string
+		Units     []string
+	}{}
+	if err = json.Unmarshal(bs, &params); err != nil {
+		ctx.JSON(500, map[string]string{
+			"err": err.Error(),
+		})
+		return
+	}
+
+	if err := migrations.RestoreRepository(
+		ctx.Req.Context(),
+		params.RepoDir,
+		params.OwnerName,
+		params.RepoName,
+		params.Units,
+	); err != nil {
+		ctx.JSON(500, map[string]string{
+			"err": err.Error(),
+		})
+	} else {
+		ctx.Status(200)
+	}
+}
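For a rough sense of how this new private endpoint is driven, here is an illustrative client sketch (not part of the change): it builds the same JSON payload that the params struct above decodes and POSTs it. The base URL, path prefix and the omitted internal-token authentication are assumptions; the real caller lives in Gitea's own private-API client code.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Mirrors the params struct decoded by RestoreRepo above.
	payload, err := json.Marshal(struct {
		RepoDir   string
		OwnerName string
		RepoName  string
		Units     []string
	}{
		RepoDir:   "/tmp/repo-dump", // hypothetical dump directory
		OwnerName: "user2",          // hypothetical owner
		RepoName:  "repo1",          // hypothetical repository
		Units:     []string{"issues", "wiki"},
	})
	if err != nil {
		panic(err)
	}

	// Hypothetical URL; the private API also requires the internal token,
	// which is omitted here for brevity.
	resp, err := http.Post("http://localhost:3000/api/internal/restore_repo",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```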
@@ -6,6 +6,7 @@
 package repo

 import (
+	"errors"
 	"fmt"
 	"strings"

@@ -82,34 +83,23 @@ func Branches(ctx *context.Context) {
 func DeleteBranchPost(ctx *context.Context) {
 	defer redirect(ctx)
 	branchName := ctx.Query("name")
-	if branchName == ctx.Repo.Repository.DefaultBranch {
-		log.Debug("DeleteBranch: Can't delete default branch '%s'", branchName)
-		ctx.Flash.Error(ctx.Tr("repo.branch.default_deletion_failed", branchName))
-		return
-	}
-
-	isProtected, err := ctx.Repo.Repository.IsProtectedBranch(branchName, ctx.User)
-	if err != nil {
-		log.Error("DeleteBranch: %v", err)
-		ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", branchName))
-		return
-	}
-
-	if isProtected {
-		log.Debug("DeleteBranch: Can't delete protected branch '%s'", branchName)
-		ctx.Flash.Error(ctx.Tr("repo.branch.protected_deletion_failed", branchName))
-		return
-	}
-
-	if !ctx.Repo.GitRepo.IsBranchExist(branchName) {
-		log.Debug("DeleteBranch: Can't delete non existing branch '%s'", branchName)
-		ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", branchName))
-		return
-	}
-
-	if err := deleteBranch(ctx, branchName); err != nil {
-		log.Error("DeleteBranch: %v", err)
-		ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", branchName))
+	if err := repo_service.DeleteBranch(ctx.User, ctx.Repo.Repository, ctx.Repo.GitRepo, branchName); err != nil {
+		switch {
+		case git.IsErrBranchNotExist(err):
+			log.Debug("DeleteBranch: Can't delete non existing branch '%s'", branchName)
+			ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", branchName))
+		case errors.Is(err, repo_service.ErrBranchIsDefault):
+			log.Debug("DeleteBranch: Can't delete default branch '%s'", branchName)
+			ctx.Flash.Error(ctx.Tr("repo.branch.default_deletion_failed", branchName))
+		case errors.Is(err, repo_service.ErrBranchIsProtected):
+			log.Debug("DeleteBranch: Can't delete protected branch '%s'", branchName)
+			ctx.Flash.Error(ctx.Tr("repo.branch.protected_deletion_failed", branchName))
+		default:
+			log.Error("DeleteBranch: %v", err)
+			ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", branchName))
+		}
 		return
 	}

@@ -168,41 +158,6 @@ func redirect(ctx *context.Context) {
 	})
 }

-func deleteBranch(ctx *context.Context, branchName string) error {
-	commit, err := ctx.Repo.GitRepo.GetBranchCommit(branchName)
-	if err != nil {
-		log.Error("GetBranchCommit: %v", err)
-		return err
-	}
-
-	if err := ctx.Repo.GitRepo.DeleteBranch(branchName, git.DeleteBranchOptions{
-		Force: true,
-	}); err != nil {
-		log.Error("DeleteBranch: %v", err)
-		return err
-	}
-
-	// Don't return error below this
-	if err := repo_service.PushUpdate(
-		&repo_module.PushUpdateOptions{
-			RefFullName:  git.BranchPrefix + branchName,
-			OldCommitID:  commit.ID.String(),
-			NewCommitID:  git.EmptySHA,
-			PusherID:     ctx.User.ID,
-			PusherName:   ctx.User.Name,
-			RepoUserName: ctx.Repo.Owner.Name,
-			RepoName:     ctx.Repo.Repository.Name,
-		}); err != nil {
-		log.Error("Update: %v", err)
-	}
-
-	if err := ctx.Repo.Repository.AddDeletedBranch(branchName, commit.ID.String(), ctx.User.ID); err != nil {
-		log.Warn("AddDeletedBranch: %v", err)
-	}
-
-	return nil
-}
-
 // loadBranches loads branches from the repository limited by page & pageSize.
 // NOTE: May write to context on error.
 func loadBranches(ctx *context.Context, skip, limit int) ([]*Branch, int) {
@@ -447,7 +447,26 @@ func (h *serviceHandler) setHeaderCacheForever() {
 	h.w.Header().Set("Cache-Control", "public, max-age=31536000")
 }

+func containsParentDirectorySeparator(v string) bool {
+	if !strings.Contains(v, "..") {
+		return false
+	}
+	for _, ent := range strings.FieldsFunc(v, isSlashRune) {
+		if ent == ".." {
+			return true
+		}
+	}
+	return false
+}
+
+func isSlashRune(r rune) bool { return r == '/' || r == '\\' }
+
 func (h *serviceHandler) sendFile(contentType, file string) {
+	if containsParentDirectorySeparator(file) {
+		log.Error("request file path contains invalid path: %v", file)
+		h.w.WriteHeader(http.StatusBadRequest)
+		return
+	}
 	reqFile := path.Join(h.dir, file)

 	fi, err := os.Stat(reqFile)
routers/repo/http_test.go (new file, 43 lines)
@@ -0,0 +1,43 @@
+// Copyright 2021 The Gitea Authors. All rights reserved.
+// Use of this source code is governed by a MIT-style
+// license that can be found in the LICENSE file.
+
+package repo
+
+import (
+	"testing"
+
+	"github.com/stretchr/testify/assert"
+)
+
+func TestContainsParentDirectorySeparator(t *testing.T) {
+	tests := []struct {
+		v string
+		b bool
+	}{
+		{
+			v: `user2/repo1/info/refs`,
+			b: false,
+		},
+		{
+			v: `user2/repo1/HEAD`,
+			b: false,
+		},
+		{
+			v: `user2/repo1/some.../strange_file...mp3`,
+			b: false,
+		},
+		{
+			v: `user2/repo1/../../custom/conf/app.ini`,
+			b: true,
+		},
+		{
+			v: `user2/repo1/objects/info/..\..\..\..\custom\conf\app.ini`,
+			b: true,
+		},
+	}
+
+	for i := range tests {
+		assert.EqualValues(t, tests[i].b, containsParentDirectorySeparator(tests[i].v))
+	}
+}
@@ -9,6 +9,7 @@ package repo
 import (
 	"container/list"
 	"crypto/subtle"
+	"errors"
 	"fmt"
 	"net/http"
 	"path"
@@ -22,7 +23,6 @@ import (
 	"code.gitea.io/gitea/modules/git"
 	"code.gitea.io/gitea/modules/log"
 	"code.gitea.io/gitea/modules/notification"
-	repo_module "code.gitea.io/gitea/modules/repository"
 	"code.gitea.io/gitea/modules/setting"
 	"code.gitea.io/gitea/modules/structs"
 	"code.gitea.io/gitea/modules/upload"
@@ -1186,20 +1186,6 @@ func CleanUpPullRequest(ctx *context.Context) {
 		})
 	}()

-	if pr.HeadBranch == pr.HeadRepo.DefaultBranch || !gitRepo.IsBranchExist(pr.HeadBranch) {
-		ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", fullBranchName))
-		return
-	}
-
-	// Check if branch is not protected
-	if protected, err := pr.HeadRepo.IsProtectedBranch(pr.HeadBranch, ctx.User); err != nil || protected {
-		if err != nil {
-			log.Error("HeadRepo.IsProtectedBranch: %v", err)
-		}
-		ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", fullBranchName))
-		return
-	}
-
 	// Check if branch has no new commits
 	headCommitID, err := gitBaseRepo.GetRefCommitID(pr.GetGitRefName())
 	if err != nil {
@@ -1218,27 +1204,21 @@ func CleanUpPullRequest(ctx *context.Context) {
 		return
 	}

-	if err := gitRepo.DeleteBranch(pr.HeadBranch, git.DeleteBranchOptions{
-		Force: true,
-	}); err != nil {
-		log.Error("DeleteBranch: %v", err)
-		ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", fullBranchName))
+	if err := repo_service.DeleteBranch(ctx.User, pr.HeadRepo, gitRepo, pr.HeadBranch); err != nil {
+		switch {
+		case git.IsErrBranchNotExist(err):
+			ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", fullBranchName))
+		case errors.Is(err, repo_service.ErrBranchIsDefault):
+			ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", fullBranchName))
+		case errors.Is(err, repo_service.ErrBranchIsProtected):
+			ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", fullBranchName))
+		default:
+			log.Error("DeleteBranch: %v", err)
+			ctx.Flash.Error(ctx.Tr("repo.branch.deletion_failed", fullBranchName))
+		}
 		return
 	}

-	if err := repo_service.PushUpdate(
-		&repo_module.PushUpdateOptions{
-			RefFullName:  git.BranchPrefix + pr.HeadBranch,
-			OldCommitID:  branchCommitID,
-			NewCommitID:  git.EmptySHA,
-			PusherID:     ctx.User.ID,
-			PusherName:   ctx.User.Name,
-			RepoUserName: pr.HeadRepo.Owner.Name,
-			RepoName:     pr.HeadRepo.Name,
-		}); err != nil {
-		log.Error("Update: %v", err)
-	}
-
 	if err := models.AddDeletePRBranchComment(ctx.User, pr.BaseRepo, issue.ID, pr.HeadBranch); err != nil {
 		// Do not fail here as branch has already been deleted
 		log.Error("DeleteBranch: %v", err)
@@ -539,6 +539,11 @@ func SettingsPost(ctx *context.Context) {
 			return
 		}

+		// Close the gitrepository before doing this.
+		if ctx.Repo.GitRepo != nil {
+			ctx.Repo.GitRepo.Close()
+		}
+
 		if err := repo_service.DeleteRepository(ctx.User, ctx.Repo.Repository); err != nil {
 			ctx.ServerError("DeleteRepository", err)
 			return
routers/routes/goget.go (new file, 86 lines)
@@ -0,0 +1,86 @@
+// Copyright 2021 The Gitea Authors. All rights reserved.
+// Use of this source code is governed by a MIT-style
+// license that can be found in the LICENSE file.
+
+package routes
+
+import (
+	"net/http"
+	"net/url"
+	"path"
+	"strings"
+
+	"code.gitea.io/gitea/models"
+	"code.gitea.io/gitea/modules/context"
+	"code.gitea.io/gitea/modules/setting"
+	"code.gitea.io/gitea/modules/util"
+	"github.com/unknwon/com"
+)
+
+func goGet(ctx *context.Context) {
+	if ctx.Req.Method != "GET" || ctx.Query("go-get") != "1" || len(ctx.Req.URL.Query()) > 1 {
+		return
+	}
+
+	parts := strings.SplitN(ctx.Req.URL.EscapedPath(), "/", 4)
+
+	if len(parts) < 3 {
+		return
+	}
+
+	ownerName := parts[1]
+	repoName := parts[2]
+
+	// Quick responses appropriate go-get meta with status 200
+	// regardless of if user have access to the repository,
+	// or the repository does not exist at all.
+	// This is particular a workaround for "go get" command which does not respect
+	// .netrc file.
+
+	trimmedRepoName := strings.TrimSuffix(repoName, ".git")
+
+	if ownerName == "" || trimmedRepoName == "" {
+		_, _ = ctx.Write([]byte(`<!doctype html>
+<html>
+	<body>
+		invalid import path
+	</body>
+</html>
+`))
+		ctx.Status(400)
+		return
+	}
+	branchName := setting.Repository.DefaultBranch
+
+	repo, err := models.GetRepositoryByOwnerAndName(ownerName, repoName)
+	if err == nil && len(repo.DefaultBranch) > 0 {
+		branchName = repo.DefaultBranch
+	}
+	prefix := setting.AppURL + path.Join(url.PathEscape(ownerName), url.PathEscape(repoName), "src", "branch", util.PathEscapeSegments(branchName))
+
+	appURL, _ := url.Parse(setting.AppURL)
+
+	insecure := ""
+	if appURL.Scheme == string(setting.HTTP) {
+		insecure = "--insecure "
+	}
+	ctx.Header().Set("Content-Type", "text/html")
+	ctx.Status(http.StatusOK)
+	_, _ = ctx.Write([]byte(com.Expand(`<!doctype html>
+<html>
+	<head>
+		<meta name="go-import" content="{GoGetImport} git {CloneLink}">
+		<meta name="go-source" content="{GoGetImport} _ {GoDocDirectory} {GoDocFile}">
+	</head>
+	<body>
+		go get {Insecure}{GoGetImport}
+	</body>
+</html>
+`, map[string]string{
+		"GoGetImport":    context.ComposeGoGetImport(ownerName, trimmedRepoName),
+		"CloneLink":      models.ComposeHTTPSCloneURL(ownerName, repoName),
+		"GoDocDirectory": prefix + "{/dir}",
+		"GoDocFile":      prefix + "{/dir}/{file}#L{line}",
+		"Insecure":       insecure,
+	})))
+}
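To make the go-import meta output above concrete, here is a small, self-contained sketch of the same com.Expand call with hypothetical values standing in for ComposeGoGetImport and ComposeHTTPSCloneURL; it is not part of the change itself.

```go
package main

import (
	"fmt"

	"github.com/unknwon/com"
)

func main() {
	// Hypothetical values; in the handler above they come from
	// context.ComposeGoGetImport and models.ComposeHTTPSCloneURL.
	meta := com.Expand(`<meta name="go-import" content="{GoGetImport} git {CloneLink}">`, map[string]string{
		"GoGetImport": "try.gitea.io/user2/repo1",
		"CloneLink":   "https://try.gitea.io/user2/repo1.git",
	})
	fmt.Println(meta)
	// Prints the tag `go get` consumes to find the clone URL for the import path.
}
```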
@@ -89,6 +89,7 @@ func InstallRoutes() *web.Route {
 		Gclifetime:  setting.SessionConfig.Gclifetime,
 		Maxlifetime: setting.SessionConfig.Maxlifetime,
 		Secure:      setting.SessionConfig.Secure,
+		SameSite:    setting.SessionConfig.SameSite,
 		Domain:      setting.SessionConfig.Domain,
 	}))

@@ -110,7 +111,7 @@ func InstallRoutes() *web.Route {
 	r.Get("/", routers.Install)
 	r.Post("/", web.Bind(forms.InstallForm{}), routers.InstallPost)
 	r.NotFound(func(w http.ResponseWriter, req *http.Request) {
-		http.Redirect(w, req, setting.AppURL, 302)
+		http.Redirect(w, req, setting.AppURL, http.StatusFound)
 	})
 	return r
 }
@@ -8,7 +8,6 @@ import (
 	"encoding/gob"
 	"fmt"
 	"net/http"
-	"net/url"
 	"os"
 	"path"
 	"strings"
@@ -24,7 +23,6 @@ import (
 	"code.gitea.io/gitea/modules/setting"
 	"code.gitea.io/gitea/modules/storage"
 	"code.gitea.io/gitea/modules/templates"
-	"code.gitea.io/gitea/modules/util"
 	"code.gitea.io/gitea/modules/validation"
 	"code.gitea.io/gitea/modules/web"
 	"code.gitea.io/gitea/routers"
@@ -51,7 +49,6 @@ import (
 	"github.com/go-chi/cors"
 	"github.com/prometheus/client_golang/prometheus"
 	"github.com/tstranex/u2f"
-	"github.com/unknwon/com"
 )

 const (
@@ -138,6 +135,7 @@ func WebRoutes() *web.Route {
 		Gclifetime:  setting.SessionConfig.Gclifetime,
 		Maxlifetime: setting.SessionConfig.Maxlifetime,
 		Secure:      setting.SessionConfig.Secure,
+		SameSite:    setting.SessionConfig.SameSite,
 		Domain:      setting.SessionConfig.Domain,
 	}))

@@ -190,6 +188,7 @@ func WebRoutes() *web.Route {
 		ctx.Data["UnitPullsGlobalDisabled"] = models.UnitTypePullRequests.UnitGlobalDisabled()
 		ctx.Data["UnitProjectsGlobalDisabled"] = models.UnitTypeProjects.UnitGlobalDisabled()
 	})
+	r.Use(goGet)

 	// for health check
 	r.Head("/", func(w http.ResponseWriter, req *http.Request) {
@@ -229,67 +228,6 @@ func WebRoutes() *web.Route {
 	return r
 }

-func goGet(ctx *context.Context) {
-	if ctx.Query("go-get") != "1" {
-		return
-	}
-
-	// Quick responses appropriate go-get meta with status 200
-	// regardless of if user have access to the repository,
-	// or the repository does not exist at all.
-	// This is particular a workaround for "go get" command which does not respect
-	// .netrc file.
-
-	ownerName := ctx.Params(":username")
-	repoName := ctx.Params(":reponame")
-	trimmedRepoName := strings.TrimSuffix(repoName, ".git")
-
-	if ownerName == "" || trimmedRepoName == "" {
-		_, _ = ctx.Write([]byte(`<!doctype html>
-<html>
-	<body>
-		invalid import path
-	</body>
-</html>
-`))
-		ctx.Status(400)
-		return
-	}
-	branchName := setting.Repository.DefaultBranch
-
-	repo, err := models.GetRepositoryByOwnerAndName(ownerName, repoName)
-	if err == nil && len(repo.DefaultBranch) > 0 {
-		branchName = repo.DefaultBranch
-	}
-	prefix := setting.AppURL + path.Join(url.PathEscape(ownerName), url.PathEscape(repoName), "src", "branch", util.PathEscapeSegments(branchName))
-
-	appURL, _ := url.Parse(setting.AppURL)
-
-	insecure := ""
-	if appURL.Scheme == string(setting.HTTP) {
-		insecure = "--insecure "
-	}
-	ctx.Header().Set("Content-Type", "text/html")
-	ctx.Status(http.StatusOK)
-	_, _ = ctx.Write([]byte(com.Expand(`<!doctype html>
-<html>
-	<head>
-		<meta name="go-import" content="{GoGetImport} git {CloneLink}">
-		<meta name="go-source" content="{GoGetImport} _ {GoDocDirectory} {GoDocFile}">
-	</head>
-	<body>
-		go get {Insecure}{GoGetImport}
-	</body>
-</html>
-`, map[string]string{
-		"GoGetImport":    context.ComposeGoGetImport(ownerName, trimmedRepoName),
-		"CloneLink":      models.ComposeHTTPSCloneURL(ownerName, repoName),
-		"GoDocDirectory": prefix + "{/dir}",
-		"GoDocFile":      prefix + "{/dir}/{file}#L{line}",
-		"Insecure":       insecure,
-	})))
-}
-
 // RegisterRoutes register routes
 func RegisterRoutes(m *web.Route) {
 	reqSignIn := context.Toggle(&context.ToggleOptions{SignInRequired: true})
@@ -1091,7 +1029,7 @@ func RegisterRoutes(m *web.Route) {
 	m.Group("/{username}", func() {
 		m.Group("/{reponame}", func() {
 			m.Get("", repo.SetEditorconfigIfExists, repo.Home)
-		}, goGet, ignSignIn, context.RepoAssignment, context.RepoRef(), context.UnitTypes())
+		}, ignSignIn, context.RepoAssignment, context.RepoRef(), context.UnitTypes())

 		m.Group("/{reponame}", func() {
 			m.Group("/info/lfs", func() {
@@ -67,8 +67,13 @@ func HandleUsernameChange(ctx *context.Context, user *models.User, newName strin
 			}
 			return err
 		}
-		log.Trace("User name changed: %s -> %s", user.Name, newName)
+	} else {
+		if err := models.UpdateRepositoryOwnerNames(user.ID, newName); err != nil {
+			ctx.ServerError("UpdateRepository", err)
+			return err
+		}
 	}
+	log.Trace("User name changed: %s -> %s", user.Name, newName)
 	return nil
 }

@@ -84,6 +89,7 @@ func ProfilePost(ctx *context.Context) {
 	}

 	if len(form.Name) != 0 && ctx.User.Name != form.Name {
+		log.Debug("Changing name for %s to %s", ctx.User.Name, form.Name)
 		if err := HandleUsernameChange(ctx, ctx.User, form.Name); err != nil {
 			ctx.Redirect(setting.AppSubURL + "/user/settings")
 			return
@@ -20,12 +20,16 @@ func mailParticipantsComment(c *models.Comment, opType models.ActionType, issue
 	for i, u := range mentions {
 		mentionedIDs[i] = u.ID
 	}
+	content := c.Content
+	if c.Type == models.CommentTypePullPush {
+		content = ""
+	}
 	if err = mailIssueCommentToParticipants(
 		&mailCommentContext{
 			Issue:      issue,
 			Doer:       c.Poster,
 			ActionType: opType,
-			Content:    c.Content,
+			Content:    content,
 			Comment:    c,
 		}, mentionedIDs); err != nil {
 		log.Error("mailIssueCommentToParticipants: %v", err)
@@ -158,12 +158,18 @@ func mailParticipants(issue *models.Issue, doer *models.User, opType models.Acti
 	for i, u := range mentions {
 		mentionedIDs[i] = u.ID
 	}
+	content := issue.Content
+	if opType == models.ActionCloseIssue || opType == models.ActionClosePullRequest ||
+		opType == models.ActionReopenIssue || opType == models.ActionReopenPullRequest ||
+		opType == models.ActionMergePullRequest {
+		content = ""
+	}
 	if err = mailIssueCommentToParticipants(
 		&mailCommentContext{
 			Issue:      issue,
 			Doer:       doer,
 			ActionType: opType,
-			Content:    issue.Content,
+			Content:    content,
 			Comment:    nil,
 		}, mentionedIDs); err != nil {
 		log.Error("mailIssueCommentToParticipants: %v", err)
services/repository/branch.go (new file, 72 lines)
@@ -0,0 +1,72 @@
+// Copyright 2021 The Gitea Authors. All rights reserved.
+// Use of this source code is governed by a MIT-style
+// license that can be found in the LICENSE file.
+
+package repository
+
+import (
+	"errors"
+
+	"code.gitea.io/gitea/models"
+	"code.gitea.io/gitea/modules/git"
+	"code.gitea.io/gitea/modules/log"
+	repo_module "code.gitea.io/gitea/modules/repository"
+	pull_service "code.gitea.io/gitea/services/pull"
+)
+
+// enmuerates all branch related errors
+var (
+	ErrBranchIsDefault   = errors.New("branch is default")
+	ErrBranchIsProtected = errors.New("branch is protected")
+)
+
+// DeleteBranch delete branch
+func DeleteBranch(doer *models.User, repo *models.Repository, gitRepo *git.Repository, branchName string) error {
+	if branchName == repo.DefaultBranch {
+		return ErrBranchIsDefault
+	}
+
+	isProtected, err := repo.IsProtectedBranch(branchName, doer)
+	if err != nil {
+		return err
+	}
+
+	if isProtected {
+		return ErrBranchIsProtected
+	}
+
+	commit, err := gitRepo.GetBranchCommit(branchName)
+	if err != nil {
+		return err
+	}
+
+	if err := gitRepo.DeleteBranch(branchName, git.DeleteBranchOptions{
+		Force: true,
+	}); err != nil {
+		return err
+	}
+
+	if err := pull_service.CloseBranchPulls(doer, repo.ID, branchName); err != nil {
+		return err
+	}
+
+	// Don't return error below this
+	if err := PushUpdate(
+		&repo_module.PushUpdateOptions{
+			RefFullName:  git.BranchPrefix + branchName,
+			OldCommitID:  commit.ID.String(),
+			NewCommitID:  git.EmptySHA,
+			PusherID:     doer.ID,
+			PusherName:   doer.Name,
+			RepoUserName: repo.OwnerName,
+			RepoName:     repo.Name,
+		}); err != nil {
+		log.Error("Update: %v", err)
+	}
+
+	if err := repo.AddDeletedBranch(branchName, commit.ID.String(), doer.ID); err != nil {
+		log.Warn("AddDeletedBranch: %v", err)
+	}
+
+	return nil
+}
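For reference, a minimal sketch of how a caller consumes the two sentinel errors this new service exports, mirroring the rewritten web and API handlers earlier in this diff; the function name is hypothetical and the doer, repository and git repository are assumed to be already loaded.

```go
package example

import (
	"errors"

	"code.gitea.io/gitea/models"
	"code.gitea.io/gitea/modules/git"
	repo_service "code.gitea.io/gitea/services/repository"
)

// deleteBranchForUser maps DeleteBranch's outcomes to user-facing messages,
// the same way DeleteBranchPost and the API handler above do.
func deleteBranchForUser(doer *models.User, repo *models.Repository, gitRepo *git.Repository, branch string) string {
	switch err := repo_service.DeleteBranch(doer, repo, gitRepo, branch); {
	case err == nil:
		return "branch deleted"
	case errors.Is(err, repo_service.ErrBranchIsDefault):
		return "refusing to delete the default branch"
	case errors.Is(err, repo_service.ErrBranchIsProtected):
		return "refusing to delete a protected branch"
	case git.IsErrBranchNotExist(err):
		return "branch does not exist"
	default:
		return "deletion failed: " + err.Error()
	}
}
```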
@@ -193,16 +193,17 @@ func pushUpdates(optsList []*repo_module.PushUpdateOptions) error {
 			}

 			commits = repo_module.ListToPushCommits(l)
+
+			if err := repofiles.UpdateIssuesCommit(pusher, repo, commits.Commits, refName); err != nil {
+				log.Error("updateIssuesCommit: %v", err)
+			}
+
 			if len(commits.Commits) > setting.UI.FeedMaxCommitNum {
 				commits.Commits = commits.Commits[:setting.UI.FeedMaxCommitNum]
 			}
 			commits.CompareURL = repo.ComposeCompareURL(opts.OldCommitID, opts.NewCommitID)
 			notification.NotifyPushCommits(pusher, repo, opts, commits)

-			if err := repofiles.UpdateIssuesCommit(pusher, repo, commits.Commits, refName); err != nil {
-				log.Error("updateIssuesCommit: %v", err)
-			}
-
 			if err = models.RemoveDeletedBranch(repo.ID, branch); err != nil {
 				log.Error("models.RemoveDeletedBranch %s/%s failed: %v", repo.ID, branch, err)
 			}
@@ -25,14 +25,14 @@ environment:
 apps:
   gitea:
     command: gitea
-    plugs: [network, network-bind]
+    plugs: [network, network-bind, removable-media]
   web:
     command: gitea web
     daemon: simple
-    plugs: [network, network-bind]
+    plugs: [network, network-bind, removable-media]
   dump:
     command: gitea dump
-    plugs: [home]
+    plugs: [home, removable-media]
   version:
     command: gitea --version
   sqlite:
@@ -188,6 +188,10 @@
       <label for="pam_service_name">{{.i18n.Tr "admin.auths.pam_service_name"}}</label>
       <input id="pam_service_name" name="pam_service_name" value="{{$cfg.ServiceName}}" required>
     </div>
+    <div class="field">
+      <label for="pam_email_domain">{{.i18n.Tr "admin.auths.pam_email_domain"}}</label>
+      <input id="pam_email_domain" name="pam_email_domain" value="{{$cfg.EmailDomain}}">
+    </div>
   {{end}}

   <!-- OAuth2 -->
@@ -38,6 +38,8 @@
   <div class="pam required field {{if not (eq .type 4)}}hide{{end}}">
     <label for="pam_service_name">{{.i18n.Tr "admin.auths.pam_service_name"}}</label>
     <input id="pam_service_name" name="pam_service_name" value="{{.pam_service_name}}" />
+    <label for="pam_email_domain">{{.i18n.Tr "admin.auths.pam_email_domain"}}</label>
+    <input id="pam_email_domain" name="pam_email_domain" value="{{.pam_email_domain}}">
   </div>

   <!-- OAuth2 -->
@@ -19,7 +19,7 @@
 <div class="menu transition" :class="{visible: menuVisible}" v-if="menuVisible" v-cloak>
   <div class="ui icon search input">
     <i class="icon df ac jc m-0">{{svg "octicon-filter" 16}}</i>
-    <input name="search" ref="searchField" v-model="searchTerm" @keydown="keydown($event)" placeholder="{{.i18n.Tr "repo.filter_branch_and_tag"}}...">
+    <input name="search" ref="searchField" autocomplete="off" v-model="searchTerm" @keydown="keydown($event)" placeholder="{{.i18n.Tr "repo.filter_branch_and_tag"}}...">
   </div>
   <div class="header branch-tag-choice">
     <div class="ui grid">
@@ -2,14 +2,9 @@
 <div class="page-content repository">
   {{template "repo/header" .}}
   <div class="ui container">
-    <div class="ui three column stackable grid">
+    <div class="ui two column stackable grid">
       <div class="column">
         <h1>{{.Milestone.Name}}</h1>
-        <div class="markdown content">
-          {{.Milestone.RenderedContent|Str2html}}
-        </div>
-      </div>
-      <div class="column center aligned">
       </div>
       {{if not .Repository.IsArchived}}
         <div class="column right aligned">
@@ -20,6 +15,11 @@
         </div>
       {{end}}
     </div>
+    <div class="ui one column stackable grid">
+      <div class="column markup content">
+        {{.Milestone.RenderedContent|Str2html}}
+      </div>
+    </div>
     <div class="ui one column stackable grid">
       <div class="column">
         {{ $closedDate:= TimeSinceUnix .Milestone.ClosedDateUnix $.Lang }}
@@ -6,9 +6,6 @@
     <div class="column">
       {{template "repo/issue/navbar" .}}
     </div>
-    <div class="column center aligned">
-      {{template "repo/issue/search" .}}
-    </div>
     <div class="column right aligned">
       {{if and .CanWriteProjects (not .Repository.IsArchived) .PageIsProjects}}
         <a class="ui green button show-modal item" data-modal="#new-board-item">{{.i18n.Tr "new_project_board"}}</a>
@@ -67,7 +67,7 @@
       </a>
     {{end}}
     {{if .Ref}}
-      <a class="ref" {{if $.RepoLink}}href="{{$.RepoLink}}{{index $.IssueRefURLs .ID}}"{{else}}href="{{AppSubUrl}}/{{.Repo.OwnerName}}/{{.Repo.Name}}{{index $.IssueRefURLs .ID}}"{{end}}>
+      <a class="ref" {{if $.RepoLink}}href="{{index $.IssueRefURLs .ID}}"{{else}}href="{{AppSubUrl}}/{{.Repo.OwnerName}}/{{.Repo.Name}}{{index $.IssueRefURLs .ID}}"{{end}}>
         {{svg "octicon-git-branch" 14 "mr-2"}}{{index $.IssueRefEndNames .ID}}
       </a>
     {{end}}
vendor/github.com/unrolled/render/render.go (generated, vendored; 27 lines changed)
@@ -123,7 +123,7 @@ type Render struct {
 	// Customize Secure with an Options struct.
 	opt             Options
 	templates       *template.Template
-	templatesLk     sync.Mutex
+	templatesLk     sync.RWMutex
 	compiledCharset string
 }

@@ -196,8 +196,8 @@ func (r *Render) compileTemplates() {

 func (r *Render) compileTemplatesFromDir() {
 	dir := r.opt.Directory
-	r.templates = template.New(dir)
-	r.templates.Delims(r.opt.Delims.Left, r.opt.Delims.Right)
+	tmpTemplates := template.New(dir)
+	tmpTemplates.Delims(r.opt.Delims.Left, r.opt.Delims.Right)

 	// Walk the supplied directory and compile any files that match our extension list.
 	r.opt.FileSystem.Walk(dir, func(path string, info os.FileInfo, err error) error {
@@ -227,7 +227,7 @@ func (r *Render) compileTemplatesFromDir() {
 		}

 		name := (rel[0 : len(rel)-len(ext)])
-		tmpl := r.templates.New(filepath.ToSlash(name))
+		tmpl := tmpTemplates.New(filepath.ToSlash(name))

 		// Add our funcmaps.
 		for _, funcs := range r.opt.Funcs {
@@ -241,12 +241,16 @@ func (r *Render) compileTemplatesFromDir() {
 		}
 		return nil
 	})
+
+	r.templatesLk.Lock()
+	r.templates = tmpTemplates
+	r.templatesLk.Unlock()
 }

 func (r *Render) compileTemplatesFromAsset() {
 	dir := r.opt.Directory
-	r.templates = template.New(dir)
-	r.templates.Delims(r.opt.Delims.Left, r.opt.Delims.Right)
+	tmpTemplates := template.New(dir)
+	tmpTemplates.Delims(r.opt.Delims.Left, r.opt.Delims.Right)

 	for _, path := range r.opt.AssetNames() {
 		if !strings.HasPrefix(path, dir) {
@@ -272,7 +276,7 @@ func (r *Render) compileTemplatesFromAsset() {
 		}

 		name := (rel[0 : len(rel)-len(ext)])
-		tmpl := r.templates.New(filepath.ToSlash(name))
+		tmpl := tmpTemplates.New(filepath.ToSlash(name))

 		// Add our funcmaps.
 		for _, funcs := range r.opt.Funcs {
@@ -285,6 +289,10 @@ func (r *Render) compileTemplatesFromAsset() {
 			}
 		}
 	}
+
+	r.templatesLk.Lock()
+	r.templates = tmpTemplates
+	r.templatesLk.Unlock()
 }

 // TemplateLookup is a wrapper around template.Lookup and returns
@@ -389,14 +397,15 @@ func (r *Render) Data(w io.Writer, status int, v []byte) error {

 // HTML builds up the response from the specified template and bindings.
 func (r *Render) HTML(w io.Writer, status int, name string, binding interface{}, htmlOpt ...HTMLOptions) error {
-	r.templatesLk.Lock()
-	defer r.templatesLk.Unlock()
-
 	// If we are in development mode, recompile the templates on every HTML request.
 	if r.opt.IsDevelopment {
 		r.compileTemplates()
 	}

+	r.templatesLk.RLock()
+	defer r.templatesLk.RUnlock()
+
 	opt := r.prepareHTMLOptions(htmlOpt)
 	if tpl := r.templates.Lookup(name); tpl != nil {
 		if len(opt.Layout) > 0 {
vendor/modules.txt (vendored; 4 lines changed)
@@ -777,7 +777,7 @@ github.com/unknwon/i18n
 # github.com/unknwon/paginater v0.0.0-20200328080006-042474bd0eae
 ## explicit
 github.com/unknwon/paginater
-# github.com/unrolled/render v1.1.0
+# github.com/unrolled/render v1.1.1
 ## explicit
 github.com/unrolled/render
 # github.com/urfave/cli v1.22.5
@@ -1048,7 +1048,7 @@ strk.kbt.io/projects/go/libravatar
 # xorm.io/builder v0.3.9
 ## explicit
 xorm.io/builder
-# xorm.io/xorm v1.0.7
+# xorm.io/xorm v1.1.0
 ## explicit
 xorm.io/xorm
 xorm.io/xorm/caches
790
vendor/xorm.io/xorm/.drone.yml
generated
vendored
790
vendor/xorm.io/xorm/.drone.yml
generated
vendored
@@ -2,58 +2,288 @@
|
|||||||
kind: pipeline
|
kind: pipeline
|
||||||
name: testing
|
name: testing
|
||||||
steps:
|
steps:
|
||||||
|
- name: restore-cache
|
||||||
|
image: meltwater/drone-cache
|
||||||
|
pull: always
|
||||||
|
settings:
|
||||||
|
backend: "filesystem"
|
||||||
|
restore: true
|
||||||
|
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
|
||||||
|
archive_format: "gzip"
|
||||||
|
filesystem_cache_root: "/go"
|
||||||
|
mount:
|
||||||
|
- pkg.mod
|
||||||
|
- pkg.build
|
||||||
|
volumes:
|
||||||
|
- name: cache
|
||||||
|
path: /go
|
||||||
|
|
||||||
- name: test-vet
|
- name: test-vet
|
||||||
image: golang:1.11 # The lowest golang requirement
|
image: golang:1.15
|
||||||
environment:
|
environment:
|
||||||
GO111MODULE: "on"
|
GO111MODULE: "on"
|
||||||
GOPROXY: "https://goproxy.cn"
|
GOPROXY: "https://goproxy.io"
|
||||||
|
CGO_ENABLED: 1
|
||||||
|
GOMODCACHE: '/drone/src/pkg.mod'
|
||||||
|
GOCACHE: '/drone/src/pkg.build'
|
||||||
commands:
|
commands:
|
||||||
- make vet
|
- make vet
|
||||||
- make test
|
|
||||||
- make fmt-check
|
- make fmt-check
|
||||||
|
volumes:
|
||||||
|
- name: cache
|
||||||
|
path: /go
|
||||||
when:
|
when:
|
||||||
event:
|
event:
|
||||||
- push
|
- push
|
||||||
- pull_request
|
- pull_request
|
||||||
|
|
||||||
- name: test-sqlite
|
- name: rebuild-cache
|
||||||
image: golang:1.12
|
image: meltwater/drone-cache
|
||||||
|
pull: true
|
||||||
|
settings:
|
||||||
|
backend: "filesystem"
|
||||||
|
rebuild: true
|
||||||
|
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
|
||||||
|
archive_format: "gzip"
|
||||||
|
filesystem_cache_root: "/go"
|
||||||
|
mount:
|
||||||
|
- pkg.mod
|
||||||
|
- pkg.build
|
||||||
|
volumes:
|
||||||
|
- name: cache
|
||||||
|
path: /go
|
||||||
|
|
||||||
|
volumes:
|
||||||
|
- name: cache
|
||||||
|
temp: {}
|
||||||
|
|
||||||
|
---
|
||||||
|
kind: pipeline
|
||||||
|
name: test-sqlite
|
||||||
|
depends_on:
|
||||||
|
- testing
|
||||||
|
steps:
|
||||||
|
- name: restore-cache
|
||||||
|
image: meltwater/drone-cache:dev
|
||||||
|
pull: always
|
||||||
|
settings:
|
||||||
|
backend: "filesystem"
|
||||||
|
restore: true
|
||||||
|
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
|
||||||
|
archive_format: "gzip"
|
||||||
|
filesystem_cache_root: "/go"
|
||||||
|
mount:
|
||||||
|
- pkg.mod
|
||||||
|
- pkg.build
|
||||||
|
volumes:
|
||||||
|
- name: cache
|
||||||
|
path: /go
|
||||||
|
|
||||||
|
- name: test-sqlite3
|
||||||
|
image: golang:1.15
|
||||||
environment:
|
environment:
|
||||||
GO111MODULE: "on"
|
GO111MODULE: "on"
|
||||||
GOPROXY: "https://goproxy.cn"
|
GOPROXY: "https://goproxy.io"
|
||||||
|
CGO_ENABLED: 1
|
||||||
|
GOMODCACHE: '/drone/src/pkg.mod'
|
||||||
|
GOCACHE: '/drone/src/pkg.build'
|
||||||
|
commands:
|
||||||
|
- make test-sqlite3
|
||||||
|
- TEST_CACHE_ENABLE=true make test-sqlite3
|
||||||
|
- TEST_QUOTE_POLICY=reserved make test-sqlite3
|
||||||
|
volumes:
|
||||||
|
- name: cache
|
||||||
|
path: /go
|
||||||
|
|
||||||
|
- name: test-sqlite
|
||||||
|
image: golang:1.15
|
||||||
|
environment:
|
||||||
|
GO111MODULE: "on"
|
||||||
|
GOPROXY: "https://goproxy.io"
|
||||||
|
CGO_ENABLED: 1
|
||||||
|
GOMODCACHE: '/drone/src/pkg.mod'
|
||||||
|
GOCACHE: '/drone/src/pkg.build'
|
||||||
commands:
|
commands:
|
||||||
- make test-sqlite
|
- make test-sqlite
|
||||||
- TEST_CACHE_ENABLE=true make test-sqlite
|
- TEST_CACHE_ENABLE=true make test-sqlite
|
||||||
- TEST_QUOTE_POLICY=reserved make test-sqlite
|
- TEST_QUOTE_POLICY=reserved make test-sqlite
|
||||||
when:
|
volumes:
|
||||||
event:
|
- name: cache
|
||||||
- push
|
path: /go
|
||||||
- pull_request
|
|
||||||
|
- name: rebuild-cache
|
||||||
|
image: meltwater/drone-cache:dev
|
||||||
|
pull: true
|
||||||
|
settings:
|
||||||
|
backend: "filesystem"
|
||||||
|
rebuild: true
|
||||||
|
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
|
||||||
|
archive_format: "gzip"
|
||||||
|
filesystem_cache_root: "/go"
|
||||||
|
mount:
|
||||||
|
- pkg.mod
|
||||||
|
- pkg.build
|
||||||
|
volumes:
|
||||||
|
- name: cache
|
||||||
|
path: /go
|
||||||
|
|
||||||
|
volumes:
|
||||||
|
- name: cache
|
||||||
|
temp: {}
|
||||||
|
|
||||||
|
---
|
||||||
|
kind: pipeline
|
||||||
|
name: test-mysql
|
||||||
|
depends_on:
|
||||||
|
- testing
|
||||||
|
steps:
|
||||||
|
- name: restore-cache
|
||||||
|
image: meltwater/drone-cache
|
||||||
|
pull: always
|
||||||
|
settings:
|
||||||
|
backend: "filesystem"
|
||||||
|
restore: true
|
||||||
|
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
|
||||||
|
archive_format: "gzip"
|
||||||
|
filesystem_cache_root: "/go"
|
||||||
|
mount:
|
||||||
|
- pkg.mod
|
||||||
|
- pkg.build
|
||||||
|
volumes:
|
||||||
|
- name: cache
|
||||||
|
path: /go
|
||||||
|
|
||||||
- name: test-mysql
|
- name: test-mysql
|
||||||
image: golang:1.12
|
image: golang:1.15
|
||||||
environment:
|
environment:
|
||||||
GO111MODULE: "on"
|
GO111MODULE: "on"
|
||||||
GOPROXY: "https://goproxy.cn"
|
GOPROXY: "https://goproxy.io"
|
||||||
|
CGO_ENABLED: 1
|
||||||
|
GOMODCACHE: '/drone/src/pkg.mod'
|
||||||
|
GOCACHE: '/drone/src/pkg.build'
|
||||||
TEST_MYSQL_HOST: mysql
|
TEST_MYSQL_HOST: mysql
|
||||||
TEST_MYSQL_CHARSET: utf8
|
TEST_MYSQL_CHARSET: utf8
|
||||||
TEST_MYSQL_DBNAME: xorm_test
|
TEST_MYSQL_DBNAME: xorm_test
|
||||||
TEST_MYSQL_USERNAME: root
|
TEST_MYSQL_USERNAME: root
|
||||||
TEST_MYSQL_PASSWORD:
|
TEST_MYSQL_PASSWORD:
|
||||||
commands:
|
commands:
|
||||||
|
- make test
|
||||||
|
- make test-mysql
|
||||||
|
- TEST_CACHE_ENABLE=true make test-mysql
|
||||||
|
- TEST_QUOTE_POLICY=reserved make test-mysql
|
||||||
|
volumes:
|
||||||
|
- name: cache
|
||||||
|
path: /go
|
||||||
|
|
||||||
|
- name: test-mysql-utf8mb4
|
||||||
|
image: golang:1.15
|
||||||
|
depends_on:
|
||||||
|
- test-mysql
|
||||||
|
environment:
|
||||||
|
GO111MODULE: "on"
|
||||||
|
GOPROXY: "https://goproxy.io"
|
||||||
|
CGO_ENABLED: 1
|
||||||
|
GOMODCACHE: '/drone/src/pkg.mod'
|
||||||
|
GOCACHE: '/drone/src/pkg.build'
|
||||||
|
TEST_MYSQL_HOST: mysql
|
||||||
|
TEST_MYSQL_CHARSET: utf8mb4
|
||||||
|
TEST_MYSQL_DBNAME: xorm_test
|
||||||
|
TEST_MYSQL_USERNAME: root
|
||||||
|
TEST_MYSQL_PASSWORD:
|
||||||
|
commands:
|
||||||
- make test-mysql
|
- make test-mysql
|
||||||
- TEST_CACHE_ENABLE=true make test-mysql
|
- TEST_CACHE_ENABLE=true make test-mysql
|
||||||
- TEST_QUOTE_POLICY=reserved make test-mysql
|
- TEST_QUOTE_POLICY=reserved make test-mysql
|
||||||
when:
|
volumes:
|
||||||
event:
|
- name: cache
|
||||||
- push
|
path: /go
|
||||||
- pull_request
|
|
||||||
|
|
||||||
- name: test-mysql8
|
- name: test-mymysql
|
||||||
image: golang:1.12
|
pull: default
|
||||||
|
image: golang:1.15
|
||||||
|
depends_on:
|
||||||
|
- test-mysql-utf8mb4
|
||||||
environment:
|
environment:
|
||||||
GO111MODULE: "on"
|
GO111MODULE: "on"
|
||||||
GOPROXY: "https://goproxy.cn"
|
GOPROXY: "https://goproxy.io"
|
||||||
|
CGO_ENABLED: 1
|
||||||
|
GOMODCACHE: '/drone/src/pkg.mod'
|
||||||
|
GOCACHE: '/drone/src/pkg.build'
|
||||||
|
TEST_MYSQL_HOST: mysql:3306
|
||||||
|
TEST_MYSQL_DBNAME: xorm_test
|
||||||
|
TEST_MYSQL_USERNAME: root
|
||||||
|
TEST_MYSQL_PASSWORD:
|
||||||
|
commands:
|
||||||
|
- make test-mymysql
|
||||||
|
- TEST_CACHE_ENABLE=true make test-mymysql
|
||||||
|
- TEST_QUOTE_POLICY=reserved make test-mymysql
|
||||||
|
volumes:
|
||||||
|
- name: cache
|
||||||
|
path: /go
|
||||||
|
|
||||||
|
- name: rebuild-cache
|
||||||
|
image: meltwater/drone-cache
|
||||||
|
depends_on:
|
||||||
|
- test-mysql
|
||||||
|
- test-mysql-utf8mb4
|
||||||
|
- test-mymysql
|
||||||
|
pull: true
|
||||||
|
settings:
|
||||||
|
backend: "filesystem"
|
||||||
|
rebuild: true
|
||||||
|
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
|
||||||
|
archive_format: "gzip"
|
||||||
|
filesystem_cache_root: "/go"
|
||||||
|
mount:
|
||||||
|
- pkg.mod
|
||||||
|
- pkg.build
|
||||||
|
volumes:
|
||||||
|
- name: cache
|
||||||
|
path: /go
|
||||||
|
|
||||||
|
volumes:
|
||||||
|
- name: cache
|
||||||
|
temp: {}
|
||||||
|
|
||||||
|
services:
|
||||||
|
- name: mysql
|
||||||
|
pull: default
|
||||||
|
image: mysql:5.7
|
||||||
|
environment:
|
||||||
|
MYSQL_ALLOW_EMPTY_PASSWORD: yes
|
||||||
|
MYSQL_DATABASE: xorm_test
|
||||||
|
|
||||||
|
---
|
||||||
|
kind: pipeline
|
||||||
|
name: test-mysql8
|
||||||
|
depends_on:
|
||||||
|
- test-mysql
|
||||||
|
- test-sqlite
|
||||||
|
steps:
|
||||||
|
- name: restore-cache
|
||||||
|
image: meltwater/drone-cache
|
||||||
|
pull: always
|
||||||
|
settings:
|
||||||
|
backend: "filesystem"
|
||||||
|
restore: true
|
||||||
|
cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
|
||||||
|
archive_format: "gzip"
|
||||||
|
filesystem_cache_root: "/go"
|
||||||
|
mount:
|
||||||
|
- pkg.mod
|
||||||
|
- pkg.build
|
||||||
|
volumes:
|
||||||
|
- name: cache
|
||||||
|
path: /go
|
||||||
|
|
||||||
|
- name: test-mysql8
|
||||||
|
image: golang:1.15
|
||||||
|
environment:
|
||||||
|
GO111MODULE: "on"
|
||||||
|
GOPROXY: "https://goproxy.io"
|
||||||
|
CGO_ENABLED: 1
|
||||||
|
GOMODCACHE: '/drone/src/pkg.mod'
|
||||||
|
GOCACHE: '/drone/src/pkg.build'
|
||||||
TEST_MYSQL_HOST: mysql8
|
TEST_MYSQL_HOST: mysql8
|
||||||
TEST_MYSQL_CHARSET: utf8mb4
|
TEST_MYSQL_CHARSET: utf8mb4
|
||||||
TEST_MYSQL_DBNAME: xorm_test
|
TEST_MYSQL_DBNAME: xorm_test
|
||||||
@@ -63,58 +293,70 @@ steps:
 - make test-mysql
 - TEST_CACHE_ENABLE=true make test-mysql
 - TEST_QUOTE_POLICY=reserved make test-mysql
-when:
+volumes:
-event:
+- name: cache
-- push
+path: /go
-- pull_request

-- name: test-mysql-utf8mb4
+- name: rebuild-cache
-image: golang:1.12
+image: meltwater/drone-cache:dev
+pull: true
 depends_on:
-- test-mysql
+- test-mysql8
-environment:
+settings:
-GO111MODULE: "on"
+backend: "filesystem"
-GOPROXY: "https://goproxy.cn"
+rebuild: true
-TEST_MYSQL_HOST: mysql
+cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
-TEST_MYSQL_CHARSET: utf8mb4
+archive_format: "gzip"
-TEST_MYSQL_DBNAME: xorm_test
+filesystem_cache_root: "/go"
-TEST_MYSQL_USERNAME: root
+mount:
-TEST_MYSQL_PASSWORD:
+- pkg.mod
-commands:
+- pkg.build
-- make test-mysql
+volumes:
-- TEST_CACHE_ENABLE=true make test-mysql
+- name: cache
-- TEST_QUOTE_POLICY=reserved make test-mysql
+path: /go
-when:
-event:
-- push
-- pull_request

-- name: test-mymysql
+volumes:
+- name: cache
+temp: {}

+services:
+- name: mysql8
 pull: default
-image: golang:1.12
+image: mysql:8.0
-depends_on:
-- test-mysql-utf8mb4
 environment:
-GO111MODULE: "on"
+MYSQL_ALLOW_EMPTY_PASSWORD: yes
-GOPROXY: "https://goproxy.cn"
+MYSQL_DATABASE: xorm_test
-TEST_MYSQL_HOST: mysql:3306
-TEST_MYSQL_DBNAME: xorm_test
+---
-TEST_MYSQL_USERNAME: root
+kind: pipeline
-TEST_MYSQL_PASSWORD:
+name: test-mariadb
-commands:
+depends_on:
-- make test-mymysql
+- test-mysql8
-- TEST_CACHE_ENABLE=true make test-mymysql
+steps:
-- TEST_QUOTE_POLICY=reserved make test-mymysql
+- name: restore-cache
-when:
+image: meltwater/drone-cache
-event:
+pull: always
-- push
+settings:
-- pull_request
+backend: "filesystem"
+restore: true
+cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
+archive_format: "gzip"
+filesystem_cache_root: "/go"
+mount:
+- pkg.mod
+- pkg.build
+volumes:
+- name: cache
+path: /go

 - name: test-mariadb
-image: golang:1.12
+image: golang:1.15
 environment:
 GO111MODULE: "on"
-GOPROXY: "https://goproxy.cn"
+GOPROXY: "https://goproxy.io"
+CGO_ENABLED: 1
+GOMODCACHE: '/drone/src/pkg.mod'
+GOCACHE: '/drone/src/pkg.build'
 TEST_MYSQL_HOST: mariadb
 TEST_MYSQL_CHARSET: utf8mb4
 TEST_MYSQL_DBNAME: xorm_test
@@ -124,17 +366,71 @@ steps:
 - make test-mysql
 - TEST_CACHE_ENABLE=true make test-mysql
 - TEST_QUOTE_POLICY=reserved make test-mysql
-when:
+volumes:
-event:
+- name: cache
-- push
+path: /go
-- pull_request
+- name: rebuild-cache
+image: meltwater/drone-cache:dev
+depends_on:
+- test-mariadb
+pull: true
+settings:
+backend: "filesystem"
+rebuild: true
+cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
+archive_format: "gzip"
+filesystem_cache_root: "/go"
+mount:
+- pkg.mod
+- pkg.build
+volumes:
+- name: cache
+path: /go

+volumes:
+- name: cache
+temp: {}

+services:
+- name: mariadb
+pull: default
+image: mariadb:10.4
+environment:
+MYSQL_ALLOW_EMPTY_PASSWORD: yes
+MYSQL_DATABASE: xorm_test

+---
+kind: pipeline
+name: test-postgres
+depends_on:
+- test-mariadb
+steps:
+- name: restore-cache
+image: meltwater/drone-cache
+pull: always
+settings:
+backend: "filesystem"
+restore: true
+cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
+archive_format: "gzip"
+filesystem_cache_root: "/go"
+mount:
+- pkg.mod
+- pkg.build
+volumes:
+- name: cache
+path: /go

 - name: test-postgres
 pull: default
-image: golang:1.12
+image: golang:1.15
 environment:
 GO111MODULE: "on"
-GOPROXY: "https://goproxy.cn"
+GOPROXY: "https://goproxy.io"
+CGO_ENABLED: 1
+GOMODCACHE: '/drone/src/pkg.mod'
+GOCACHE: '/drone/src/pkg.build'
 TEST_PGSQL_HOST: pgsql
 TEST_PGSQL_DBNAME: xorm_test
 TEST_PGSQL_USERNAME: postgres
@@ -143,19 +439,21 @@ steps:
 - make test-postgres
 - TEST_CACHE_ENABLE=true make test-postgres
 - TEST_QUOTE_POLICY=reserved make test-postgres
-when:
+volumes:
-event:
+- name: cache
-- push
+path: /go
-- pull_request

 - name: test-postgres-schema
 pull: default
-image: golang:1.12
+image: golang:1.15
 depends_on:
 - test-postgres
 environment:
 GO111MODULE: "on"
-GOPROXY: "https://goproxy.cn"
+GOPROXY: "https://goproxy.io"
+CGO_ENABLED: 1
+GOMODCACHE: '/drone/src/pkg.mod'
+GOCACHE: '/drone/src/pkg.build'
 TEST_PGSQL_HOST: pgsql
 TEST_PGSQL_SCHEMA: xorm
 TEST_PGSQL_DBNAME: xorm_test
@@ -165,17 +463,72 @@ steps:
 - make test-postgres
 - TEST_CACHE_ENABLE=true make test-postgres
 - TEST_QUOTE_POLICY=reserved make test-postgres
-when:
+volumes:
-event:
+- name: cache
-- push
+path: /go
-- pull_request
+- name: rebuild-cache
+image: meltwater/drone-cache:dev
+pull: true
+depends_on:
+- test-postgres-schema
+settings:
+backend: "filesystem"
+rebuild: true
+cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
+archive_format: "gzip"
+filesystem_cache_root: "/go"
+mount:
+- pkg.mod
+- pkg.build
+volumes:
+- name: cache
+path: /go

+volumes:
+- name: cache
+temp: {}

+services:
+- name: pgsql
+pull: default
+image: postgres:9.5
+environment:
+POSTGRES_DB: xorm_test
+POSTGRES_USER: postgres
+POSTGRES_PASSWORD: postgres

+---
+kind: pipeline
+name: test-mssql
+depends_on:
+- test-postgres
+steps:
+- name: restore-cache
+image: meltwater/drone-cache
+pull: always
+settings:
+backend: "filesystem"
+restore: true
+cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
+archive_format: "gzip"
+filesystem_cache_root: "/go"
+mount:
+- pkg.mod
+- pkg.build
+volumes:
+- name: cache
+path: /go

 - name: test-mssql
 pull: default
-image: golang:1.12
+image: golang:1.15
 environment:
 GO111MODULE: "on"
-GOPROXY: "https://goproxy.cn"
+GOPROXY: "https://goproxy.io"
+CGO_ENABLED: 1
+GOMODCACHE: '/drone/src/pkg.mod'
+GOCACHE: '/drone/src/pkg.build'
 TEST_MSSQL_HOST: mssql
 TEST_MSSQL_DBNAME: xorm_test
 TEST_MSSQL_USERNAME: sa
@@ -185,17 +538,70 @@ steps:
 - TEST_CACHE_ENABLE=true make test-mssql
 - TEST_QUOTE_POLICY=reserved make test-mssql
 - TEST_MSSQL_DEFAULT_VARCHAR=NVARCHAR TEST_MSSQL_DEFAULT_CHAR=NCHAR make test-mssql
-when:
+volumes:
-event:
+- name: cache
-- push
+path: /go
-- pull_request
+- name: rebuild-cache
+image: meltwater/drone-cache:dev
+pull: true
+settings:
+backend: "filesystem"
+rebuild: true
+cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
+archive_format: "gzip"
+filesystem_cache_root: "/go"
+mount:
+- pkg.mod
+- pkg.build
+volumes:
+- name: cache
+path: /go

+volumes:
+- name: cache
+temp: {}

+services:
+- name: mssql
+pull: default
+image: microsoft/mssql-server-linux:latest
+environment:
+ACCEPT_EULA: Y
+SA_PASSWORD: yourStrong(!)Password
+MSSQL_PID: Developer

+---
+kind: pipeline
+name: test-tidb
+depends_on:
+- test-mssql
+steps:
+- name: restore-cache
+image: meltwater/drone-cache
+pull: always
+settings:
+backend: "filesystem"
+restore: true
+cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
+archive_format: "gzip"
+filesystem_cache_root: "/go"
+mount:
+- pkg.mod
+- pkg.build
+volumes:
+- name: cache
+path: /go

 - name: test-tidb
 pull: default
-image: golang:1.12
+image: golang:1.15
 environment:
 GO111MODULE: "on"
-GOPROXY: "https://goproxy.cn"
+GOPROXY: "https://goproxy.io"
+CGO_ENABLED: 1
+GOMODCACHE: '/drone/src/pkg.mod'
+GOCACHE: '/drone/src/pkg.build'
 TEST_TIDB_HOST: "tidb:4000"
 TEST_TIDB_DBNAME: xorm_test
 TEST_TIDB_USERNAME: root
@@ -204,17 +610,66 @@ steps:
 - make test-tidb
 - TEST_CACHE_ENABLE=true make test-tidb
 - TEST_QUOTE_POLICY=reserved make test-tidb
-when:
+volumes:
-event:
+- name: cache
-- push
+path: /go
-- pull_request
+- name: rebuild-cache
+image: meltwater/drone-cache:dev
+pull: true
+settings:
+backend: "filesystem"
+rebuild: true
+cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
+archive_format: "gzip"
+filesystem_cache_root: "/go"
+mount:
+- pkg.mod
+- pkg.build
+volumes:
+- name: cache
+path: /go

+volumes:
+- name: cache
+temp: {}

+services:
+- name: tidb
+pull: default
+image: pingcap/tidb:v3.0.3

+---
+kind: pipeline
+name: test-cockroach
+depends_on:
+- test-tidb
+steps:
+- name: restore-cache
+image: meltwater/drone-cache
+pull: always
+settings:
+backend: "filesystem"
+restore: true
+cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
+archive_format: "gzip"
+filesystem_cache_root: "/go"
+mount:
+- pkg.mod
+- pkg.build
+volumes:
+- name: cache
+path: /go

 - name: test-cockroach
 pull: default
-image: golang:1.13
+image: golang:1.15
 environment:
 GO111MODULE: "on"
-GOPROXY: "https://goproxy.cn"
+GOPROXY: "https://goproxy.io"
+CGO_ENABLED: 1
+GOMODCACHE: '/drone/src/pkg.mod'
+GOCACHE: '/drone/src/pkg.build'
 TEST_COCKROACH_HOST: "cockroach:26257"
 TEST_COCKROACH_DBNAME: xorm_test
 TEST_COCKROACH_USERNAME: root
@@ -223,115 +678,62 @@ steps:
 - sleep 10
 - make test-cockroach
 - TEST_CACHE_ENABLE=true make test-cockroach
-when:
+volumes:
-event:
+- name: cache
-- push
+path: /go
-- pull_request

-- name: merge_coverage
+- name: rebuild-cache
-pull: default
+image: meltwater/drone-cache:dev
-image: golang:1.12
+pull: true
-environment:
+settings:
-GO111MODULE: "on"
+backend: "filesystem"
-GOPROXY: "https://goproxy.cn"
+rebuild: true
-depends_on:
+cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
-- test-vet
+archive_format: "gzip"
-- test-sqlite
+filesystem_cache_root: "/go"
-- test-mysql
+mount:
-- test-mysql8
+- pkg.mod
-- test-mymysql
+- pkg.build
-- test-postgres
+volumes:
-- test-postgres-schema
+- name: cache
-- test-mssql
+path: /go
-- test-tidb
-- test-cockroach
+volumes:
-commands:
+- name: cache
-- make coverage
+temp: {}
-when:
-event:
-- push
-- pull_request

 services:

-- name: mysql
-pull: default
-image: mysql:5.7
-environment:
-MYSQL_ALLOW_EMPTY_PASSWORD: yes
-MYSQL_DATABASE: xorm_test
-when:
-event:
-- push
-- tag
-- pull_request

-- name: mysql8
-pull: default
-image: mysql:8.0
-environment:
-MYSQL_ALLOW_EMPTY_PASSWORD: yes
-MYSQL_DATABASE: xorm_test
-when:
-event:
-- push
-- tag
-- pull_request

-- name: mariadb
-pull: default
-image: mariadb:10.4
-environment:
-MYSQL_ALLOW_EMPTY_PASSWORD: yes
-MYSQL_DATABASE: xorm_test
-when:
-event:
-- push
-- tag
-- pull_request

-- name: pgsql
-pull: default
-image: postgres:9.5
-environment:
-POSTGRES_DB: xorm_test
-POSTGRES_USER: postgres
-POSTGRES_PASSWORD: postgres
-when:
-event:
-- push
-- tag
-- pull_request

-- name: mssql
-pull: default
-image: microsoft/mssql-server-linux:latest
-environment:
-ACCEPT_EULA: Y
-SA_PASSWORD: yourStrong(!)Password
-MSSQL_PID: Developer
-when:
-event:
-- push
-- tag
-- pull_request

-- name: tidb
-pull: default
-image: pingcap/tidb:v3.0.3
-when:
-event:
-- push
-- tag
-- pull_request

 - name: cockroach
 pull: default
 image: cockroachdb/cockroach:v19.2.4
 commands:
 - /cockroach/cockroach start --insecure

+---
+kind: pipeline
+name: merge_coverage
+depends_on:
+- testing
+- test-sqlite
+- test-mysql
+- test-mysql8
+- test-mariadb
+- test-postgres
+- test-mssql
+- test-tidb
+- test-cockroach
+steps:
+- name: merge_coverage
+pull: default
+image: golang:1.15
+environment:
+GO111MODULE: "on"
+GOPROXY: "https://goproxy.io"
+commands:
+- make coverage
 when:
+branch:
+- master
 event:
 - push
-- tag
+- pull_request
-- pull_request
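The .drone.yml rewrite above drops the per-step `when:`/`event:` filters of the old single pipeline and instead chains one pipeline per database backend, each bracketed by restore-cache and rebuild-cache steps from the meltwater/drone-cache plugin so the Go module and build caches can be restored at the start of a pipeline and archived again at the end. A condensed sketch of that pattern, assuming the same plugin settings and cache key template that appear in the diff (the pipeline, step, and make-target names here are illustrative, not the full vendored file):

kind: pipeline
name: test-example            # one such pipeline exists per database backend
depends_on:
- testing                     # pipelines are chained via depends_on rather than filtered per event

steps:
- name: restore-cache         # pull pkg.mod / pkg.build back out of the cache volume
  image: meltwater/drone-cache
  settings:
    backend: "filesystem"
    restore: true
    cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
    archive_format: "gzip"
    filesystem_cache_root: "/go"
    mount:
    - pkg.mod
    - pkg.build
  volumes:
  - name: cache
    path: /go

- name: test
  image: golang:1.15
  environment:
    GO111MODULE: "on"
    GOMODCACHE: '/drone/src/pkg.mod'   # caches live inside the workspace so the
    GOCACHE: '/drone/src/pkg.build'    # drone-cache mounts above can archive them
  commands:
  - make test                          # illustrative target; the real steps run make test-mysql, test-postgres, etc.

- name: rebuild-cache         # write the refreshed caches back after the tests
  image: meltwater/drone-cache:dev
  settings:
    backend: "filesystem"
    rebuild: true
    cache_key: '{{ .Repo.Name }}_{{ checksum "go.mod" }}_{{ checksum "go.sum" }}_{{ arch }}_{{ os }}'
    archive_format: "gzip"
    filesystem_cache_root: "/go"
    mount:
    - pkg.mod
    - pkg.build
  volumes:
  - name: cache
    path: /go

volumes:
- name: cache
  temp: {}                    # temp volume shared by the three steps of this pipeline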
1 vendor/xorm.io/xorm/.gitignore generated vendored
@@ -36,3 +36,4 @@ test.db.sql
 *coverage.out
 test.db
 integrations/*.sql
+integrations/test_sqlite*
4 vendor/xorm.io/xorm/.revive.toml generated vendored
@@ -15,6 +15,7 @@ warningCode = 1
 [rule.if-return]
 [rule.increment-decrement]
 [rule.var-naming]
+arguments = [["ID", "UID", "UUID", "URL", "JSON"], []]
 [rule.var-declaration]
 [rule.package-comments]
 [rule.range]
@@ -22,4 +23,5 @@ warningCode = 1
 [rule.time-naming]
 [rule.unexported-return]
 [rule.indent-error-flow]
 [rule.errorf]
+[rule.struct-tag]
15 vendor/xorm.io/xorm/CHANGELOG.md generated vendored
@@ -3,6 +3,21 @@
 This changelog goes through all the changes that have been made in each release
 without substantial changes to our git log.

+## [1.1.0](https://gitea.com/xorm/xorm/releases/tag/1.1.0) - 2021-05-14
+
+* FEATURES
+* Unsigned Support for mysql (#1889)
+* Support modernc.org/sqlite (#1850)
+* TESTING
+* More tests (#1890)
+* MISC
+* Byte strings in postgres aren't 0x... (#1906)
+* Fix another bug with #1872 (#1905)
+* Fix two issues with dumptables (#1903)
+* Fix comments (#1896)
+* Fix comments (#1893)
+* MariaDB 10.5 adds a suffix on old datatypes (#1885)
+
 ## [1.0.7](https://gitea.com/xorm/xorm/pulls?q=&type=all&state=closed&milestone=1336) - 2021-01-21

 * BUGFIXES
Some files were not shown because too many files have changed in this diff.