Compare commits

...

25 Commits

Author SHA1 Message Date
6543
83f8414e1e Update Changelog for 1.11.5 (#11347)
* Update Changelog for 1.11.5

* bump version in docs

* Update CHANGELOG.md

Co-authored-by: zeripath <art27@cantab.net>
2020-05-09 16:26:27 -03:00
guillep2k
0b216f40fd Fix tracked time issues (#11349) (#11354)
Backport #11349 

* Fix tracked time issues (#11349)

* Fix nil exception: #11313

* fix 500

* activate test 😆

* move logic

* Add missing import

Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: Guillermo Prandi <guillep2k@users.noreply.github.com>
2020-05-09 18:08:41 +01:00
zeripath
dd6e604f8f Add NotifySyncPushCommits to indexer notifier (#11309) (#11338)
Thanks to @simon-on-gh for tracking down the issue.

Fix #11200

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2020-05-08 22:40:51 +01:00
zeripath
86863ae939 Prevent timer leaks in Workerpool and others (#11333) (#11340)
There is a potential memory leak in `Workerpool` due to the intricacies of
`time.Timer` stopping.

Whenever a `time.Timer` is `Stop`ped, its channel must be cleared using a
`select` if the result of the `Stop()` is `false`.

Unfortunately in `Workerpool` these were checked the wrong way round.

However, there were a few other places that were not being checked.
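The fix wraps this pattern in a small helper, util.StopTimer, added in modules/util/timer.go later in this diff; reproduced here for reference:

```go
// StopTimer is a utility function to safely stop a time.Timer and clean its channel
func StopTimer(t *time.Timer) bool {
	stopped := t.Stop()
	if !stopped {
		// The timer has already fired (or is firing); drain the channel so a
		// stale tick cannot be received later.
		select {
		case <-t.C:
		default:
		}
	}
	return stopped
}
```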

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2020-05-09 00:18:39 +08:00
zeripath
f3a90057a5 Allow X in addition to x in tasks (#10979) (#11335)
Signed-off-by: Andrew Thornton <art27@cantab.net>
2020-05-08 20:55:16 +08:00
6543
03fdd82d63 Changelog v1.11.5 (#11329)
* Changelog v1.11.5

* Apply suggestions from code review

Co-authored-by: guillep2k <18600385+guillep2k@users.noreply.github.com>

Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: guillep2k <18600385+guillep2k@users.noreply.github.com>
2020-05-08 12:58:05 +03:00
6543
cd7fa15d1d Prevent multiple listings of organization when creating a repository (#11303) (#11325)
Backport #11303 

Prevent multiple listings of organization when creating a repository (#11303)

prevent double entries in results of GetOrgsCanCreateRepoByUserID

I first tried to just add GroupBy directly, but xorm returned broken user objects ...

... the solution was to query the related UserIDs (OrgIDs) first and return OrgUsers based on those IDs

close #11258
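For readability, here is the two-step query from the models/user.go hunk later in this diff, reassembled with indentation: the matching org IDs are selected in a subquery first, so the joins can no longer produce duplicate rows.

```go
func GetOrgsCanCreateRepoByUserID(userID int64) ([]*User, error) {
	orgs := make([]*User, 0, 10)
	// Select the IDs of qualifying organizations in a subquery, then load the
	// user rows by ID, so the INNER JOINs cannot duplicate results.
	return orgs, x.Where(builder.In("id",
		builder.Select("`user`.id").From("`user`").
			Join("INNER", "`team_user`", "`team_user`.org_id = `user`.id").
			Join("INNER", "`team`", "`team`.id = `team_user`.team_id").
			Where(builder.Eq{"`team_user`.uid": userID}).
			And(builder.Eq{"`team`.authorize": AccessModeOwner}.
				Or(builder.Eq{"`team`.can_create_org_repo": true})))).
		Desc("`user`.updated_unix").
		Find(&orgs)
}
```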

Co-authored-by: zeripath <art27@cantab.net>
2020-05-07 21:30:51 +01:00
6543
79868d7096 When delete tracked time through the API return 404 not 500 (#11319) (#11326) 2020-05-07 22:42:33 +03:00
zeripath
19626b93f8 Manage port in submodule refurl (#11305) (#11323)
* Manage port in submodule refurl

Fix #11304

Signed-off-by: Andrew Thornton <art27@cantab.net>

* fix lint

Signed-off-by: Andrew Thornton <art27@cantab.net>

* URLJoin causes a cyclic dependency and possibly isn't what we want anyway

Signed-off-by: Andrew Thornton <art27@cantab.net>

* Protect against leading .. in scp syntax

Signed-off-by: Andrew Thornton <art27@cantab.net>
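A small, self-contained illustration (with assumed example values) of that last point: the new getRefURL joins relative submodule refs under "/" and runs path.Clean on the result, so a leading ".." cannot climb above the repository prefix. This reproduces one of the updated cases in the test table further down in this diff.

```go
package main

import (
	"fmt"
	"path"
)

func main() {
	urlPrefix := "https://localhost"  // example URL prefix
	repoFullName := "user/repo2"      // example repository
	refURI := "../path/to/repo.git/"  // relative submodule ref with a leading ".."
	// path.Clean collapses the ".." inside the rooted path instead of letting
	// it escape above the prefix.
	fmt.Println(urlPrefix + path.Clean(path.Join("/", repoFullName, refURI)))
	// Output: https://localhost/user/path/to/repo.git
}
```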

Co-authored-by: Lauris BH <lauris@nix.lv>
2020-05-07 10:41:40 -05:00
zeripath
91e6a7f7ea api.Context.NotFound(...) should tolerate nil (#11288) (#11306)
There is an unfortunate signature change with the api.Context
NotFound function; whereas the normal modules/context/Context
NotFound function requires an error or nil, the api.Context
variant will panic with an NPE if a nil is provided.

This PR allows api.Context.NotFound to tolerate being passed a nil.
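A minimal sketch of the added guard (the full hunk for the APIContext.NotFound method appears later in this diff):

```go
// Excerpt of the loop in APIContext.NotFound after this change; the only new
// part is the nil guard at the top.
for _, obj := range objs {
	// Ignore nil: a caller may now pass a nil error without triggering a panic.
	if obj == nil {
		continue
	}
	if err, ok := obj.(error); ok {
		errors = append(errors, err.Error())
	} else {
		// non-error values are handled as before (not shown here)
	}
}
```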

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: Lauris BH <lauris@nix.lv>
2020-05-05 17:48:24 -05:00
guillep2k
ff7eaa1eb4 Show pull request selection even when unrelated branches (#11239) (#11283)
Fix #10525

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: guillep2k <18600385+guillep2k@users.noreply.github.com>
2020-05-04 12:38:26 +01:00
Kyle Evans
5131206aad repo: milestone: make /milestone/:id endpoint accessible (#11264) (#11282)
Previously, this required authentication, but there's not actually
any privileged information on this page.  Move the endpoint out of
the group that requires sign-in.  It still requires the ability to
read issues and pull requests, so private repositories (for instance)
will not be exposed.

Fixes #10312 
Fixes #11233

Co-authored-by: guillep2k <18600385+guillep2k@users.noreply.github.com>
2020-05-04 04:12:36 +03:00
6543
bfc25fcf40 Fix GetContents(): Don't ignore Executables (#11192) (#11209) 2020-04-25 01:54:38 -03:00
zeripath
4a6765fba2 Fix submodule paths when AppSubUrl is not root (#11098) (#11176)
Backport #11098

Fix #11002

Signed-off-by: Andrew Thornton <art27@cantab.net>
2020-04-22 13:37:52 +01:00
zeripath
dca8ef9407 Prevent clones and pushes to disabled wiki (#11131) (#11134)
Backport #11131

Signed-off-by: Andrew Thornton <art27@cantab.net>
2020-04-19 16:40:40 +01:00
zeripath
cebef5c871 Remove errant third closing curly-bracket from account.tmpl and send account ID in account.tmpl (#11130)
* Remove errant third } from account.tmpl

Fix #11128

Signed-off-by: Andrew Thornton <art27@cantab.net>

* Update templates/user/settings/account.tmpl
2020-04-19 20:35:34 +08:00
6543
245d6ebda5 On Repo Deletion: Delete related TrackedTimes too (#11110) (#11125) 2020-04-19 10:39:48 +08:00
zeripath
d9875ff2e1 Refresh codemirror on show pull comment tab (#11100) (#11122)
Fix #10975

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: John Olheiser <john.olheiser@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: guillep2k <18600385+guillep2k@users.noreply.github.com>
2020-04-18 13:55:13 -03:00
6543
cc2a6c1d30 Fix merge dialog on protected branch with missing required statuses (#11074) (#11084)
It is possible for misconfigured protected branches to have required status checks that are not in any of the current statuses: Pending, Success, Error, Failure, or Warning - presumably because the CI has not contacted us as yet.

Fix #10636 by adding a 'missing' case for StatusChecks when these are missing
2020-04-16 10:45:34 +03:00
赵智超
b5fd55de73 fix 404 and 500 image size in small size screen (#11043) (#11049)
Do it by defining a Semantic UI image class

Signed-off-by: a1012112796 <1012112796@qq.com>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-04-12 20:24:15 +08:00
6543
e11b3a1076 Load PR Issue Poster on API too (#11033) (#11039)
* Load pr Issue Poster on API too (#11033)

* adjust for 1.11 codebase
2020-04-11 01:10:16 -03:00
6543
0c4be64345 [Backport] Fix release counter on API repository info (#10968) (#10996)
* Fix release counter on API repository info (#10968)

* correct pull count to match the v1.11 fixtures
2020-04-06 18:13:12 -04:00
zeripath
c34ad62eea Multiple Gitea Doctor improvements (#10943) (#10990) (#10064) (#9095) (#10991)
* Multiple Gitea Doctor improvements (#10943)

Backport #10943

* Add `gitea doctor --list` flag to list the checks that will be run, including those by default
* Add `gitea doctor --run` to run specific checks
* Add `gitea doctor --all` to run all checks
* Add db version checker
* Add non-default recalculate merge bases check/fixer to doctor
* Add hook checker (Fix #9878) and ensure hooks are executable (Fix #6319)
* Fix authorized_keys checker - slight change of functionality here because parsing the command is fragile and we should just check if the authorized_keys file is essentially the same as what gitea would produce. (This is still not perfect as order matters - we should probably just md5sum the two files.)
* Add SCRIPT_TYPE check (Fix #10977)
* Add `gitea doctor --fix` to attempt to fix what is possible to easily fix
* Add `gitea doctor --log-file` to set the log-file, be it a file, stdout or to switch off completely. (Fixes previously undetected bug with certain xorm logging configurations - see @6543 comment.)
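The checks themselves are driven by a small table of check structs in the new cmd/doctor.go, shown in full below. A hypothetical sketch of how an additional check would be registered (the name and function here are invented for illustration and are not part of this change; the struct fields match the check type in cmd/doctor.go):

```go
// Hypothetical example only: "example-empty-repos" is not a real check.
var exampleCheck = check{
	title:     "Count repositories with no commits",
	name:      "example-empty-repos",
	isDefault: false,
	f: func(ctx *cli.Context) ([]string, error) {
		// Return human-readable findings; a non-nil error marks the check as failed.
		return []string{"no problems found"}, nil
	},
}

// Appended to the checklist slice ("more checks please append here"),
// it could then be run with: gitea doctor --run example-empty-repos
```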

Signed-off-by: Andrew Thornton <art27@cantab.net>

* Switch to io.Writer instead of io.StringWriter

Signed-off-by: Andrew Thornton <art27@cantab.net>
2020-04-06 16:15:20 -04:00
6543
f7d7cf4e2d Fix rebase conflict detection in git 2.26 (#10930)
Git changed the technique used in rebase from a simple apply-patches
approach to using merge. This broke our conflict detection code.

created by: Andrew Thornton <art27@cantab.net>

Co-authored-by: Lauris BH <lauris@nix.lv>
2020-04-03 13:09:15 -04:00
zeripath
99a364a9dc Generate Diff and Patch direct from Pull head (#10936) (#10938)
Backport #10936

* Generate Diff and Patch direct from Pull head

Fix #10932
Also fix "Empty Diff/Patch File when pull is merged"

Closes #10934

* Add tests to ensure that diff does not change
* Ensure diffs and pulls pages work if head branch is deleted too

Signed-off-by: Andrew Thornton <art27@cantab.net>
2020-04-03 17:06:54 +03:00
44 changed files with 996 additions and 151 deletions


@@ -4,6 +4,34 @@ This changelog goes through all the changes that have been made in each release
without substantial changes to our git log; to see the highlights of what has
been added to each release, please refer to the [blog](https://blog.gitea.io).
## [1.11.5](https://github.com/go-gitea/gitea/releases/tag/v1.11.5) - 2020-05-09
* BUGFIXES
* Prevent timer leaks in Workerpool and others (#11333) (#11340)
* Fix tracked time issues (#11349) (#11354)
* Add NotifySyncPushCommits to indexer notifier (#11309) (#11338)
* Allow X in addition to x in tasks (#10979) (#11335)
* When delete tracked time through the API return 404 not 500 (#11319) (#11326)
* Prevent duplicate records in organizations list when creating a repository (#11303) (#11325)
* Manage port in submodule refurl (#11305) (#11323)
* api.Context.NotFound(...) should tolerate nil (#11288) (#11306)
* Show pull request selection even when unrelated branches (#11239) (#11283)
* Repo: milestone: make /milestone/:id endpoint accessible (#11264) (#11282)
* Fix GetContents(): Dont't ignore Executables (#11192) (#11209)
* Fix submodule paths when AppSubUrl is not root (#11098) (#11176)
* Prevent clones and pushes to disabled wiki (#11131) (#11134)
* Remove errant third closing curly-bracket from account.tmpl and send account ID in account.tmpl (#11130)
* On Repo Deletion: Delete related TrackedTimes too (#11110) (#11125)
* Refresh codemirror on show pull comment tab (#11100) (#11122)
* Fix merge dialog on protected branch with missing required statuses (#11074) (#11084)
* Load pr Issue Poster on API too (#11033) (#11039)
* Fix release counter on API repository info (#10968) (#10996)
* Generate Diff and Patch direct from Pull head (#10936) (#10938)
* Fix rebase conflict detection in git 2.26 (#10929) (#10930)
* ENHANCEMENT
* Fix 404 and 500 image size in small size screen (#11043) (#11049)
* Multiple Gitea Doctor improvements (#10943) (#10990) (#10064) (#9095) (#10991)
## [1.11.4](https://github.com/go-gitea/gitea/releases/tag/v1.11.4) - 2020-04-01
* BUGFIXES

cmd/doctor.go Normal file

@@ -0,0 +1,496 @@
// Copyright 2019 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package cmd
import (
"bufio"
"bytes"
"context"
"fmt"
"io/ioutil"
golog "log"
"os"
"os/exec"
"path/filepath"
"strings"
"text/tabwriter"
"code.gitea.io/gitea/models"
"code.gitea.io/gitea/models/migrations"
"code.gitea.io/gitea/modules/git"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/options"
"code.gitea.io/gitea/modules/setting"
"xorm.io/builder"
"github.com/urfave/cli"
)
// CmdDoctor represents the available doctor sub-command.
var CmdDoctor = cli.Command{
Name: "doctor",
Usage: "Diagnose problems",
Description: "A command to diagnose problems with the current Gitea instance according to the given configuration.",
Action: runDoctor,
Flags: []cli.Flag{
cli.BoolFlag{
Name: "list",
Usage: "List the available checks",
},
cli.BoolFlag{
Name: "default",
Usage: "Run the default checks (if neither --run or --all is set, this is the default behaviour)",
},
cli.StringSliceFlag{
Name: "run",
Usage: "Run the provided checks - (if --default is set, the default checks will also run)",
},
cli.BoolFlag{
Name: "all",
Usage: "Run all the available checks",
},
cli.BoolFlag{
Name: "fix",
Usage: "Automatically fix what we can",
},
cli.StringFlag{
Name: "log-file",
Usage: `Name of the log file (default: "doctor.log"). Set to "-" to output to stdout, set to "" to disable`,
},
},
}
type check struct {
title string
name string
isDefault bool
f func(ctx *cli.Context) ([]string, error)
abortIfFailed bool
skipDatabaseInit bool
}
// checklist represents list for all checks
var checklist = []check{
{
// NOTE: this check should be the first in the list
title: "Check paths and basic configuration",
name: "paths",
isDefault: true,
f: runDoctorPathInfo,
abortIfFailed: true,
skipDatabaseInit: true,
},
{
title: "Check Database Version",
name: "check-db",
isDefault: true,
f: runDoctorCheckDBVersion,
abortIfFailed: true,
},
{
title: "Check if OpenSSH authorized_keys file is up-to-date",
name: "authorized_keys",
isDefault: true,
f: runDoctorAuthorizedKeys,
},
{
title: "Check if SCRIPT_TYPE is available",
name: "script-type",
isDefault: false,
f: runDoctorScriptType,
},
{
title: "Check if hook files are up-to-date and executable",
name: "hooks",
isDefault: false,
f: runDoctorHooks,
},
{
title: "Recalculate merge bases",
name: "recalculate_merge_bases",
isDefault: false,
f: runDoctorPRMergeBase,
},
// more checks please append here
}
func runDoctor(ctx *cli.Context) error {
// Silence the default loggers
log.DelNamedLogger("console")
log.DelNamedLogger(log.DEFAULT)
// Now setup our own
logFile := ctx.String("log-file")
if !ctx.IsSet("log-file") {
logFile = "doctor.log"
}
if len(logFile) == 0 {
log.NewLogger(1000, "doctor", "console", `{"level":"NONE","stacktracelevel":"NONE","colorize":"%t"}`)
} else if logFile == "-" {
log.NewLogger(1000, "doctor", "console", `{"level":"trace","stacktracelevel":"NONE"}`)
} else {
log.NewLogger(1000, "doctor", "file", fmt.Sprintf(`{"filename":%q,"level":"trace","stacktracelevel":"NONE"}`, logFile))
}
// Finally redirect the default golog to here
golog.SetFlags(0)
golog.SetPrefix("")
golog.SetOutput(log.NewLoggerAsWriter("INFO", log.GetLogger(log.DEFAULT)))
if ctx.IsSet("list") {
w := tabwriter.NewWriter(os.Stdout, 0, 8, 0, '\t', 0)
_, _ = w.Write([]byte("Default\tName\tTitle\n"))
for _, check := range checklist {
if check.isDefault {
_, _ = w.Write([]byte{'*'})
}
_, _ = w.Write([]byte{'\t'})
_, _ = w.Write([]byte(check.name))
_, _ = w.Write([]byte{'\t'})
_, _ = w.Write([]byte(check.title))
_, _ = w.Write([]byte{'\n'})
}
return w.Flush()
}
var checks []check
if ctx.Bool("all") {
checks = checklist
} else if ctx.IsSet("run") {
addDefault := ctx.Bool("default")
names := ctx.StringSlice("run")
for i, name := range names {
names[i] = strings.ToLower(strings.TrimSpace(name))
}
for _, check := range checklist {
if addDefault && check.isDefault {
checks = append(checks, check)
continue
}
for _, name := range names {
if name == check.name {
checks = append(checks, check)
break
}
}
}
} else {
for _, check := range checklist {
if check.isDefault {
checks = append(checks, check)
}
}
}
dbIsInit := false
for i, check := range checks {
if !dbIsInit && !check.skipDatabaseInit {
// Only open database after the most basic configuration check
setting.EnableXORMLog = false
if err := initDBDisableConsole(true); err != nil {
fmt.Println(err)
fmt.Println("Check if you are using the right config file. You can use a --config directive to specify one.")
return nil
}
dbIsInit = true
}
fmt.Println("[", i+1, "]", check.title)
messages, err := check.f(ctx)
for _, message := range messages {
fmt.Println("-", message)
}
if err != nil {
fmt.Println("Error:", err)
if check.abortIfFailed {
return nil
}
} else {
fmt.Println("OK.")
}
fmt.Println()
}
return nil
}
func runDoctorPathInfo(ctx *cli.Context) ([]string, error) {
res := make([]string, 0, 10)
if fi, err := os.Stat(setting.CustomConf); err != nil || !fi.Mode().IsRegular() {
res = append(res, fmt.Sprintf("Failed to find configuration file at '%s'.", setting.CustomConf))
res = append(res, fmt.Sprintf("If you've never ran Gitea yet, this is normal and '%s' will be created for you on first run.", setting.CustomConf))
res = append(res, "Otherwise check that you are running this command from the correct path and/or provide a `--config` parameter.")
return res, fmt.Errorf("can't proceed without a configuration file")
}
setting.NewContext()
fail := false
check := func(name, path string, is_dir, required, is_write bool) {
res = append(res, fmt.Sprintf("%-25s '%s'", name+":", path))
fi, err := os.Stat(path)
if err != nil {
if os.IsNotExist(err) && ctx.Bool("fix") && is_dir {
if err := os.MkdirAll(path, 0777); err != nil {
res = append(res, fmt.Sprintf(" ERROR: %v", err))
fail = true
return
}
fi, err = os.Stat(path)
}
}
if err != nil {
if required {
res = append(res, fmt.Sprintf(" ERROR: %v", err))
fail = true
return
}
res = append(res, fmt.Sprintf(" NOTICE: not accessible (%v)", err))
return
}
if is_dir && !fi.IsDir() {
res = append(res, " ERROR: not a directory")
fail = true
return
} else if !is_dir && !fi.Mode().IsRegular() {
res = append(res, " ERROR: not a regular file")
fail = true
} else if is_write {
if err := runDoctorWritableDir(path); err != nil {
res = append(res, fmt.Sprintf(" ERROR: not writable: %v", err))
fail = true
}
}
}
// Note print paths inside quotes to make any leading/trailing spaces evident
check("Configuration File Path", setting.CustomConf, false, true, false)
check("Repository Root Path", setting.RepoRootPath, true, true, true)
check("Data Root Path", setting.AppDataPath, true, true, true)
check("Custom File Root Path", setting.CustomPath, true, false, false)
check("Work directory", setting.AppWorkPath, true, true, false)
check("Log Root Path", setting.LogRootPath, true, true, true)
if options.IsDynamic() {
// Do not check/report on StaticRootPath if data is embedded in Gitea (-tags bindata)
check("Static File Root Path", setting.StaticRootPath, true, true, false)
}
if fail {
return res, fmt.Errorf("please check your configuration file and try again")
}
return res, nil
}
func runDoctorWritableDir(path string) error {
// There's no platform-independent way of checking if a directory is writable
// https://stackoverflow.com/questions/20026320/how-to-tell-if-folder-exists-and-is-writable
tmpFile, err := ioutil.TempFile(path, "doctors-order")
if err != nil {
return err
}
if err := os.Remove(tmpFile.Name()); err != nil {
fmt.Printf("Warning: can't remove temporary file: '%s'\n", tmpFile.Name())
}
tmpFile.Close()
return nil
}
const tplCommentPrefix = `# gitea public key`
func runDoctorAuthorizedKeys(ctx *cli.Context) ([]string, error) {
if setting.SSH.StartBuiltinServer || !setting.SSH.CreateAuthorizedKeysFile {
return nil, nil
}
fPath := filepath.Join(setting.SSH.RootPath, "authorized_keys")
f, err := os.Open(fPath)
if err != nil {
if ctx.Bool("fix") {
return []string{fmt.Sprintf("Error whilst opening authorized_keys: %v. Attempting regeneration", err)}, models.RewriteAllPublicKeys()
}
return nil, err
}
defer f.Close()
linesInAuthorizedKeys := map[string]bool{}
scanner := bufio.NewScanner(f)
for scanner.Scan() {
line := scanner.Text()
if strings.HasPrefix(line, tplCommentPrefix) {
continue
}
linesInAuthorizedKeys[line] = true
}
f.Close()
// now we regenerate and check if there are any lines missing
regenerated := &bytes.Buffer{}
if err := models.RegeneratePublicKeys(regenerated); err != nil {
return nil, err
}
scanner = bufio.NewScanner(regenerated)
for scanner.Scan() {
line := scanner.Text()
if strings.HasPrefix(line, tplCommentPrefix) {
continue
}
if ok := linesInAuthorizedKeys[line]; ok {
continue
}
if ctx.Bool("fix") {
return []string{"authorized_keys is out of date, attempting regeneration"}, models.RewriteAllPublicKeys()
}
return nil, fmt.Errorf("authorized_keys is out of date and should be regenerated with gitea admin regenerate keys")
}
return nil, nil
}
func runDoctorCheckDBVersion(ctx *cli.Context) ([]string, error) {
if err := models.NewEngine(context.Background(), migrations.EnsureUpToDate); err != nil {
if ctx.Bool("fix") {
return []string{fmt.Sprintf("WARN: Got Error %v during ensure up to date", err), "Attempting to migrate to the latest DB version to fix this."}, models.NewEngine(context.Background(), migrations.Migrate)
}
return nil, err
}
return nil, nil
}
func iterateRepositories(each func(*models.Repository) ([]string, error)) ([]string, error) {
results := []string{}
err := models.Iterate(
models.DefaultDBContext(),
new(models.Repository),
builder.Gt{"id": 0},
func(idx int, bean interface{}) error {
res, err := each(bean.(*models.Repository))
results = append(results, res...)
return err
},
)
return results, err
}
func iteratePRs(repo *models.Repository, each func(*models.Repository, *models.PullRequest) ([]string, error)) ([]string, error) {
results := []string{}
err := models.Iterate(
models.DefaultDBContext(),
new(models.PullRequest),
builder.Eq{"base_repo_id": repo.ID},
func(idx int, bean interface{}) error {
res, err := each(repo, bean.(*models.PullRequest))
results = append(results, res...)
return err
},
)
return results, err
}
func runDoctorHooks(ctx *cli.Context) ([]string, error) {
// Need to iterate across all of the repositories
return iterateRepositories(func(repo *models.Repository) ([]string, error) {
results, err := models.CheckDelegateHooks(repo.RepoPath())
if err != nil {
return nil, err
}
if len(results) > 0 && ctx.Bool("fix") {
return []string{fmt.Sprintf("regenerated hooks for %s", repo.FullName())}, models.CreateDelegateHooks(repo.RepoPath())
}
return results, nil
})
}
func runDoctorPRMergeBase(ctx *cli.Context) ([]string, error) {
numRepos := 0
numPRs := 0
numPRsUpdated := 0
results, err := iterateRepositories(func(repo *models.Repository) ([]string, error) {
numRepos++
return iteratePRs(repo, func(repo *models.Repository, pr *models.PullRequest) ([]string, error) {
numPRs++
results := []string{}
pr.BaseRepo = repo
repoPath := repo.RepoPath()
oldMergeBase := pr.MergeBase
if !pr.HasMerged {
var err error
pr.MergeBase, err = git.NewCommand("merge-base", "--", pr.BaseBranch, pr.GetGitRefName()).RunInDir(repoPath)
if err != nil {
var err2 error
pr.MergeBase, err2 = git.NewCommand("rev-parse", git.BranchPrefix+pr.BaseBranch).RunInDir(repoPath)
if err2 != nil {
results = append(results, fmt.Sprintf("WARN: Unable to get merge base for PR ID %d, #%d onto %s in %s/%s", pr.ID, pr.Index, pr.BaseBranch, pr.BaseRepo.OwnerName, pr.BaseRepo.Name))
log.Error("Unable to get merge base for PR ID %d, Index %d in %s/%s. Error: %v & %v", pr.ID, pr.Index, pr.BaseRepo.OwnerName, pr.BaseRepo.Name, err, err2)
return results, nil
}
}
} else {
parentsString, err := git.NewCommand("rev-list", "--parents", "-n", "1", pr.MergedCommitID).RunInDir(repoPath)
if err != nil {
results = append(results, fmt.Sprintf("WARN: Unable to get parents for merged PR ID %d, #%d onto %s in %s/%s", pr.ID, pr.Index, pr.BaseBranch, pr.BaseRepo.OwnerName, pr.BaseRepo.Name))
log.Error("Unable to get parents for merged PR ID %d, Index %d in %s/%s. Error: %v", pr.ID, pr.Index, pr.BaseRepo.OwnerName, pr.BaseRepo.Name, err)
return results, nil
}
parents := strings.Split(strings.TrimSpace(parentsString), " ")
if len(parents) < 2 {
return results, nil
}
args := append([]string{"merge-base", "--"}, parents[1:]...)
args = append(args, pr.GetGitRefName())
pr.MergeBase, err = git.NewCommand(args...).RunInDir(repoPath)
if err != nil {
results = append(results, fmt.Sprintf("WARN: Unable to get merge base for merged PR ID %d, #%d onto %s in %s/%s", pr.ID, pr.Index, pr.BaseBranch, pr.BaseRepo.OwnerName, pr.BaseRepo.Name))
log.Error("Unable to get merge base for merged PR ID %d, Index %d in %s/%s. Error: %v", pr.ID, pr.Index, pr.BaseRepo.OwnerName, pr.BaseRepo.Name, err)
return results, nil
}
}
pr.MergeBase = strings.TrimSpace(pr.MergeBase)
if pr.MergeBase != oldMergeBase {
if ctx.Bool("fix") {
if err := pr.UpdateCols("merge_base"); err != nil {
return results, err
}
} else {
results = append(results, fmt.Sprintf("#%d onto %s in %s/%s: MergeBase should be %s but is %s", pr.Index, pr.BaseBranch, pr.BaseRepo.OwnerName, pr.BaseRepo.Name, oldMergeBase, pr.MergeBase))
}
numPRsUpdated++
}
return results, nil
})
})
if ctx.Bool("fix") {
results = append(results, fmt.Sprintf("%d PR mergebases updated of %d PRs total in %d repos", numPRsUpdated, numPRs, numRepos))
} else {
if numPRsUpdated > 0 && err == nil {
return results, fmt.Errorf("%d PRs with incorrect mergebases of %d PRs total in %d repos", numPRsUpdated, numPRs, numRepos)
}
results = append(results, fmt.Sprintf("%d PRs with incorrect mergebases of %d PRs total in %d repos", numPRsUpdated, numPRs, numRepos))
}
return results, err
}
func runDoctorScriptType(ctx *cli.Context) ([]string, error) {
path, err := exec.LookPath(setting.ScriptType)
if err != nil {
return []string{fmt.Sprintf("ScriptType %s is not on the current PATH", setting.ScriptType)}, err
}
return []string{fmt.Sprintf("ScriptType %s is on the current PATH at %s", setting.ScriptType, path)}, nil
}


@@ -19,6 +19,7 @@ import (
"code.gitea.io/gitea/modules/git"
"code.gitea.io/gitea/modules/private"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/util"
"github.com/urfave/cli"
)
@@ -113,15 +114,8 @@ func (d *delayWriter) Close() error {
if d == nil {
return nil
}
stopped := d.timer.Stop()
if stopped {
return nil
}
select {
case <-d.timer.C:
default:
}
if d.buf == nil {
stopped := util.StopTimer(d.timer)
if stopped || d.buf == nil {
return nil
}
_, err := d.internal.Write(d.buf.Bytes())


@@ -18,7 +18,7 @@ params:
description: Git with a cup of tea
author: The Gitea Authors
website: https://docs.gitea.io
version: 1.11.2
version: 1.11.5
outputs:
home:


@@ -60,17 +60,17 @@ func TestAPIDeleteTrackedTime(t *testing.T) {
//Deletion not allowed
req := NewRequestf(t, "DELETE", "/api/v1/repos/%s/%s/issues/%d/times/%d?token=%s", user2.Name, issue2.Repo.Name, issue2.Index, time6.ID, token)
session.MakeRequest(t, req, http.StatusForbidden)
/* Delete own time <-- ToDo: timout without reason
time3 := models.AssertExistsAndLoadBean(t, &models.TrackedTime{ID: 3}).(*models.TrackedTime)
req = NewRequestf(t, "DELETE", "/api/v1/repos/%s/%s/issues/%d/times/%d?token=%s", user2.Name, issue2.Repo.Name, issue2.Index, time3.ID, token)
session.MakeRequest(t, req, http.StatusNoContent)
//Delete non existing time
session.MakeRequest(t, req, http.StatusInternalServerError) */
session.MakeRequest(t, req, http.StatusNotFound)
//Reset time of user 2 on issue 2
trackedSeconds, err := models.GetTrackedSeconds(models.FindTrackedTimesOptions{IssueID: 2, UserID: 2})
assert.NoError(t, err)
assert.Equal(t, int64(3662), trackedSeconds)
assert.Equal(t, int64(3661), trackedSeconds)
req = NewRequestf(t, "DELETE", "/api/v1/repos/%s/%s/issues/%d/times?token=%s", user2.Name, issue2.Repo.Name, issue2.Index, token)
session.MakeRequest(t, req, http.StatusNoContent)


@@ -209,13 +209,31 @@ func getRepo(t *testing.T, repoID int64) *models.Repository {
func TestAPIViewRepo(t *testing.T) {
defer prepareTestEnv(t)()
var repo api.Repository
req := NewRequest(t, "GET", "/api/v1/repos/user2/repo1")
resp := MakeRequest(t, req, http.StatusOK)
var repo api.Repository
DecodeJSON(t, resp, &repo)
assert.EqualValues(t, 1, repo.ID)
assert.EqualValues(t, "repo1", repo.Name)
assert.EqualValues(t, 1, repo.Releases)
assert.EqualValues(t, 1, repo.OpenIssues)
assert.EqualValues(t, 2, repo.OpenPulls)
req = NewRequest(t, "GET", "/api/v1/repos/user12/repo10")
resp = MakeRequest(t, req, http.StatusOK)
DecodeJSON(t, resp, &repo)
assert.EqualValues(t, 10, repo.ID)
assert.EqualValues(t, "repo10", repo.Name)
assert.EqualValues(t, 1, repo.OpenPulls)
assert.EqualValues(t, 1, repo.Forks)
req = NewRequest(t, "GET", "/api/v1/repos/user5/repo4")
resp = MakeRequest(t, req, http.StatusOK)
DecodeJSON(t, resp, &repo)
assert.EqualValues(t, 4, repo.ID)
assert.EqualValues(t, "repo4", repo.Name)
assert.EqualValues(t, 1, repo.Stars)
}
func TestAPIOrgRepos(t *testing.T) {


@@ -71,7 +71,6 @@ func testGit(t *testing.T, u *url.URL) {
t.Run("BranchProtectMerge", doBranchProtectPRMerge(&httpContext, dstPath))
t.Run("MergeFork", func(t *testing.T) {
t.Run("CreatePRAndMerge", doMergeFork(httpContext, forkedUserCtx, "master", httpContext.Username+":master"))
t.Run("DeleteRepository", doAPIDeleteRepository(httpContext))
rawTest(t, &forkedUserCtx, little, big, littleLFS, bigLFS)
mediaTest(t, &forkedUserCtx, little, big, littleLFS, bigLFS)
})
@@ -111,7 +110,6 @@ func testGit(t *testing.T, u *url.URL) {
t.Run("BranchProtectMerge", doBranchProtectPRMerge(&sshContext, dstPath))
t.Run("MergeFork", func(t *testing.T) {
t.Run("CreatePRAndMerge", doMergeFork(sshContext, forkedUserCtx, "master", sshContext.Username+":master"))
t.Run("DeleteRepository", doAPIDeleteRepository(sshContext))
rawTest(t, &forkedUserCtx, little, big, littleLFS, bigLFS)
mediaTest(t, &forkedUserCtx, little, big, littleLFS, bigLFS)
})
@@ -419,8 +417,62 @@ func doMergeFork(ctx, baseCtx APITestContext, baseBranch, headBranch string) fun
pr, err = doAPICreatePullRequest(ctx, baseCtx.Username, baseCtx.Reponame, baseBranch, headBranch)(t)
assert.NoError(t, err)
})
t.Run("EnsureCanSeePull", func(t *testing.T) {
req := NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d", url.PathEscape(baseCtx.Username), url.PathEscape(baseCtx.Reponame), pr.Index))
ctx.Session.MakeRequest(t, req, http.StatusOK)
req = NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d/files", url.PathEscape(baseCtx.Username), url.PathEscape(baseCtx.Reponame), pr.Index))
ctx.Session.MakeRequest(t, req, http.StatusOK)
req = NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d/commits", url.PathEscape(baseCtx.Username), url.PathEscape(baseCtx.Reponame), pr.Index))
ctx.Session.MakeRequest(t, req, http.StatusOK)
})
var diffStr string
t.Run("GetDiff", func(t *testing.T) {
req := NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d.diff", url.PathEscape(baseCtx.Username), url.PathEscape(baseCtx.Reponame), pr.Index))
resp := ctx.Session.MakeRequest(t, req, http.StatusOK)
diffStr = resp.Body.String()
})
t.Run("MergePR", doAPIMergePullRequest(baseCtx, baseCtx.Username, baseCtx.Reponame, pr.Index))
t.Run("EnsureCanSeePull", func(t *testing.T) {
req := NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d", url.PathEscape(baseCtx.Username), url.PathEscape(baseCtx.Reponame), pr.Index))
ctx.Session.MakeRequest(t, req, http.StatusOK)
req = NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d/files", url.PathEscape(baseCtx.Username), url.PathEscape(baseCtx.Reponame), pr.Index))
ctx.Session.MakeRequest(t, req, http.StatusOK)
req = NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d/commits", url.PathEscape(baseCtx.Username), url.PathEscape(baseCtx.Reponame), pr.Index))
ctx.Session.MakeRequest(t, req, http.StatusOK)
})
t.Run("EnsureDiffNoChange", func(t *testing.T) {
req := NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d.diff", url.PathEscape(baseCtx.Username), url.PathEscape(baseCtx.Reponame), pr.Index))
resp := ctx.Session.MakeRequest(t, req, http.StatusOK)
assert.Equal(t, diffStr, resp.Body.String())
})
t.Run("DeleteHeadBranch", doBranchDelete(baseCtx, baseCtx.Username, baseCtx.Reponame, headBranch))
t.Run("EnsureCanSeePull", func(t *testing.T) {
req := NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d", url.PathEscape(baseCtx.Username), url.PathEscape(baseCtx.Reponame), pr.Index))
ctx.Session.MakeRequest(t, req, http.StatusOK)
req = NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d/files", url.PathEscape(baseCtx.Username), url.PathEscape(baseCtx.Reponame), pr.Index))
ctx.Session.MakeRequest(t, req, http.StatusOK)
req = NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d/commits", url.PathEscape(baseCtx.Username), url.PathEscape(baseCtx.Reponame), pr.Index))
ctx.Session.MakeRequest(t, req, http.StatusOK)
})
t.Run("EnsureDiffNoChange", func(t *testing.T) {
req := NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d.diff", url.PathEscape(baseCtx.Username), url.PathEscape(baseCtx.Reponame), pr.Index))
resp := ctx.Session.MakeRequest(t, req, http.StatusOK)
assert.Equal(t, diffStr, resp.Body.String())
})
t.Run("DeleteRepository", doAPIDeleteRepository(ctx))
t.Run("EnsureCanSeePull", func(t *testing.T) {
req := NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d", url.PathEscape(baseCtx.Username), url.PathEscape(baseCtx.Reponame), pr.Index))
ctx.Session.MakeRequest(t, req, http.StatusOK)
req = NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d/files", url.PathEscape(baseCtx.Username), url.PathEscape(baseCtx.Reponame), pr.Index))
ctx.Session.MakeRequest(t, req, http.StatusOK)
req = NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d/commits", url.PathEscape(baseCtx.Username), url.PathEscape(baseCtx.Reponame), pr.Index))
ctx.Session.MakeRequest(t, req, http.StatusOK)
})
t.Run("EnsureDiffNoChange", func(t *testing.T) {
req := NewRequest(t, "GET", fmt.Sprintf("/%s/%s/pulls/%d.diff", url.PathEscape(baseCtx.Username), url.PathEscape(baseCtx.Reponame), pr.Index))
resp := ctx.Session.MakeRequest(t, req, http.StatusOK)
assert.Equal(t, diffStr, resp.Body.String())
})
}
}
@@ -493,3 +545,14 @@ func doPushCreate(ctx APITestContext, u *url.URL) func(t *testing.T) {
assert.True(t, repo.IsPrivate)
}
}
func doBranchDelete(ctx APITestContext, owner, repo, branch string) func(*testing.T) {
return func(t *testing.T) {
csrf := GetCSRF(t, ctx.Session, fmt.Sprintf("/%s/%s/branches", url.PathEscape(owner), url.PathEscape(repo)))
req := NewRequestWithValues(t, "POST", fmt.Sprintf("/%s/%s/branches/delete?name=%s", url.PathEscape(owner), url.PathEscape(repo), url.QueryEscape(branch)), map[string]string{
"_csrf": csrf,
})
ctx.Session.MakeRequest(t, req, http.StatusOK)
}
}


@@ -68,6 +68,7 @@ arguments - which can alternatively be run by running the subcommand web.`
cmd.CmdMigrate,
cmd.CmdKeys,
cmd.CmdConvert,
cmd.CmdDoctor,
}
// Now adjust these commands to add our global configuration options


@@ -4,6 +4,11 @@
package models
import (
"code.gitea.io/gitea/modules/setting"
"xorm.io/builder"
)
// DBContext represents a db context
type DBContext struct {
e Engine
@@ -53,3 +58,10 @@ func WithTx(f func(ctx DBContext) error) error {
sess.Close()
return err
}
// Iterate iterates the databases and doing something
func Iterate(ctx DBContext, tableBean interface{}, cond builder.Cond, fun func(idx int, bean interface{}) error) error {
return ctx.e.Where(cond).
BufferSize(setting.Database.IterateBufferSize).
Iterate(tableBean, fun)
}


@@ -74,8 +74,8 @@ var (
issueTasksDonePat *regexp.Regexp
)
const issueTasksRegexpStr = `(^\s*[-*]\s\[[\sx]\]\s.)|(\n\s*[-*]\s\[[\sx]\]\s.)`
const issueTasksDoneRegexpStr = `(^\s*[-*]\s\[[x]\]\s.)|(\n\s*[-*]\s\[[x]\]\s.)`
const issueTasksRegexpStr = `(^\s*[-*]\s\[[\sxX]\]\s.)|(\n\s*[-*]\s\[[\sxX]\]\s.)`
const issueTasksDoneRegexpStr = `(^\s*[-*]\s\[[xX]\]\s.)|(\n\s*[-*]\s\[[xX]\]\s.)`
const issueMaxDupIndexAttempts = 3
func init() {


@@ -273,6 +273,10 @@ func DeleteTime(t *TrackedTime) error {
return err
}
if err := t.loadAttributes(sess); err != nil {
return err
}
if err := deleteTime(sess, t); err != nil {
return err
}
@@ -312,10 +316,8 @@ func deleteTime(e Engine, t *TrackedTime) error {
// GetTrackedTimeByID returns raw TrackedTime without loading attributes by id
func GetTrackedTimeByID(id int64) (*TrackedTime, error) {
time := &TrackedTime{
ID: id,
}
has, err := x.Get(time)
time := new(TrackedTime)
has, err := x.ID(id).Get(time)
if err != nil {
return nil, err
} else if !has {


@@ -292,6 +292,52 @@ var migrations = []Migration{
NewMigration("Add block on rejected reviews branch protection", addBlockOnRejectedReviews),
}
// GetCurrentDBVersion returns the current db version
func GetCurrentDBVersion(x *xorm.Engine) (int64, error) {
if err := x.Sync(new(Version)); err != nil {
return -1, fmt.Errorf("sync: %v", err)
}
currentVersion := &Version{ID: 1}
has, err := x.Get(currentVersion)
if err != nil {
return -1, fmt.Errorf("get: %v", err)
}
if !has {
return -1, nil
}
return currentVersion.Version, nil
}
// ExpectedVersion returns the expected db version
func ExpectedVersion() int64 {
return int64(minDBVersion + len(migrations))
}
// EnsureUpToDate will check if the db is at the correct version
func EnsureUpToDate(x *xorm.Engine) error {
currentDB, err := GetCurrentDBVersion(x)
if err != nil {
return err
}
if currentDB < 0 {
return fmt.Errorf("Database has not been initialised")
}
if minDBVersion > currentDB {
return fmt.Errorf("DB version %d (<= %d) is too old for auto-migration. Upgrade to Gitea 1.6.4 first then upgrade to this version", currentDB, minDBVersion)
}
expected := ExpectedVersion()
if currentDB != expected {
return fmt.Errorf(`Current database version %d is not equal to the expected version %d. Please run "gitea [--config /path/to/app.ini] migrate" to update the database version`, currentDB, expected)
}
return nil
}
// Migrate database to current version
func Migrate(x *xorm.Engine) error {
if err := x.Sync(new(Version)); err != nil {


@@ -470,12 +470,12 @@ func GetOwnedOrgsByUserIDDesc(userID int64, desc string) ([]*User, error) {
func GetOrgsCanCreateRepoByUserID(userID int64) ([]*User, error) {
orgs := make([]*User, 0, 10)
return orgs, x.Join("INNER", "`team_user`", "`team_user`.org_id=`user`.id").
Join("INNER", "`team`", "`team`.id=`team_user`.team_id").
Where("`team_user`.uid=?", userID).
And(builder.Eq{"`team`.authorize": AccessModeOwner}.Or(builder.Eq{"`team`.can_create_org_repo": true})).
Desc("`user`.updated_unix").
Find(&orgs)
return orgs, x.Where(builder.In("id", builder.Select("`user`.id").From("`user`").
Join("INNER", "`team_user`", "`team_user`.org_id = `user`.id").
Join("INNER", "`team`", "`team`.id = `team_user`.team_id").
Where(builder.Eq{"`team_user`.uid": userID}).
And(builder.Eq{"`team`.authorize": AccessModeOwner}.Or(builder.Eq{"`team`.can_create_org_repo": true})))).
Desc("`user`.updated_unix").Find(&orgs)
}
// GetOrgUsersByUserID returns all organization-user relations by user ID.


@@ -173,7 +173,6 @@ type Repository struct {
NumMilestones int `xorm:"NOT NULL DEFAULT 0"`
NumClosedMilestones int `xorm:"NOT NULL DEFAULT 0"`
NumOpenMilestones int `xorm:"-"`
NumReleases int `xorm:"-"`
IsPrivate bool `xorm:"INDEX"`
IsEmpty bool `xorm:"INDEX"`
@@ -364,6 +363,8 @@ func (repo *Repository) innerAPIFormat(e Engine, mode AccessMode, isParent bool)
allowSquash = config.AllowSquash
}
numReleases, _ := GetReleaseCountByRepoID(repo.ID, FindReleasesOptions{IncludeDrafts: false, IncludeTags: true})
return &api.Repository{
ID: repo.ID,
Owner: repo.Owner.APIFormat(),
@@ -387,7 +388,7 @@ func (repo *Repository) innerAPIFormat(e Engine, mode AccessMode, isParent bool)
Watchers: repo.NumWatches,
OpenIssues: repo.NumOpenIssues,
OpenPulls: repo.NumOpenPulls,
Releases: repo.NumReleases,
Releases: int(numReleases),
DefaultBranch: repo.DefaultBranch,
Created: repo.CreatedUnix.AsTime(),
Updated: repo.UpdatedUnix.AsTime(),
@@ -968,6 +969,21 @@ func CheckCreateRepository(doer, u *User, name string) error {
return nil
}
func getHookTemplates() (hookNames, hookTpls, giteaHookTpls []string) {
hookNames = []string{"pre-receive", "update", "post-receive"}
hookTpls = []string{
fmt.Sprintf("#!/usr/bin/env %s\ndata=$(cat)\nexitcodes=\"\"\nhookname=$(basename $0)\nGIT_DIR=${GIT_DIR:-$(dirname $0)}\n\nfor hook in ${GIT_DIR}/hooks/${hookname}.d/*; do\ntest -x \"${hook}\" && test -f \"${hook}\" || continue\necho \"${data}\" | \"${hook}\"\nexitcodes=\"${exitcodes} $?\"\ndone\n\nfor i in ${exitcodes}; do\n[ ${i} -eq 0 ] || exit ${i}\ndone\n", setting.ScriptType),
fmt.Sprintf("#!/usr/bin/env %s\nexitcodes=\"\"\nhookname=$(basename $0)\nGIT_DIR=${GIT_DIR:-$(dirname $0)}\n\nfor hook in ${GIT_DIR}/hooks/${hookname}.d/*; do\ntest -x \"${hook}\" && test -f \"${hook}\" || continue\n\"${hook}\" $1 $2 $3\nexitcodes=\"${exitcodes} $?\"\ndone\n\nfor i in ${exitcodes}; do\n[ ${i} -eq 0 ] || exit ${i}\ndone\n", setting.ScriptType),
fmt.Sprintf("#!/usr/bin/env %s\ndata=$(cat)\nexitcodes=\"\"\nhookname=$(basename $0)\nGIT_DIR=${GIT_DIR:-$(dirname $0)}\n\nfor hook in ${GIT_DIR}/hooks/${hookname}.d/*; do\ntest -x \"${hook}\" && test -f \"${hook}\" || continue\necho \"${data}\" | \"${hook}\"\nexitcodes=\"${exitcodes} $?\"\ndone\n\nfor i in ${exitcodes}; do\n[ ${i} -eq 0 ] || exit ${i}\ndone\n", setting.ScriptType),
}
giteaHookTpls = []string{
fmt.Sprintf("#!/usr/bin/env %s\n\"%s\" hook --config='%s' pre-receive\n", setting.ScriptType, setting.AppPath, setting.CustomConf),
fmt.Sprintf("#!/usr/bin/env %s\n\"%s\" hook --config='%s' update $1 $2 $3\n", setting.ScriptType, setting.AppPath, setting.CustomConf),
fmt.Sprintf("#!/usr/bin/env %s\n\"%s\" hook --config='%s' post-receive\n", setting.ScriptType, setting.AppPath, setting.CustomConf),
}
return
}
// CreateDelegateHooks creates all the hooks scripts for the repo
func CreateDelegateHooks(repoPath string) error {
return createDelegateHooks(repoPath)
@@ -975,21 +991,7 @@ func CreateDelegateHooks(repoPath string) error {
// createDelegateHooks creates all the hooks scripts for the repo
func createDelegateHooks(repoPath string) (err error) {
var (
hookNames = []string{"pre-receive", "update", "post-receive"}
hookTpls = []string{
fmt.Sprintf("#!/usr/bin/env %s\ndata=$(cat)\nexitcodes=\"\"\nhookname=$(basename $0)\nGIT_DIR=${GIT_DIR:-$(dirname $0)}\n\nfor hook in ${GIT_DIR}/hooks/${hookname}.d/*; do\ntest -x \"${hook}\" || continue\necho \"${data}\" | \"${hook}\"\nexitcodes=\"${exitcodes} $?\"\ndone\n\nfor i in ${exitcodes}; do\n[ ${i} -eq 0 ] || exit ${i}\ndone\n", setting.ScriptType),
fmt.Sprintf("#!/usr/bin/env %s\nexitcodes=\"\"\nhookname=$(basename $0)\nGIT_DIR=${GIT_DIR:-$(dirname $0)}\n\nfor hook in ${GIT_DIR}/hooks/${hookname}.d/*; do\ntest -x \"${hook}\" || continue\n\"${hook}\" $1 $2 $3\nexitcodes=\"${exitcodes} $?\"\ndone\n\nfor i in ${exitcodes}; do\n[ ${i} -eq 0 ] || exit ${i}\ndone\n", setting.ScriptType),
fmt.Sprintf("#!/usr/bin/env %s\ndata=$(cat)\nexitcodes=\"\"\nhookname=$(basename $0)\nGIT_DIR=${GIT_DIR:-$(dirname $0)}\n\nfor hook in ${GIT_DIR}/hooks/${hookname}.d/*; do\ntest -x \"${hook}\" || continue\necho \"${data}\" | \"${hook}\"\nexitcodes=\"${exitcodes} $?\"\ndone\n\nfor i in ${exitcodes}; do\n[ ${i} -eq 0 ] || exit ${i}\ndone\n", setting.ScriptType),
}
giteaHookTpls = []string{
fmt.Sprintf("#!/usr/bin/env %s\n\"%s\" hook --config='%s' pre-receive\n", setting.ScriptType, setting.AppPath, setting.CustomConf),
fmt.Sprintf("#!/usr/bin/env %s\n\"%s\" hook --config='%s' update $1 $2 $3\n", setting.ScriptType, setting.AppPath, setting.CustomConf),
fmt.Sprintf("#!/usr/bin/env %s\n\"%s\" hook --config='%s' post-receive\n", setting.ScriptType, setting.AppPath, setting.CustomConf),
}
)
hookNames, hookTpls, giteaHookTpls := getHookTemplates()
hookDir := filepath.Join(repoPath, "hooks")
for i, hookName := range hookNames {
@@ -1008,16 +1010,94 @@ func createDelegateHooks(repoPath string) (err error) {
return fmt.Errorf("write old hook file '%s': %v", oldHookPath, err)
}
if err = ensureExecutable(oldHookPath); err != nil {
return fmt.Errorf("Unable to set %s executable. Error %v", oldHookPath, err)
}
if err = os.Remove(newHookPath); err != nil && !os.IsNotExist(err) {
return fmt.Errorf("unable to pre-remove new hook file '%s' prior to rewriting: %v", newHookPath, err)
}
if err = ioutil.WriteFile(newHookPath, []byte(giteaHookTpls[i]), 0777); err != nil {
return fmt.Errorf("write new hook file '%s': %v", newHookPath, err)
}
if err = ensureExecutable(newHookPath); err != nil {
return fmt.Errorf("Unable to set %s executable. Error %v", oldHookPath, err)
}
}
return nil
}
func checkExecutable(filename string) bool {
fileInfo, err := os.Stat(filename)
if err != nil {
return false
}
return (fileInfo.Mode() & 0100) > 0
}
func ensureExecutable(filename string) error {
fileInfo, err := os.Stat(filename)
if err != nil {
return err
}
if (fileInfo.Mode() & 0100) > 0 {
return nil
}
mode := fileInfo.Mode() | 0100
return os.Chmod(filename, mode)
}
// CheckDelegateHooks checks the hooks scripts for the repo
func CheckDelegateHooks(repoPath string) ([]string, error) {
hookNames, hookTpls, giteaHookTpls := getHookTemplates()
hookDir := filepath.Join(repoPath, "hooks")
results := make([]string, 0, 10)
for i, hookName := range hookNames {
oldHookPath := filepath.Join(hookDir, hookName)
newHookPath := filepath.Join(hookDir, hookName+".d", "gitea")
cont := false
if !com.IsExist(oldHookPath) {
results = append(results, fmt.Sprintf("old hook file %s does not exist", oldHookPath))
cont = true
}
if !com.IsExist(oldHookPath + ".d") {
results = append(results, fmt.Sprintf("hooks directory %s does not exist", oldHookPath+".d"))
cont = true
}
if !com.IsExist(newHookPath) {
results = append(results, fmt.Sprintf("new hook file %s does not exist", newHookPath))
cont = true
}
if cont {
continue
}
contents, err := ioutil.ReadFile(oldHookPath)
if err != nil {
return results, err
}
if string(contents) != hookTpls[i] {
results = append(results, fmt.Sprintf("old hook file %s is out of date", oldHookPath))
}
if !checkExecutable(oldHookPath) {
results = append(results, fmt.Sprintf("old hook file %s is not executable", oldHookPath))
}
contents, err = ioutil.ReadFile(newHookPath)
if err != nil {
return results, err
}
if string(contents) != giteaHookTpls[i] {
results = append(results, fmt.Sprintf("new hook file %s is out of date", newHookPath))
}
if !checkExecutable(newHookPath) {
results = append(results, fmt.Sprintf("new hook file %s is not executable", newHookPath))
}
}
return results, nil
}
// initRepoCommit temporarily changes with work directory.
func initRepoCommit(tmpPath string, repo *Repository, u *User) (err error) {
@@ -1888,6 +1968,11 @@ func DeleteRepository(doer *User, uid, repoID int64) error {
return err
}
if _, err = sess.In("issue_id", deleteCond).
Delete(&TrackedTime{}); err != nil {
return err
}
attachments = attachments[:0]
if err = sess.Join("INNER", "issue", "issue.id = attachment.issue_id").
Where("issue.repo_id = ?", repoID).


@@ -15,6 +15,7 @@ import (
"encoding/pem"
"errors"
"fmt"
"io"
"io/ioutil"
"math/big"
"os"
@@ -687,14 +688,29 @@ func rewriteAllPublicKeys(e Engine) error {
}
}
err = e.Iterate(new(PublicKey), func(idx int, bean interface{}) (err error) {
_, err = t.WriteString((bean.(*PublicKey)).AuthorizedString())
if err := regeneratePublicKeys(e, t); err != nil {
return err
}
t.Close()
return os.Rename(tmpPath, fPath)
}
// RegeneratePublicKeys regenerates the authorized_keys file
func RegeneratePublicKeys(t io.Writer) error {
return regeneratePublicKeys(x, t)
}
func regeneratePublicKeys(e Engine, t io.Writer) error {
err := e.Iterate(new(PublicKey), func(idx int, bean interface{}) (err error) {
_, err = t.Write([]byte((bean.(*PublicKey)).AuthorizedString()))
return err
})
if err != nil {
return err
}
fPath := filepath.Join(setting.SSH.RootPath, "authorized_keys")
if com.IsExist(fPath) {
f, err := os.Open(fPath)
if err != nil {
@@ -707,7 +723,7 @@ func rewriteAllPublicKeys(e Engine) error {
scanner.Scan()
continue
}
_, err = t.WriteString(line + "\n")
_, err = t.Write([]byte(line + "\n"))
if err != nil {
f.Close()
return err
@@ -715,9 +731,7 @@ func rewriteAllPublicKeys(e Engine) error {
}
f.Close()
}
t.Close()
return os.Rename(tmpPath, fPath)
return nil
}
// ________ .__ ____ __.


@@ -212,6 +212,11 @@ func (ctx *APIContext) NotFound(objs ...interface{}) {
var message = "Not Found"
var errors []string
for _, obj := range objs {
// Ignore nil
if obj == nil {
continue
}
if err, ok := obj.(error); ok {
errors = append(errors, err.Error())
} else {


@@ -396,7 +396,7 @@ func RepoAssignment() macaron.Handler {
ctx.Data["RepoExternalIssuesLink"] = unit.ExternalTrackerConfig().ExternalTrackerURL
}
count, err := models.GetReleaseCountByRepoID(ctx.Repo.Repository.ID, models.FindReleasesOptions{
ctx.Data["NumReleases"], err = models.GetReleaseCountByRepoID(ctx.Repo.Repository.ID, models.FindReleasesOptions{
IncludeDrafts: false,
IncludeTags: true,
})
@@ -404,7 +404,6 @@ func RepoAssignment() macaron.Handler {
ctx.ServerError("GetReleaseCountByRepoID", err)
return
}
ctx.Repo.Repository.NumReleases = int(count)
ctx.Data["Title"] = owner.Name + "/" + repo.Name
ctx.Data["Repository"] = repo


@@ -9,6 +9,7 @@ import (
"fmt"
"net"
"net/url"
"path"
"regexp"
"strings"
)
@@ -38,7 +39,7 @@ func NewSubModuleFile(c *Commit, refURL, refID string) *SubModuleFile {
}
}
func getRefURL(refURL, urlPrefix, parentPath string) string {
func getRefURL(refURL, urlPrefix, repoFullName string) string {
if refURL == "" {
return ""
}
@@ -51,14 +52,14 @@ func getRefURL(refURL, urlPrefix, parentPath string) string {
urlPrefixHostname = prefixURL.Host
}
if strings.HasSuffix(urlPrefix, "/") {
urlPrefix = urlPrefix[:len(urlPrefix)-1]
}
// FIXME: Need to consider branch - which will require changes in modules/git/commit.go:GetSubModules
// Relative url prefix check (according to git submodule documentation)
if strings.HasPrefix(refURI, "./") || strings.HasPrefix(refURI, "../") {
// ...construct and return correct submodule url here...
idx := strings.Index(parentPath, "/src/")
if idx == -1 {
return refURI
}
return strings.TrimSuffix(urlPrefix, "/") + parentPath[:idx] + "/" + refURI
return urlPrefix + path.Clean(path.Join("/", repoFullName, refURI))
}
if !strings.Contains(refURI, "://") {
@@ -69,16 +70,16 @@ func getRefURL(refURL, urlPrefix, parentPath string) string {
m := match[0]
refHostname := m[2]
path := m[3]
pth := m[3]
if !strings.HasPrefix(path, "/") {
path = "/" + path
if !strings.HasPrefix(pth, "/") {
pth = "/" + pth
}
if urlPrefixHostname == refHostname {
return prefixURL.Scheme + "://" + urlPrefixHostname + path
return urlPrefix + path.Clean(path.Join("/", pth))
}
return "http://" + refHostname + path
return "http://" + refHostname + pth
}
}
@@ -97,7 +98,7 @@ func getRefURL(refURL, urlPrefix, parentPath string) string {
for _, scheme := range supportedSchemes {
if ref.Scheme == scheme {
if urlPrefixHostname == refHostname {
return prefixURL.Scheme + "://" + prefixURL.Host + ref.Path
return urlPrefix + path.Clean(path.Join("/", ref.Path))
} else if ref.Scheme == "http" || ref.Scheme == "https" {
if len(ref.User.Username()) > 0 {
return ref.Scheme + "://" + fmt.Sprintf("%v", ref.User) + "@" + ref.Host + ref.Path
@@ -113,8 +114,8 @@ func getRefURL(refURL, urlPrefix, parentPath string) string {
}
// RefURL guesses and returns reference URL.
func (sf *SubModuleFile) RefURL(urlPrefix string, parentPath string) string {
return getRefURL(sf.refURL, urlPrefix, parentPath)
func (sf *SubModuleFile) RefURL(urlPrefix string, repoFullName string) string {
return getRefURL(sf.refURL, urlPrefix, repoFullName)
}
// RefID returns reference ID.


@@ -17,21 +17,21 @@ func TestGetRefURL(t *testing.T) {
parentPath string
expect string
}{
{"git://github.com/user1/repo1", "/", "/", "http://github.com/user1/repo1"},
{"https://localhost/user1/repo1.git", "/", "/", "https://localhost/user1/repo1"},
{"http://localhost/user1/repo1.git", "/", "/", "http://localhost/user1/repo1"},
{"git@github.com:user1/repo1.git", "/", "/", "http://github.com/user1/repo1"},
{"ssh://git@git.zefie.net:2222/zefie/lge_g6_kernel_scripts.git", "/", "/", "http://git.zefie.net/zefie/lge_g6_kernel_scripts"},
{"git@git.zefie.net:2222/zefie/lge_g6_kernel_scripts.git", "/", "/", "http://git.zefie.net/2222/zefie/lge_g6_kernel_scripts"},
{"git@try.gitea.io:go-gitea/gitea", "https://try.gitea.io/go-gitea/gitea", "/", "https://try.gitea.io/go-gitea/gitea"},
{"ssh://git@try.gitea.io:9999/go-gitea/gitea", "https://try.gitea.io/go-gitea/gitea", "/", "https://try.gitea.io/go-gitea/gitea"},
{"git://git@try.gitea.io:9999/go-gitea/gitea", "https://try.gitea.io/go-gitea/log", "/", "https://try.gitea.io/go-gitea/gitea"},
{"ssh://git@127.0.0.1:9999/go-gitea/gitea", "https://127.0.0.1:3000/go-gitea/log", "/", "https://127.0.0.1:3000/go-gitea/gitea"},
{"https://gitea.com:3000/user1/repo1.git", "https://127.0.0.1:3000/go-gitea/gitea", "/", "https://gitea.com:3000/user1/repo1"},
{"https://username:password@github.com/username/repository.git", "/", "/", "https://username:password@github.com/username/repository"},
{"git://github.com/user1/repo1", "/", "user1/repo2", "http://github.com/user1/repo1"},
{"https://localhost/user1/repo1.git", "/", "user1/repo2", "https://localhost/user1/repo1"},
{"http://localhost/user1/repo1.git", "/", "owner/reponame", "http://localhost/user1/repo1"},
{"git@github.com:user1/repo1.git", "/", "owner/reponame", "http://github.com/user1/repo1"},
{"ssh://git@git.zefie.net:2222/zefie/lge_g6_kernel_scripts.git", "/", "zefie/lge_g6_kernel", "http://git.zefie.net/zefie/lge_g6_kernel_scripts"},
{"git@git.zefie.net:2222/zefie/lge_g6_kernel_scripts.git", "/", "zefie/lge_g6_kernel", "http://git.zefie.net/2222/zefie/lge_g6_kernel_scripts"},
{"git@try.gitea.io:go-gitea/gitea", "https://try.gitea.io/", "go-gitea/sdk", "https://try.gitea.io/go-gitea/gitea"},
{"ssh://git@try.gitea.io:9999/go-gitea/gitea", "https://try.gitea.io/", "go-gitea/sdk", "https://try.gitea.io/go-gitea/gitea"},
{"git://git@try.gitea.io:9999/go-gitea/gitea", "https://try.gitea.io/", "go-gitea/sdk", "https://try.gitea.io/go-gitea/gitea"},
{"ssh://git@127.0.0.1:9999/go-gitea/gitea", "https://127.0.0.1:3000/", "go-gitea/sdk", "https://127.0.0.1:3000/go-gitea/gitea"},
{"https://gitea.com:3000/user1/repo1.git", "https://127.0.0.1:3000/", "user/repo2", "https://gitea.com:3000/user1/repo1"},
{"https://username:password@github.com/username/repository.git", "/", "username/repository2", "https://username:password@github.com/username/repository"},
{"somethingbad", "https://127.0.0.1:3000/go-gitea/gitea", "/", ""},
{"git@localhost:user/repo", "https://localhost/user/repo2", "/", "https://localhost/user/repo"},
{"../path/to/repo.git/", "https://localhost/user/repo2/src/branch/master/test", "/", "../path/to/repo.git/"},
{"git@localhost:user/repo", "https://localhost/", "user2/repo1", "https://localhost/user/repo"},
{"../path/to/repo.git/", "https://localhost/", "user/repo2", "https://localhost/user/path/to/repo.git"},
}
for _, kase := range kases {


@@ -16,6 +16,7 @@ import (
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/migrations/base"
"code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/util"
"github.com/google/go-github/v24/github"
"golang.org/x/oauth2"
@@ -121,7 +122,7 @@ func (g *GithubDownloaderV3) sleep() {
timer := time.NewTimer(time.Until(g.rate.Reset.Time))
select {
case <-g.ctx.Done():
timer.Stop()
util.StopTimer(timer)
return
case <-timer.C:
}


@@ -124,6 +124,12 @@ func (r *indexerNotifier) NotifyPushCommits(pusher *models.User, repo *models.Re
}
}
func (r *indexerNotifier) NotifySyncPushCommits(pusher *models.User, repo *models.Repository, refName, oldCommitID, newCommitID string, commits *models.PushCommits) {
if setting.Indexer.RepoIndexerEnabled && refName == git.BranchPrefix+repo.DefaultBranch {
code_indexer.UpdateRepoIndexer(repo)
}
}
func (r *indexerNotifier) NotifyIssueChangeContent(doer *models.User, issue *models.Issue, oldContent string) {
issue_indexer.UpdateIssueIndexer(issue)
}


@@ -98,3 +98,8 @@ func fileFromDir(name string) ([]byte, error) {
return []byte{}, fmt.Errorf("Asset file does not exist: %s", name)
}
// IsDynamic will return false when using embedded data (-tags bindata)
func IsDynamic() bool {
return true
}


@@ -112,3 +112,8 @@ func fileFromDir(name string) ([]byte, error) {
return ioutil.ReadAll(f)
}
// IsDynamic will return false when using embedded data (-tags bindata)
func IsDynamic() bool {
return false
}


@@ -12,6 +12,7 @@ import (
"time"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/util"
)
// WrappedQueueType is the type for a wrapped delayed starting queue
@@ -77,7 +78,7 @@ func (q *delayedStarter) setInternal(atShutdown func(context.Context, func()), h
t := time.NewTimer(sleepTime)
select {
case <-ctx.Done():
t.Stop()
util.StopTimer(t)
case <-t.C:
}
}


@@ -10,6 +10,7 @@ import (
"time"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/util"
)
// WorkerPool takes
@@ -56,12 +57,7 @@ func (p *WorkerPool) pushBoost(data Data) {
p.lock.Unlock()
select {
case p.dataChan <- data:
if timer.Stop() {
select {
case <-timer.C:
default:
}
}
util.StopTimer(timer)
case <-timer.C:
p.lock.Lock()
if p.blockTimeout > ourTimeout || (p.numberOfWorkers > p.maxNumberOfWorkers && p.maxNumberOfWorkers >= 0) {
@@ -277,12 +273,7 @@ func (p *WorkerPool) doWork(ctx context.Context) {
timer := time.NewTimer(delay)
select {
case <-ctx.Done():
if timer.Stop() {
select {
case <-timer.C:
default:
}
}
util.StopTimer(timer)
if len(data) > 0 {
log.Trace("Handling: %d data, %v", len(data), data)
p.handle(data...)
@@ -290,12 +281,7 @@ func (p *WorkerPool) doWork(ctx context.Context) {
log.Trace("Worker shutting down")
return
case datum, ok := <-p.dataChan:
if timer.Stop() {
select {
case <-timer.C:
default:
}
}
util.StopTimer(timer)
if !ok {
// the dataChan has been closed - we should finish up:
if len(data) > 0 {

View File

@@ -159,7 +159,7 @@ func GetContents(repo *models.Repository, treePath, ref string, forList bool) (*
}
// Now populate the rest of the ContentsResponse based on entry type
if entry.IsRegular() {
if entry.IsRegular() || entry.IsExecutable() {
contentsResponse.Type = string(ContentTypeRegular)
if blobResponse, err := GetBlobBySHA(repo, entry.ID.String()); err != nil {
return nil, err

modules/util/timer.go Normal file (21 lines added)
View File

@@ -0,0 +1,21 @@
// Copyright 2020 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package util
import (
"time"
)
// StopTimer is a utility function to safely stop a time.Timer and clean its channel
func StopTimer(t *time.Timer) bool {
stopped := t.Stop()
if !stopped {
select {
case <-t.C:
default:
}
}
return stopped
}
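As context for the new helper above: it stops the timer and, when Stop reports that the timer already fired, drains the channel so a stale tick cannot be received later. A minimal, self-contained usage sketch (standalone Go, not taken from the Gitea code base; stopTimer below simply mirrors util.StopTimer) could look like this:

package main

import (
	"fmt"
	"time"
)

// stopTimer mirrors util.StopTimer: stop the timer and, if it had already
// fired, drain its channel so a later receive cannot pick up the stale tick.
func stopTimer(t *time.Timer) bool {
	stopped := t.Stop()
	if !stopped {
		select {
		case <-t.C:
		default:
		}
	}
	return stopped
}

func main() {
	work := make(chan string, 1)
	work <- "job"

	timer := time.NewTimer(50 * time.Millisecond)
	select {
	case job := <-work:
		// The timer is no longer needed; stop it and clear any pending tick
		// instead of leaving it to fire into an unread channel.
		stopTimer(timer)
		fmt.Println("handled:", job)
	case <-timer.C:
		fmt.Println("timed out")
	}
}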

View File

@@ -1062,6 +1062,7 @@ pulls.data_broken = This pull request is broken due to missing fork information.
pulls.files_conflicted = This pull request has changes conflicting with the target branch.
pulls.is_checking = "Merge conflict checking is in progress. Try again in few moments."
pulls.required_status_check_failed = Some required checks were not successful.
pulls.required_status_check_missing = Some required checks are missing.
pulls.required_status_check_administrator = As an administrator, you may still merge this pull request.
pulls.blocked_by_approvals = "This Pull Request doesn't have enough approvals yet. %d of %d approvals granted."
pulls.blocked_by_rejection = "This Pull Request has changes requested by an official reviewer."

View File

@@ -5,6 +5,7 @@
package repo
import (
"fmt"
"net/http"
"time"
@@ -289,7 +290,15 @@ func DeleteTime(ctx *context.APIContext) {
time, err := models.GetTrackedTimeByID(ctx.ParamsInt64(":id"))
if err != nil {
ctx.Error(500, "GetTrackedTimeByID", err)
if models.IsErrNotExist(err) {
ctx.NotFound(err)
return
}
ctx.Error(http.StatusInternalServerError, "GetTrackedTimeByID", err)
return
}
if time.Deleted {
ctx.NotFound(fmt.Errorf("tracked time [%d] already deleted", time.ID))
return
}

View File

@@ -329,8 +329,27 @@ func ServCommand(ctx *macaron.Context) {
results.RepoID = repo.ID
}
// Finally if we're trying to touch the wiki we should init it
if results.IsWiki {
// Ensure the wiki is enabled before we allow access to it
if _, err := repo.GetUnit(models.UnitTypeWiki); err != nil {
if models.IsErrUnitTypeNotExist(err) {
ctx.JSON(http.StatusForbidden, map[string]interface{}{
"results": results,
"type": "ErrForbidden",
"err": "repository wiki is disabled",
})
return
}
log.Error("Failed to get the wiki unit in %-v Error: %v", repo, err)
ctx.JSON(http.StatusInternalServerError, map[string]interface{}{
"results": results,
"type": "InternalServerError",
"err": fmt.Sprintf("Failed to get the wiki unit in %s/%s Error: %v", ownerName, repoName, err),
})
return
}
// Finally if we're trying to touch the wiki we should init it
if err = wiki_service.InitWiki(repo); err != nil {
log.Error("Failed to initialize the wiki in %-v Error: %v", repo, err)
ctx.JSON(http.StatusInternalServerError, map[string]interface{}{

View File

@@ -320,9 +320,6 @@ func PrepareCompareDiff(
compareInfo.Commits = models.ParseCommitsWithStatus(compareInfo.Commits, headRepo)
ctx.Data["Commits"] = compareInfo.Commits
ctx.Data["CommitCount"] = compareInfo.Commits.Len()
if ctx.Data["CommitCount"] == 0 {
ctx.Data["PageIsComparePull"] = false
}
if compareInfo.Commits.Len() == 1 {
c := compareInfo.Commits.Front().Value.(models.SignCommitWithStatuses)

View File

@@ -313,6 +313,19 @@ func HTTP(ctx *context.Context) {
}
}
if isWiki {
// Ensure the wiki is enabled before we allow access to it
if _, err := repo.GetUnit(models.UnitTypeWiki); err != nil {
if models.IsErrUnitTypeNotExist(err) {
ctx.HandleText(http.StatusForbidden, "repository wiki is disabled")
return
}
log.Error("Failed to get the wiki unit in %-v Error: %v", repo, err)
ctx.ServerError("GetUnit(UnitTypeWiki) for "+repo.FullName(), err)
return
}
}
environ = append(environ, models.ProtectedBranchRepoID+fmt.Sprintf("=%d", repo.ID))
w := ctx.Resp

View File

@@ -413,9 +413,7 @@ func PrepareViewPullInfo(ctx *context.Context, issue *models.Issue) *git.Compare
}
return false
}
state := pull_service.MergeRequiredContextsCommitStatus(commitStatuses, pull.ProtectedBranch.StatusCheckContexts)
ctx.Data["RequiredStatusCheckState"] = state
ctx.Data["IsRequiredStatusCheckSuccess"] = state.IsSuccess()
ctx.Data["RequiredStatusCheckState"] = pull_service.MergeRequiredContextsCommitStatus(commitStatuses, pull.ProtectedBranch.StatusCheckContexts)
}
ctx.Data["HeadBranchMovedOn"] = headBranchSha != sha

View File

@@ -668,6 +668,14 @@ func RegisterRoutes(m *macaron.Macaron) {
m.Post("/:username/:reponame/action/:action", reqSignIn, context.RepoAssignment(), context.UnitTypes(), repo.Action)
// Grouping for those endpoints not requiring authentication
m.Group("/:username/:reponame", func() {
m.Group("/milestone", func() {
m.Get("/:id", repo.MilestoneIssuesAndPulls)
}, reqRepoIssuesOrPullsReader, context.RepoRef())
}, context.RepoAssignment(), context.UnitTypes())
// Grouping for those endpoints that do require authentication
m.Group("/:username/:reponame", func() {
m.Group("/issues", func() {
m.Combo("/new").Get(context.RepoRef(), repo.NewIssue).
@@ -723,9 +731,6 @@ func RegisterRoutes(m *macaron.Macaron) {
m.Post("/:id/:action", repo.ChangeMilestonStatus)
m.Post("/delete", repo.DeleteMilestone)
}, context.RepoMustNotBeArchived(), reqRepoIssuesOrPullsWriter, context.RepoRef())
m.Group("/milestone", func() {
m.Get("/:id", repo.MilestoneIssuesAndPulls)
}, reqRepoIssuesOrPullsReader, context.RepoRef())
m.Combo("/compare/*", repo.MustBeNotEmpty, reqRepoCodeReader, repo.SetEditorconfigIfExists).
Get(repo.SetDiffViewStyle, repo.CompareDiff).
Post(context.RepoMustNotBeArchived(), reqRepoPullsReader, repo.MustAllowPulls, bindIgnErr(auth.CreateIssueForm{}), repo.CompareAndPullRequestPost)

View File

@@ -211,17 +211,34 @@ func Merge(pr *models.PullRequest, doer *models.User, baseGitRepo *git.Repositor
if err := git.NewCommand("rebase", baseBranch).RunInDirPipeline(tmpBasePath, &outbuf, &errbuf); err != nil {
// Rebase will leave a REBASE_HEAD file in .git if there is a conflict
if _, statErr := os.Stat(filepath.Join(tmpBasePath, ".git", "REBASE_HEAD")); statErr == nil {
// The original commit SHA1 that is failing will be in .git/rebase-apply/original-commit
commitShaBytes, readErr := ioutil.ReadFile(filepath.Join(tmpBasePath, ".git", "rebase-apply", "original-commit"))
if readErr != nil {
// Abandon this attempt to handle the error
var commitSha string
ok := false
failingCommitPaths := []string{
filepath.Join(tmpBasePath, ".git", "rebase-apply", "original-commit"), // Git < 2.26
filepath.Join(tmpBasePath, ".git", "rebase-merge", "stopped-sha"), // Git >= 2.26
}
for _, failingCommitPath := range failingCommitPaths {
if _, statErr := os.Stat(filepath.Join(failingCommitPath)); statErr == nil {
commitShaBytes, readErr := ioutil.ReadFile(filepath.Join(failingCommitPath))
if readErr != nil {
// Abandon this attempt to handle the error
log.Error("git rebase staging on to base [%s:%s -> %s:%s]: %v\n%s\n%s", pr.HeadRepo.FullName(), pr.HeadBranch, pr.BaseRepo.FullName(), pr.BaseBranch, err, outbuf.String(), errbuf.String())
return fmt.Errorf("git rebase staging on to base [%s:%s -> %s:%s]: %v\n%s\n%s", pr.HeadRepo.FullName(), pr.HeadBranch, pr.BaseRepo.FullName(), pr.BaseBranch, err, outbuf.String(), errbuf.String())
}
commitSha = strings.TrimSpace(string(commitShaBytes))
ok = true
break
}
}
if !ok {
log.Error("Unable to determine failing commit sha for this rebase message. Cannot cast as models.ErrRebaseConflicts.")
log.Error("git rebase staging on to base [%s:%s -> %s:%s]: %v\n%s\n%s", pr.HeadRepo.FullName(), pr.HeadBranch, pr.BaseRepo.FullName(), pr.BaseBranch, err, outbuf.String(), errbuf.String())
return fmt.Errorf("git rebase staging on to base [%s:%s -> %s:%s]: %v\n%s\n%s", pr.HeadRepo.FullName(), pr.HeadBranch, pr.BaseRepo.FullName(), pr.BaseBranch, err, outbuf.String(), errbuf.String())
}
log.Debug("RebaseConflict at %s [%s:%s -> %s:%s]: %v\n%s\n%s", strings.TrimSpace(string(commitShaBytes)), pr.HeadRepo.FullName(), pr.HeadBranch, pr.BaseRepo.FullName(), pr.BaseBranch, err, outbuf.String(), errbuf.String())
log.Debug("RebaseConflict at %s [%s:%s -> %s:%s]: %v\n%s\n%s", commitSha, pr.HeadRepo.FullName(), pr.HeadBranch, pr.BaseRepo.FullName(), pr.BaseBranch, err, outbuf.String(), errbuf.String())
return models.ErrRebaseConflicts{
Style: mergeStyle,
CommitSHA: strings.TrimSpace(string(commitShaBytes)),
CommitSHA: commitSha,
StdOut: outbuf.String(),
StdErr: errbuf.String(),
Err: err,
@@ -268,6 +285,10 @@ func Merge(pr *models.PullRequest, doer *models.User, baseGitRepo *git.Repositor
return err
}
if err = pr.Issue.LoadPoster(); err != nil {
log.Error("LoadPoster: %v", err)
return fmt.Errorf("LoadPoster: %v", err)
}
sig := pr.Issue.Poster.NewGitSig()
if signArg == "" {
if err := git.NewCommand("commit", fmt.Sprintf("--author='%s <%s>'", sig.Name, sig.Email), "-m", message).RunInDirTimeoutEnvPipeline(env, -1, tmpBasePath, &outbuf, &errbuf); err != nil {

View File

@@ -31,32 +31,19 @@ func DownloadPatch(pr *models.PullRequest, w io.Writer, patch bool) error {
// DownloadDiffOrPatch will write the patch for the pr to the writer
func DownloadDiffOrPatch(pr *models.PullRequest, w io.Writer, patch bool) error {
// Clone base repo.
tmpBasePath, err := createTemporaryRepo(pr)
if err != nil {
log.Error("CreateTemporaryPath: %v", err)
if err := pr.LoadBaseRepo(); err != nil {
log.Error("Unable to load base repository ID %d for pr #%d [%d]", pr.BaseRepoID, pr.Index, pr.ID)
return err
}
defer func() {
if err := models.RemoveTemporaryPath(tmpBasePath); err != nil {
log.Error("DownloadDiff: RemoveTemporaryPath: %s", err)
}
}()
gitRepo, err := git.OpenRepository(tmpBasePath)
gitRepo, err := git.OpenRepository(pr.BaseRepo.RepoPath())
if err != nil {
return fmt.Errorf("OpenRepository: %v", err)
}
defer gitRepo.Close()
pr.MergeBase, err = git.NewCommand("merge-base", "--", "base", "tracking").RunInDir(tmpBasePath)
if err != nil {
pr.MergeBase = "base"
}
pr.MergeBase = strings.TrimSpace(pr.MergeBase)
if err := gitRepo.GetDiffOrPatch(pr.MergeBase, "tracking", w, patch); err != nil {
log.Error("Unable to get patch file from %s to %s in %s/%s Error: %v", pr.MergeBase, pr.HeadBranch, pr.BaseRepo.MustOwner().Name, pr.BaseRepo.Name, err)
return fmt.Errorf("Unable to get patch file from %s to %s in %s/%s Error: %v", pr.MergeBase, pr.HeadBranch, pr.BaseRepo.MustOwner().Name, pr.BaseRepo.Name, err)
if err := gitRepo.GetDiffOrPatch(pr.MergeBase, pr.GetGitRefName(), w, patch); err != nil {
log.Error("Unable to get patch file from %s to %s in %s Error: %v", pr.MergeBase, pr.HeadBranch, pr.BaseRepo.FullName(), err)
return fmt.Errorf("Unable to get patch file from %s to %s in %s Error: %v", pr.MergeBase, pr.HeadBranch, pr.BaseRepo.FullName(), err)
}
return nil
}

View File

@@ -59,7 +59,7 @@
{{if .IsNothingToCompare}}
<div class="ui segment">{{.i18n.Tr "repo.pulls.nothing_to_compare"}}</div>
{{else if .PageIsComparePull}}
{{else if and .PageIsComparePull (gt .CommitCount 0)}}
{{if .HasPullRequest}}
<div class="ui segment">
{{.i18n.Tr "repo.pulls.has_pull_request" $.RepoLink $.RepoRelPath .PullRequest.Index | Safe}}

View File

@@ -86,7 +86,7 @@
{{if and (.Permission.CanRead $.UnitTypeReleases) (not .IsEmptyRepo) }}
<a class="{{if .PageIsReleaseList}}active{{end}} item" href="{{.RepoLink}}/releases">
<i class="octicon octicon-tag"></i> {{.i18n.Tr "repo.releases"}} <span class="ui {{if not .Repository.NumReleases}}gray{{else}}blue{{end}} small label">{{.Repository.NumReleases}}</span>
<i class="octicon octicon-tag"></i> {{.i18n.Tr "repo.releases"}} <span class="ui {{if not .NumReleases}}gray{{else}}blue{{end}} small label">{{.NumReleases}}</span>
</a>
{{end}}

View File

@@ -43,7 +43,7 @@
{{else if .IsBlockedByApprovals}}red
{{else if .IsBlockedByRejection}}red
{{else if and .EnableStatusCheck (or .RequiredStatusCheckState.IsFailure .RequiredStatusCheckState.IsError)}}red
{{else if and .EnableStatusCheck (or .RequiredStatusCheckState.IsPending .RequiredStatusCheckState.IsWarning)}}yellow
{{else if and .EnableStatusCheck (or (not $.LatestCommitStatus) .RequiredStatusCheckState.IsPending .RequiredStatusCheckState.IsWarning)}}yellow
{{else if .Issue.PullRequest.IsChecking}}yellow
{{else if .Issue.PullRequest.CanAutoMerge}}green
{{else}}red{{end}}"><span class="mega-octicon octicon-git-merge"></span></a>
@@ -112,7 +112,7 @@
<span class="octicon octicon-sync"></span>
{{$.i18n.Tr "repo.pulls.is_checking"}}
</div>
{{else if and (not .Issue.PullRequest.CanAutoMerge) .EnableStatusCheck (not .IsRequiredStatusCheckSuccess)}}
{{else if and (not .Issue.PullRequest.CanAutoMerge) .EnableStatusCheck (not .RequiredStatusCheckState.IsSuccess)}}
<div class="item text red">
<span class="octicon octicon-x"></span>
{{$.i18n.Tr "repo.pulls.required_status_check_failed"}}
@@ -123,9 +123,14 @@
<span class="octicon octicon-x"></span>
{{$.i18n.Tr "repo.pulls.required_status_check_failed"}}
</div>
{{else if and .EnableStatusCheck (not .RequiredStatusCheckState.IsSuccess)}}
<div class="item text red">
<span class="octicon octicon-x"></span>
{{$.i18n.Tr "repo.pulls.required_status_check_missing"}}
</div>
{{end}}
{{if or $.IsRepoAdmin (not .EnableStatusCheck) .IsRequiredStatusCheckSuccess}}
{{if and $.IsRepoAdmin .EnableStatusCheck (not .IsRequiredStatusCheckSuccess)}}
{{if or $.IsRepoAdmin (not .EnableStatusCheck) .RequiredStatusCheckState.IsSuccess}}
{{if and $.IsRepoAdmin .EnableStatusCheck (not .RequiredStatusCheckState.IsSuccess)}}
<div class="item text yellow">
<span class="octicon octicon-primitive-dot"></span>
{{$.i18n.Tr "repo.pulls.required_status_check_administrator"}}

View File

@@ -64,7 +64,7 @@
<td>
<span class="truncate">
<span class="octicon octicon-file-submodule"></span>
{{$refURL := $commit.RefURL AppUrl $.BranchLink}}
{{$refURL := $commit.RefURL AppUrl $.Repository.FullName}}
{{if $refURL}}
<a href="{{$refURL}}">{{$entry.Name}}</a> @ <a href="{{$refURL}}/commit/{{$commit.RefID}}">{{ShortSha $commit.RefID}}</a>
{{else}}

View File

@@ -1,7 +1,7 @@
{{template "base/head" .}}
{{if .IsRepo}}<div class="repository">{{template "repo/header" .}}</div>{{end}}
<div class="ui container center">
<p style="margin-top: 100px"><img src="{{StaticUrlPrefix}}/img/404.png" alt="404"/></p>
<p style="margin-top: 100px"><img class="ui centered image" src="{{StaticUrlPrefix}}/img/404.png" alt="404"/></p>
<div class="ui divider"></div>
<br>
{{if .ShowFooterVersion}}<p>Application Version: {{AppVer}}</p>{{end}}

View File

@@ -1,6 +1,6 @@
{{template "base/head" .}}
<div class="ui container center">
<p style="margin-top: 100px"><img src="{{StaticUrlPrefix}}/img/500.png" alt="500"/></p>
<p style="margin-top: 100px"><img class="ui centered image" src="{{StaticUrlPrefix}}/img/500.png" alt="500"/></p>
<div class="ui divider"></div>
<br>
{{if .ErrorMsg}}<p>An error has occurred :</p>

View File

@@ -92,7 +92,7 @@
<form action="{{AppSubUrl}}/user/settings/account/email" method="post">
{{$.CsrfTokenHtml}}
<input name="_method" type="hidden" value="SENDACTIVATION">
<input name="id" type="hidden" value="{{if .IsPrimary}}PRIMARY{{else}}}.ID{{end}}">
<input name="id" type="hidden" value="{{if .IsPrimary}}PRIMARY{{else}}{{.ID}}{{end}}">
{{if $.ActivationsPending}}
<button disabled class="ui blue tiny button">{{$.i18n.Tr "settings.activations_pending"}}</button>
{{else}}

View File

@@ -1098,6 +1098,7 @@ function initRepository() {
$repoComparePull.find('button.show-form').on('click', function (e) {
e.preventDefault();
$repoComparePull.find('.pullrequest-form').show();
autoSimpleMDE.codemirror.refresh();
$(this).parent().hide();
});
}