Julia is a language meant for working with data. In that regard, we would like to be able to perform analyses like linear regression and hypothesis testing. We would also like to perform more complex operations, including building neural networks and running graph and optimization algorithms. Thankfully, there are packages that allow us to easily perform these tasks.
In this presentation, I will focus on three different packages: MLJ.jl, Graphs.jl, and JuMP.jl.
MLJ.jl is a high-level machine learning framework, while Graphs.jl is meant for network analysis. JuMP.jl is a mathematical optimization framework.
MLJ is a machine learning package similar to Python’s scikit-learn in its breadth and flexibility across different kinds of analyses.
There are numerous models we can configure and implement for many different machine learning problems, as shown by listing out the possible models.
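As a quick illustration (a sketch, assuming MLJ and its model registry are installed), `models()` returns the full list of registered models, and passing a predicate filters that list:

```julia
using MLJ  # assumes MLJ is installed in the active environment

# List every model registered with MLJ
all_models = models()
println("Number of registered models: ", length(all_models))

# Filter the registry to models whose name contains "Regressor"
regressors = models(m -> occursin("Regressor", m.name))
println("Number of regressors: ", length(regressors))
```

`models(matching(X, y))` goes one step further and keeps only models whose input/target requirements are compatible with a given dataset.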
One example that we can look at is implementing linear regression on the Boston housing dataset. We would like to model the prices of properties as functions of the other variables.
We start by loading the dataset and then dividing the data into training and testing data. This can be done without any external tools.
using MLJ, DataFrames, MLJLinearModels, StatsBase

# Load dataset
X, y = @load_boston

# Convert X to a DataFrame
X = DataFrame(X)

# Split data into training (80%) and test (20%) sets
train, test = partition(eachindex(y), 0.8, shuffle=true)
X_train, X_test = X[train, :], X[test, :]
y_train, y_test = y[train], y[test]
Now, we load an MLJ model. Specifically, we will use LinearRegressor, which is a simple linear regression model. MLJ offers other alternatives, including RidgeRegressor and LassoRegressor, as well as linear model classifiers like LogisticClassifier.
# Load Linear Regression model
@load LinearRegressor pkg=MLJLinearModels
model = LinearRegressor()

# Create a machine (model + data)
mach = machine(model, X_train, y_train)

# Train the model
fit!(mach)
[ Info: For silent loading, specify `verbosity=0`.
After fitting the model, we can now make predictions to see how accurate our model is! We can use the StatsBase package to calculate metrics like R-squared and MAE.
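With the fitted machine from the cell above, the prediction and evaluation step might look like the following (a sketch; it assumes the `mach`, `X_test`, and `y_test` objects defined earlier, and computes R-squared and MAE directly from their definitions rather than relying on a particular StatsBase export):

```julia
using Statistics  # for mean

# Predict on the held-out test set (assumes `mach`, `X_test`, `y_test` from above)
y_pred = predict(mach, X_test)

# Mean absolute error
mae = mean(abs.(y_test .- y_pred))

# R-squared: 1 - SS_res / SS_tot
ss_res = sum((y_test .- y_pred) .^ 2)
ss_tot = sum((y_test .- mean(y_test)) .^ 2)
r2 = 1 - ss_res / ss_tot

println("MAE = ", mae)
println("R² = ", r2)
```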
On the other hand, if you are interested in solving problems with deep learning techniques, Flux.jl is a popular option. Flux is a deep learning framework that shares much of MLJ’s functionality but focuses on deep learning and gives the user low-level control over the computation.
MLJ is more suited towards high-level implementations. It supports diverse approaches to problems, and allows users to pull together and integrate multiple models, without requiring low-level mastery. However, it does not inherently support GPU acceleration, which makes it slower when running large tasks.
The last two packages I want to go over are Graphs.jl and JuMP.jl. While MLJ and Flux are meant for machine learning, there may be times when a user is interested in implementing optimization algorithms, like Dijkstra’s or Simplex. These algorithms are of a different flavor: they are not meant to solve prediction or inference problems, but are often useful for finding optimal paths or allocations for a specific task.
We can see how Graphs.jl can be useful through an implementation of Dijkstra’s Algorithm.
Below is a raw implementation of the algorithm on a simple graph, with no external packages being used.
function dijkstra(graph::Dict{Int, Dict{Int, Int}}, start::Int)
    # Initialize distances to infinity, except for the start node
    dist = Dict(k => Inf for k in keys(graph))
    dist[start] = 0

    # Track visited nodes
    visited = Dict(k => false for k in keys(graph))

    # Track shortest paths
    prev = Dict{Int, Union{Nothing, Int}}(k => nothing for k in keys(graph))

    while true
        # Select the unvisited node with the smallest distance
        u = nothing
        for node in keys(graph)
            if !visited[node] && (u === nothing || dist[node] < dist[u])
                u = node
            end
        end

        # Stop if there are no reachable unvisited nodes
        if u === nothing || dist[u] == Inf
            break
        end

        # Mark node as visited
        visited[u] = true

        # Update distances to neighbors
        for (v, weight) in graph[u]
            alt = dist[u] + weight
            if alt < dist[v]
                dist[v] = alt
                prev[v] = u
            end
        end
    end

    return dist, prev
end

# Function to reconstruct the shortest path
function shortest_path(prev::Dict{Int, Union{Nothing, Int}}, target::Int)
    path = []
    while target !== nothing
        pushfirst!(path, target)
        target = prev[target]
    end
    return path
end

# Define a weighted graph using a properly typed dictionary
graph = Dict{Int, Dict{Int, Int}}(
    1 => Dict(2 => 4, 3 => 1),
    2 => Dict(4 => 1),
    3 => Dict(2 => 2, 4 => 5),
    4 => Dict(5 => 3),
    5 => Dict()
)

# Run Dijkstra's algorithm from node 1
distances, predecessors = dijkstra(graph, 1)

# Print shortest distance from node 1 to 5
println("Shortest distance from 1 to 5: ", distances[5])

# Get the shortest path from node 1 to 5
path = shortest_path(predecessors, 5)
println("Shortest path from 1 to 5: ", path)
Shortest distance from 1 to 5: 7.0
Shortest path from 1 to 5: Any[1, 3, 2, 4, 5]
The code is quite bulky and requires a lot of work to develop. By using the Graphs package, we can implement a shorter, more elegant solution to the same problem.
using Pkg
Pkg.add("Graphs")
using Graphs
Resolving package versions...
No Changes to `C:\Users\joshu\.julia\environments\v1.11\Project.toml`
No Changes to `C:\Users\joshu\.julia\environments\v1.11\Manifest.toml`
# Create a directed graph with 5 nodes
g = SimpleDiGraph(5)

# Define weighted edges (named edge_list to avoid shadowing Graphs.edges)
edge_list = [
    (1, 2, 4),
    (1, 3, 1),
    (3, 2, 2),
    (3, 4, 5),
    (2, 4, 1),
    (4, 5, 3)
]

# Add edges to the graph
for (u, v, _) in edge_list
    add_edge!(g, u, v)
end

# Create a weight matrix (initialize with Inf)
weights = fill(Inf, nv(g), nv(g))

# Assign weights to the adjacency matrix
for (u, v, w) in edge_list
    weights[u, v] = w
end

# Run Dijkstra's Algorithm from node 1
result = dijkstra_shortest_paths(g, 1, weights)

# Get the shortest path distance to node 5
println("Shortest distance from node 1 to 5: ", result.dists[5])

# Retrieve the shortest path to node 5 by walking the parents array
function get_path(result, target)
    path = []
    while target != 0
        pushfirst!(path, target)
        target = result.parents[target]
    end
    return path
end

best_path = get_path(result, 5)
println("Shortest path from node 1 to 5: ", best_path)
Shortest distance from node 1 to 5: 7.0
Shortest path from node 1 to 5: Any[1, 3, 2, 4, 5]
We can see that the Graphs package is quite useful for representing non-numeric objects. Networks are the foundation of many important problems, and combining them with Julia’s numerical computing power can help solve them.
Finally, we will go over a mathematical optimization framework called JuMP.jl, as well as a mathematical solver package called Gurobi. Mathematical optimization is at the core of many problems in portfolio optimization, control theory, and various other fields. Being able to solve optimization problems quickly carries a lot of importance in areas like supply chain logistics and asset allocation.
JuMP works within Julia but acts like its own language. By defining objects like objective functions and constraints, JuMP becomes a versatile tool for constructing and solving problems like Linear Programs, Quadratic Programs, and Mixed Integer Optimization problems.
On the flip side, solvers like Gurobi are the workhorses for solving these problems. Gurobi in particular is widely regarded as one of the fastest commercial solvers, owing to its ability to adapt its solving strategies to each problem, as well as the extreme level of care put into optimizing each process through parallel computing and fast code.
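The code cell that produced the following Gurobi log did not survive the export. Judging from the log (2 rows, 2 columns, objective coefficients in [4, 5], optimum 21 at x1 = 3, x2 = 1.5), it was most likely the classic textbook LP below; treat this as a hypothetical reconstruction, not the original cell:

```julia
using JuMP, Gurobi  # hypothetical reconstruction of the missing cell

model = JuMP.Model(Gurobi.Optimizer)

@variable(model, x1 >= 0)
@variable(model, x2 >= 0)

@objective(model, Max, 5x1 + 4x2)

@constraint(model, 6x1 + 4x2 <= 24)
@constraint(model, x1 + 2x2 <= 6)

optimize!(model)

println("Optimal x1 = ", value(x1))
println("Optimal x2 = ", value(x2))
println("Optimal objective value = ", objective_value(model))
```

At the reported optimum both constraints are tight: 6(3) + 4(1.5) = 24 and 3 + 2(1.5) = 6, giving the objective value 5(3) + 4(1.5) = 21.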
Gurobi Optimizer version 12.0.1 build v12.0.1rc0 (win64 - Windows 11.0 (26100.2))
CPU model: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 2 rows, 2 columns and 4 nonzeros
Model fingerprint: 0x9f631cdb
Coefficient statistics:
Matrix range [1e+00, 6e+00]
Objective range [4e+00, 5e+00]
Bounds range [0e+00, 0e+00]
RHS range [6e+00, 2e+01]
Presolve time: 0.01s
Presolved: 2 rows, 2 columns, 4 nonzeros
Iteration Objective Primal Inf. Dual Inf. Time
0 9.0000000e+30 2.750000e+30 9.000000e+00 0s
2 2.1000000e+01 0.000000e+00 0.000000e+00 0s
Solved in 2 iterations and 0.02 seconds (0.00 work units)
Optimal objective 2.100000000e+01
User-callback calls 47, time in user-callback 0.00 sec
Optimal x1 = 3.0
Optimal x2 = 1.5
Optimal objective value = 21.0
JuMP can also be used to solve other classes of optimization problems. For example, let us consider an MIO problem of the form:
\[
\begin{aligned}
\max_{x_1, x_2} \quad & 2x_1 + 3x_2 \\
\text{s.t.} \quad & 4x_1 + 3x_2 \le 12 \\
& 2x_1 + x_2 \le 6 \\
& x_1, x_2 \ge 0 \\
& x_1, x_2 \in \mathbb{Z}
\end{aligned}
\]
using JuMP, Gurobi

model = JuMP.Model(Gurobi.Optimizer)

@variable(model, x1 >= 0, Int)  # Must specify as Int
@variable(model, x2 >= 0, Int)  # Must specify as Int

@objective(model, Max, 2x1 + 3x2)

@constraint(model, 4x1 + 3x2 <= 12)
@constraint(model, 2x1 + x2 <= 6)

optimize!(model)

println("Optimal x1 = ", value(x1))
println("Optimal x2 = ", value(x2))
println("Optimal objective value = ", objective_value(model))
Set parameter Username
Set parameter LicenseID to value 2630108
Academic license - for non-commercial use only - expires 2026-03-02
Gurobi Optimizer version 12.0.1 build v12.0.1rc0 (win64 - Windows 11.0 (26100.2))
CPU model: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 2 rows, 2 columns and 4 nonzeros
Model fingerprint: 0xb17432a7
Variable types: 0 continuous, 2 integer (0 binary)
Coefficient statistics:
Matrix range [1e+00, 4e+00]
Objective range [2e+00, 3e+00]
Bounds range [0e+00, 0e+00]
RHS range [6e+00, 1e+01]
Found heuristic solution: objective 6.0000000
Presolve removed 2 rows and 2 columns
Presolve time: 0.02s
Presolve: All rows and columns removed
Explored 0 nodes (0 simplex iterations) in 0.03 seconds (0.00 work units)
Thread count was 1 (of 8 available processors)
Solution count 2: 12 6
Optimal solution found (tolerance 1.00e-04)
Best objective 1.200000000000e+01, best bound 1.200000000000e+01, gap 0.0000%
User-callback calls 198, time in user-callback 0.00 sec
Optimal x1 = -0.0
Optimal x2 = 4.0
Optimal objective value = 12.0
We can try a larger problem. MIO is an especially difficult class to solve, and as the number of variables and constraints grows, solve times can grow rapidly. JuMP and Gurobi offer functionalities useful for working around potentially nasty problems: we can adjust the number of threads being used, and we can tell the solver that we are willing to accept a certain optimality gap in order to speed up solving.
using JuMP, Gurobi, Random

Random.seed!(42)

# Create the model using Gurobi
model = JuMP.Model(Gurobi.Optimizer)

# Problem size parameters
num_int_vars = 300      # Integer variables
num_bin_vars = 150      # Binary variables
num_constraints = 500   # Total constraints

# Generate random coefficients ensuring feasibility & boundedness
c = rand(10:100, num_int_vars)                 # Objective coefficients for x
d = rand(5:50, num_bin_vars)                   # Objective coefficients for y
e = rand(1:5, num_int_vars)                    # Small penalty to prevent unbounded solutions
a = rand(1:5, num_constraints, num_int_vars)   # Constraint coefficients for x (smaller values)
b = rand(1:3, num_constraints, num_bin_vars)   # Constraint coefficients for y (smaller values)
k = sum(a, dims=2)[:, 1] * 50 + sum(b, dims=2)[:, 1] * 1  # Ensures feasibility

# Define integer variables (1 ≤ x_i ≤ 100, x_i ∈ ℤ)
@variable(model, 1 <= x[1:num_int_vars] <= 100, Int)

# Define binary variables (y_j ∈ {0,1})
@variable(model, y[1:num_bin_vars], Bin)

# Define the objective function (maximize Z)
@objective(model, Max,
    sum(c[i] * x[i] for i in 1:num_int_vars) +
    sum(d[j] * y[j] for j in 1:num_bin_vars) -
    sum(e[i] * x[i]^2 for i in 1:num_int_vars)  # Small quadratic penalty to ensure boundedness
)

# Add constraints ensuring feasibility
# (note: the generator's `j` shadows the constraint index, so b is indexed on its diagonal,
# exactly as in the run that produced the log below)
for j in 1:num_constraints
    @constraint(model,
        sum(a[j, i] * x[i] for i in 1:num_int_vars) +
        sum(b[j, j] * y[j] for j in 1:num_bin_vars) <= k[j]
    )
end

# Increase solver difficulty while keeping it feasible
set_optimizer_attribute(model, "TimeLimit", 120)  # Give Gurobi 2 minutes
set_optimizer_attribute(model, "MIPGap", 0.02)    # Allow 2% optimality gap
# set_optimizer_attribute(model, "Threads", 1)    # Single-threaded (slows down solving)
# set_optimizer_attribute(model, "Presolve", 2)   # Enable presolve for better performance
# set_optimizer_attribute(model, "Cuts", 1)       # Allow cuts to help find solutions

# Solve the problem and track time
@time optimize!(model)

println("Optimal Objective Value: ", objective_value(model))
println("First 10 integer variable values: ", [value(x[i]) for i in 1:10])
println("First 10 binary variable values: ", [value(y[j]) for j in 1:10])
Set parameter TimeLimit to value 120
Set parameter MIPGap to value 0.02
Set parameter MIPGap to value 0.02
Set parameter TimeLimit to value 120
Gurobi Optimizer version 12.0.1 build v12.0.1rc0 (win64 - Windows 11.0 (26100.2))
CPU model: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Non-default parameters:
TimeLimit 120
MIPGap 0.02
Optimize a model with 500 rows, 450 columns and 225000 nonzeros
Model fingerprint: 0xe3ee7d39
Model has 300 quadratic objective terms
Variable types: 0 continuous, 450 integer (150 binary)
Coefficient statistics:
Matrix range [1e+00, 5e+00]
Objective range [5e+00, 1e+02]
QObjective range [2e+00, 1e+01]
Bounds range [1e+00, 1e+02]
RHS range [4e+04, 5e+04]
Found heuristic solution: objective 15690.000000
Presolve added 1 rows and 0 columns
Presolve removed 0 rows and 58 columns
Presolve time: 0.42s
Presolved: 501 rows, 392 columns, 150592 nonzeros
Presolved model has 300 quadratic objective terms
Variable types: 0 continuous, 392 integer (50 binary)
Found heuristic solution: objective 127175.00000
Root relaxation: objective 1.314432e+05, 95 iterations, 0.03 seconds (0.01 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 131443.208 0 229 127175.000 131361.000 3.29% - 0s
H 0 0 131192.00000 131361.000 0.13% - 0s
Explored 1 nodes (95 simplex iterations) in 0.60 seconds (0.16 work units)
Thread count was 8 (of 8 available processors)
Solution count 3: 131192 127175 15690
Optimal solution found (tolerance 2.00e-02)
Best objective 1.311920000000e+05, best bound 1.313610000000e+05, gap 0.1288%
User-callback calls 248, time in user-callback 0.00 sec
0.862398 seconds (67.22 k allocations: 13.035 MiB, 25.20% compilation time)
Optimal Objective Value: 131192.0
First 10 integer variable values: [12.0, 13.0, 14.0, 13.0, 36.0, 7.0, 33.0, 7.0, 26.0, 5.0]
First 10 binary variable values: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
Finally, let’s look at an application of JuMP to a more realistic problem: the Traveling Salesman Problem (TSP), where we want to find the shortest route that visits every node exactly once and returns to the start. This problem is notoriously difficult because the number of candidate tours grows factorially with the number of cities.
using JuMP, Gurobi, Plots, Random

# Function to generate random city coordinates
function generate_cities(n, seed=123)
    Random.seed!(seed)
    return [(rand(), rand()) for _ in 1:n]
end

# Compute Euclidean distance matrix
function compute_distance_matrix(cities)
    n = length(cities)
    dist_matrix = zeros(n, n)
    for i in 1:n, j in 1:n
        dist_matrix[i, j] = hypot(cities[i][1] - cities[j][1], cities[i][2] - cities[j][2])
    end
    return dist_matrix
end

# Solve TSP using JuMP and Gurobi
function solve_tsp(dist_matrix)
    n = size(dist_matrix, 1)
    model = JuMP.Model(Gurobi.Optimizer)

    @variable(model, x[1:n, 1:n], Bin)

    # Objective: Minimize travel distance
    @objective(model, Min, sum(dist_matrix[i, j] * x[i, j] for i in 1:n, j in 1:n))

    # Constraints: Each city must be entered and exited exactly once
    @constraint(model, [i in 1:n], sum(x[i, j] for j in 1:n if i != j) == 1)
    @constraint(model, [j in 1:n], sum(x[i, j] for i in 1:n if i != j) == 1)

    # Subtour elimination (MTZ formulation)
    @variable(model, u[2:n] >= 0)
    @constraint(model, [i in 2:n, j in 2:n; i != j], u[i] - u[j] + n * x[i, j] ≤ n - 1)

    optimize!(model)

    if termination_status(model) == MOI.OPTIMAL
        println("Optimal tour found with cost: ", objective_value(model))
        tour = Dict{Int, Int}()
        for i in 1:n, j in 1:n
            if value(x[i, j]) > 0.5
                tour[i] = j
            end
        end
        return tour
    else
        println("No optimal solution found.")
        return nothing
    end
end

# Function to extract the ordered tour from the dictionary
function get_tour_sequence(tour)
    if tour === nothing
        return []
    end
    n = length(tour)
    sequence = [1]  # Start from node 1
    while length(sequence) < n
        push!(sequence, tour[sequence[end]])
    end
    push!(sequence, 1)  # Return to the starting city
    return sequence
end

# Function to plot the TSP solution
function plot_tsp(cities, tour_sequence)
    x_vals = [cities[i][1] for i in tour_sequence]
    y_vals = [cities[i][2] for i in tour_sequence]
    scatter([c[1] for c in cities], [c[2] for c in cities], label="Cities", markersize=5)
    plot!(x_vals, y_vals, arrow=true, label="Tour", linewidth=2, color=:blue)
    title!("Optimal TSP Tour")
end

# Generate cities and solve TSP
n = 100  # Number of cities
cities = generate_cities(n)
dist_matrix = compute_distance_matrix(cities)
tour = solve_tsp(dist_matrix)

# Plot the solution if found
if tour !== nothing
    tour_sequence = get_tour_sequence(tour)
    plot_tsp(cities, tour_sequence)
end
# We can check whether the solution is optimal or not.
termination_status(model)
OPTIMAL::TerminationStatusCode = 1
# Solve TSP using JuMP and Gurobi, accepting a suboptimal solution
function suboptimal_tsp(dist_matrix)
    n = size(dist_matrix, 1)
    model = JuMP.Model(Gurobi.Optimizer)

    @variable(model, x[1:n, 1:n], Bin)

    # Objective: Minimize travel distance
    @objective(model, Min, sum(dist_matrix[i, j] * x[i, j] for i in 1:n, j in 1:n))

    # Constraints: Each city must be entered and exited exactly once
    @constraint(model, [i in 1:n], sum(x[i, j] for j in 1:n if i != j) == 1)
    @constraint(model, [j in 1:n], sum(x[i, j] for i in 1:n if i != j) == 1)

    # Subtour elimination (MTZ formulation)
    @variable(model, u[2:n] >= 0)
    @constraint(model, [i in 2:n, j in 2:n; i != j], u[i] - u[j] + n * x[i, j] ≤ n - 1)

    set_optimizer_attribute(model, "MIPGap", 0.1)  # Allow 10% optimality gap

    optimize!(model)

    if termination_status(model) == MOI.OPTIMAL
        println("Optimal tour found with cost: ", objective_value(model))
        tour = Dict{Int, Int}()
        for i in 1:n, j in 1:n
            if value(x[i, j]) > 0.5
                tour[i] = j
            end
        end
        return tour
    else
        println("No optimal solution found.")
        return nothing
    end
end
suboptimal_tsp (generic function with 1 method)
# Generate cities and solve TSP
n = 100  # Number of cities
cities = generate_cities(n)
dist_matrix = compute_distance_matrix(cities)
tour = suboptimal_tsp(dist_matrix)

# Plot the solution if found
if tour !== nothing
    tour_sequence = get_tour_sequence(tour)
    plot_tsp(cities, tour_sequence)
end
Set parameter MIPGap to value 0.1
Set parameter MIPGap to value 0.1
Gurobi Optimizer version 12.0.1 build v12.0.1rc0 (win64 - Windows 11.0 (26100.2))
CPU model: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Non-default parameters:
MIPGap 0.1
Optimize a model with 9902 rows, 10099 columns and 48906 nonzeros
Model fingerprint: 0x325090e3
Variable types: 99 continuous, 10000 integer (10000 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+02]
Objective range [7e-03, 1e+00]
Bounds range [0e+00, 0e+00]
RHS range [1e+00, 1e+02]
Presolve removed 0 rows and 100 columns
Presolve time: 0.20s
Presolved: 9902 rows, 9999 columns, 48906 nonzeros
Variable types: 99 continuous, 9900 integer (9900 binary)
Root relaxation: objective 6.147645e+00, 328 iterations, 0.03 seconds (0.01 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 6.14765 0 164 - 6.14765 - - 0s
0 0 6.84839 0 206 - 6.84839 - - 1s
0 0 6.84840 0 202 - 6.84840 - - 1s
0 0 6.95682 0 208 - 6.95682 - - 1s
0 0 7.00323 0 195 - 7.00323 - - 2s
0 0 7.00620 0 196 - 7.00620 - - 2s
0 0 7.00622 0 201 - 7.00622 - - 2s
0 0 7.03573 0 197 - 7.03573 - - 2s
0 0 7.04685 0 198 - 7.04685 - - 2s
0 0 7.06155 0 198 - 7.06155 - - 2s
0 0 7.14312 0 204 - 7.14312 - - 2s
0 0 7.14456 0 204 - 7.14456 - - 3s
0 0 7.14456 0 182 - 7.14456 - - 3s
H 0 0 14.1468348 7.14595 49.5% - 3s
0 0 7.14595 0 182 14.14683 7.14595 49.5% - 3s
0 0 7.14595 0 182 14.14683 7.14595 49.5% - 3s
H 0 0 11.2270301 7.14595 36.4% - 6s
H 0 0 10.8105846 7.14595 33.9% - 6s
0 0 7.14595 0 182 10.81058 7.14595 33.9% - 6s
H 0 0 9.8551436 7.15168 27.4% - 8s
H 0 0 9.8461639 7.15168 27.4% - 8s
0 0 7.15168 0 182 9.84616 7.15168 27.4% - 8s
0 2 7.15168 0 182 9.84616 7.15168 27.4% - 9s
227 240 7.64367 53 176 9.84616 7.15168 27.4% 24.9 10s
H 1775 1720 9.8394427 7.15537 27.3% 12.4 13s
2378 2311 9.21343 203 44 9.83944 7.15537 27.3% 11.7 15s
H 2388 2353 9.7590871 7.15537 26.7% 11.7 15s
H 2421 2350 9.7419591 7.15537 26.6% 11.6 15s
H 2543 2450 9.6897273 7.15573 26.2% 11.5 17s
H 2543 2410 9.4987109 7.15573 24.7% 11.5 18s
2546 2412 7.91052 134 164 9.49871 7.15573 24.7% 11.4 20s
H 2546 2291 9.4753434 7.15573 24.5% 11.4 21s
2551 2295 7.73303 111 176 9.47534 7.21544 23.9% 11.4 25s
2560 2301 7.49449 78 97 9.47534 7.26841 23.3% 11.4 30s
2567 2305 9.06937 193 176 9.47534 7.27075 23.3% 11.4 35s
2576 2313 7.94340 112 215 9.47534 7.27349 23.2% 13.5 40s
H 2577 2198 8.1390101 7.27349 10.6% 13.5 41s
H 2580 2090 7.8040188 7.27352 6.80% 13.5 44s
Cutting planes:
Learned: 19
Gomory: 65
Cover: 3
Implied bound: 30
Projected implied bound: 3
MIR: 14
Mixing: 1
Flow cover: 80
Zero half: 14
RLT: 2
Explored 2580 nodes (39804 simplex iterations) in 44.67 seconds (26.20 work units)
Thread count was 8 (of 8 available processors)
Solution count 10: 7.80402 8.13901 9.47534 ... 9.85514
Optimal solution found (tolerance 1.00e-01)
Best objective 7.804018754508e+00, best bound 7.273518737937e+00, gap 6.7978%
User-callback calls 17987, time in user-callback 0.02 sec
Optimal tour found with cost: 7.804018754508002
Unfortunately, JuMP does not support GPU acceleration, but some solvers do! However, GPUs are generally not well-suited for many of the problems that JuMP was built to solve: the sequential, sparse linear algebra at the heart of methods like simplex and branch-and-bound does not decompose into thousands of independent tasks that can run at the same time.