WIP: implementation of fixed-size arrays #7568

Closed
wants to merge 1 commit into from

Conversation

timholy
Member

@timholy timholy commented Jul 11, 2014

This implements fixed-size arrays (whose size is part of the type parameters). Compared to the proof-of-concept in #5857, it solves the constructor problems. There was some debate in #5857 (and echoed on julia-dev) about whether mutable or immutable would be more desirable, so for fun this implements both.

Ultimately, the multiplication routine will be a nice application of staged functions. As it is, this defines 128 new methods for *.
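
(For a sense of what such a staged method could look like, here is a rough sketch in today's @generated syntax, which grew out of the staged-function idea; FixedMatrix and its flat data::NTuple field are placeholders for illustration, not this PR's actual FixedArrayI/FixedArrayM layout.)

# Hypothetical layout: a flat, column-major tuple of M*N elements (L == M*N).
struct FixedMatrix{T,M,N,L}
    data::NTuple{L,T}
end

# One method covers all sizes; the generator unrolls the loops for the specific M, K, N.
@generated function fixed_mul(A::FixedMatrix{T,M,K}, B::FixedMatrix{T,K,N}) where {T,M,K,N}
    exprs = [Expr(:call, :+, [:(A.data[$(i + (k-1)*M)] * B.data[$(k + (j-1)*K)]) for k = 1:K]...)
             for j = 1:N for i = 1:M]
    :($(FixedMatrix{T,M,N,M*N})(($(exprs...),)))
end

X = FixedMatrix{Float64,2,2,4}((1.0, 2.0, 3.0, 4.0))
Y = FixedMatrix{Float64,2,3,6}((1.0, 0.0, 0.0, 1.0, 1.0, 1.0))
fixed_mul(X, Y)   # a FixedMatrix{Float64,2,3,6}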

This has uncovered some type-inference problems, to be filed in other issues. Given that I'd expect such problems to destroy performance, the timings are surprisingly good. (Of the fixed-size tests, only the last line below does not suffer from type-stability problems; hence it is the most illustrative of what one might ultimately hope for.)

Timing info:

f(n::Integer, A, B) = (C = A*B; for i = 1:n; C = A*B; end)
g(n::Integer, A, B) = (C = A*B; for i = 1:n; A_mul_B!(C,A,B); end)
A = rand(2,2)
B = rand(2,3)
AfI = FixedArrayI(A)
BfI = FixedArrayI(B)
AfM = FixedArrayM(A)
BfM = FixedArrayM(B)
# hide warmup
julia> @time f(10^6, A, B)
elapsed time: 0.505924798 seconds (128000224 bytes allocated, 18.89% gc time)

julia> @time f(10^6, AfI, BfI)
elapsed time: 1.244501688 seconds (504000600 bytes allocated, 28.19% gc time)

julia> @time f(10^6, AfM, BfM)
elapsed time: 0.632096601 seconds (144000240 bytes allocated, 14.57% gc time)

julia> @time g(10^6, A, B)
elapsed time: 0.349824437 seconds (224 bytes allocated)

julia> @time g(10^6, AfM, BfM)
elapsed time: 0.056289786 seconds (240 bytes allocated)

This creates both mutable and immutable versions
@tknopp
Contributor

tknopp commented Jul 11, 2014

While I really would love to get this addressed, I still think that it is crucial that the memory layout is tight. Unless this is solved there would be no way around ImmutableArrays.jl for me.

The use case I have is the representation of a vector field. This could be done with an Array{Float64,4} of size (3,Nx,Ny,Nz), or, if we had a tight layout, with an Array{Vec3{Float64}, 3} of size (Nx,Ny,Nz).
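
(To make the two layouts concrete, a small sketch in current Julia syntax; Vec3 here is just a placeholder immutable, not a type from this PR:)

struct Vec3{T}
    x::T
    y::T
    z::T
end

raw = rand(3, 4, 4, 4)   # Array{Float64,4} of size (3, Nx, Ny, Nz), here Nx = Ny = Nz = 4
# This reinterpretation only works because Vec3{Float64} has a tight, pointer-free layout:
field = reshape(reinterpret(Vec3{Float64}, vec(raw)), 4, 4, 4)   # (Nx, Ny, Nz) array of 3-vectors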

@SimonDanisch
Contributor

I'm very tempted to use this instead of ImmutableArrays, as the dimension is part of the type parameters, which would make things a lot easier for me, especially for having generic geometry types which work for 2D/3D/4D.
I'm a little sad that it is quite a bit slower than ImmutableArrays, though.

#after warmup:
elapsed time: 0.326852152 seconds (128000144 bytes allocated, 16.11% gc time)  # Array
elapsed time: 0.732040295 seconds (504000520 bytes allocated, 28.74% gc time)  # FixedArrayI
elapsed time: 0.357538889 seconds (144000160 bytes allocated, 14.63% gc time)  # FixedArrayM
elapsed time: 0.004746544 seconds (16 bytes allocated) # ImmutableArrays

So the question is: is there, besides what you already mentioned, a reason that FixedArrays will never be as fast as ImmutableArrays?
Is there a way of combining both approaches?
@twadleigh is there a way of expressing your types in the form FixedVector{T,Cardinality}, FixedMatrix{T, M, N}?
If I understand this correctly, that's what's introducing the type-stability problems and the performance drop.
Would it be a workaround to have type-stable construction functions like Vector3{T} => FixedVector{T, 3}?
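
(For illustration, such a construction helper might look like the sketch below, in current syntax; FixedVector and its tuple-taking constructor are assumptions, not this PR's API:)

struct FixedVector{T,N}
    data::NTuple{N,T}
end
# A type-stable construction function: the return type is fully determined by the inputs.
Vector3(x::T, y::T, z::T) where {T} = FixedVector{T,3}((x, y, z))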

I'm poking a little bit in the dark here, as I'm not really familiar with the underlying problems;)

@SimonDanisch
Contributor

By the way, how do I get a pointer to a FixedArray, without accessing data?
Especially if I'm having an array of FixedArrays?
Is there no easy way, or am I missing something?

@timholy
Member Author

timholy commented Aug 2, 2014

Can you post your test code? I don't get the same results as you, and I'm wondering if something is getting compiled out in the test of ImmutableArrays.

Here's my test:

using ImmutableArrays

function testimmut(A, B, n)
    local C
    for i = 1:n
        C = A*B
    end
    C
end
function testmut(A, B, n)
    C = A*B
    for i = 1:n
        A_mul_B!(C, A, B)
    end
    C
end

A = rand(1:10, 2, 3)
B = rand(1:10, 3, 3)
IA = Matrix2x3(A)
IB = Matrix3x3(B)
testimmut(IA, IB, 1)
println("ImmutableArrays")
@time testimmut(IA, IB, 10^6)
FMA = FixedArrays.FixedArrayM(A)
FMB = FixedArrays.FixedArrayM(B)
testmut(FMA, FMB, 1)
println("FixedM, mutating & preallocated")
@time testmut(FMA, FMB, 10^6)
testimmut(FMA, FMB, 1)
println("FixedM, allocating")
@time testimmut(FMA, FMB, 10^6)

And the results:

ImmutableArrays
elapsed time: 0.088910571 seconds (56113048 bytes allocated, 57.90% gc time)
FixedM, mutating & preallocated
elapsed time: 0.059335721 seconds (240 bytes allocated)
FixedM, allocating
elapsed time: 0.612953853 seconds (144000096 bytes allocated, 20.21% gc time)
3x3 FixedArrayI{Int64,2,(3,3),9}:
 6  7  4
 8  3  7
 7  3  3

So you can see it's very competitive if you don't have to allocate the output.

Regarding the pointer and array-of-fixedarrays, these don't work like you want them to yet. That's the main problem here. See comment by @tknopp above, which is indeed the main reason this can't go forward yet.

@SimonDanisch
Contributor

I used your version of the function, which does the allocation differently:
f(n::Integer, A, B) = (C = A*B; for i = 1:n; C = A*B; end)

From what I read, I conclude that the biggest things missing in ImmutableArrays are that the size is not part of the type parameters and that it isn't SIMD-accelerated.
The rest is pretty awesome :)

I'm still a big fan of defining the fixed-size arrays in an abstract way.
That way you could create your own fast fixed-size array just by inheriting from AbstractFixedArray, which in my naive world seems to be pretty useful for all kinds of purposes (e.g. machine learning).

It seems very similar to what @timholy is already doing, but just to make sure I understand things correctly, here is what I would like to have:

abstract AbstractFixedArray{T,N,SZ} <: DenseArray{T,N}
#some convenient type aliases:
typealias Vector3{T} AbstractFixedArray{T,1,(3,)}

Base.size{T,N,SZ}(x::AbstractFixedArray{T,N,SZ}) = SZ
Base.ndims{T,N,SZ}(x::AbstractFixedArray{T,N,SZ}) = N
Base.length{T,N,SZ}(x::AbstractFixedArray{T,N,SZ}) = prod(SZ)
Base.eltype{T,N,SZ}(x::AbstractFixedArray{T,N,SZ}) = T
# This should rather be implemented like getfield, to get O(1)
Base.getindex{T,N,SZ}(x::AbstractFixedArray{T,N,SZ}, key::Integer) = (key == 1 ? x.x : (key == 2 ? x.y : (key == 3 ? x.z : error("outofbounds")))) # very slow?

# Some custom functions to speed up particular arithmetic operations. 
# Which hopefully some people with SIMD knowledge will supply
.*{T <: Vector3}(a::T, b::T) = T(a[1]*b[1], a[2]*b[2], a[3]*b[3])

# Now you can just define your own fixed size array, like this:
immutable MyFixed{T} <: Vector3{T}
    x::T
    y::T
    z::T
end

It would also avoid the problem of how to name the dimensions, as you can just name them yourself. Base then just needs to offer the most common ones, like in ImmutableArrays.
The only thing that stands in the way is a fast implementation of getindex, to eliminate the big performance difference between getindex and getfield.
Tuple arithmetic would also be just one small step away, as tuples already have a fast getindex implementation.
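
(A minimal sketch of such tuple arithmetic, in current syntax:)

# Elementwise addition defined directly on tuples; Val(N) keeps the length known
# to the compiler, so the result type stays inferable.
tadd(a::NTuple{N}, b::NTuple{N}) where {N} = ntuple(i -> a[i] + b[i], Val(N))

tadd((1, 2, 3), (10, 20, 30))   # (11, 22, 33)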

I just did some very hasty research on other languages and fixed-size arrays with tight memory layout. It seems that if we do this in an elegant way, Julia could quite easily take the lead in supplying the nicest-to-use fixed-size arrays =)

Or do you know a language where you could do stuff like this:
(actually, this is not a rhetorical question, as my research was rather shallow)

immutable MyFeatureVector{T} <: FixedVector4{T}
   atmosphericpressure::T
   temperature::T
   altitude::T
   latitude::T
end
data = MyFeatureVector[.........]
kmeans(data) # if kmeans is defined as kmeans{T <: Real}(data::Array{FixedVector{T, D}})
ccall(....., data) #yay, tight!

@SimonDanisch
Contributor

Hi,
I made a very simple prototype following a similar concept to ImmutableArrays, which is intended to satisfy the following demands:

  • C memory alignment
  • Mutability + immutability
  • Easily create your own fixed-size array, with named vector dimensions for better semantics and readability of code
  • No-copy conversion between similar fixed-size arrays and arrays of fixed-size arrays
  • High performance

I haven't implemented any mathematical operations yet, as I don't have time and first want to discuss this with you.
Also, the implementation might be a little shady...
I'd like to implement the mathematical operators with staged functions that use getfield with the correct symbols, as getfield seems to be a lot faster with symbols than with an index.
Is getfield with an index just missing out on an optimization, and will it be equally fast soon? Then we could just use getindex for all mathematical operations, which would be a lot cleaner.
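
(For reference, both access paths already exist in Base; the type below is just a throwaway example:)

struct V3
    x::Float64
    y::Float64
    z::Float64
end
v = V3(1.0, 2.0, 3.0)
getfield(v, :y)   # access by field name (Symbol)
getfield(v, 2)    # access by field index (Int) -- the question above is whether this form optimizes as well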

Well here is the code:
https://gist.github.com/SimonDanisch/23f86fbb618cdfabacee

This concept introduces more semantics and fits nicely with multiple dispatch.
Like this, a visualization library can use differently named vectors and then determine the correct visualization algorithm via multiple dispatch (Matrix{RGB} vs Matrix{Vector3}), without losing the ability to perform vector operations on the types.
I've noticed that a lot of people create their own custom vector types at some point anyway, just because they want proper names for the type and named accessors for the dimensions. I hope we can work against this kind of fragmentation by offering this as an intended default use case.

@tknopp
Contributor

tknopp commented Aug 30, 2014

@SimonDanisch: I think one issue is that your implementation is not generic enough. One wants at least sizes from 2-4 and dimensions 1-2 (vector, matrix). ImmutableArrays achieves this through macros and by using fields. It would be even better to have a generic FixedVector{T,N} and FixedMatrix{T,N,M} where the size of the vector/matrix is part of the parameter list. And this can currently only be implemented using tuples, which are not "pointer free", i.e. the immutable will contain only pointers to the tuple values. But there is hope that this will change, if I have understood @Keno correctly.

@SimonDanisch
Contributor

Well, that's what I have, right? It's definitely not perfect, as you need macros to have a sensible set of predefined vectors/matrices. But at least you can write functions which take abstract types of the form FixedVector{T,N} and FixedMatrix{T,N,M}. That is the biggest difference between my approach and ImmutableArrays, besides the different implementation of getindex, vcat and the conversion function.
I'm not very happy with my approach, as it has a lot of problems, but it's the only one with C memory alignment and named fields, which I think are crucial and can't be implemented with tuples. (Or can they?)
But tuples can easily be put into this framework by inheriting from AbstractFixedArray, and can be the generic implementation of arrays without named fields.

@tknopp
Contributor

tknopp commented Aug 30, 2014

Ah ok, it is indeed nice to have this abstract layer that allows one to implement the functions in a generic way without an extra macro. Currently, however, you still have the Vec3{T} constructor and not Vec{T,N}, right? Although this could also be handled using functions.

Tuples would not give named field axes. But I am not sure how important that is.

@SimonDanisch
Contributor

Well, with types you have to have it like this, but at least it gets propagated to the abstract type.
The named fields and the additional naming of the types are crucial to me. And from what I see in other packages, people tend to prefer domain-specific vector types, and if they're not offered they just get implemented in individual packages and fragmentation follows. I see it like this:
we can make an abstract layer right now, with an implementation similar to ImmutableArrays but more generic, and then probably just exchange it for a better implementation at some point.
I've sketched out one use case in Meshes.jl, where it's very handy to have custom vector types:
JuliaGeometry/OldMeshes.jl#21

@tknopp
Contributor

tknopp commented Aug 31, 2014

Hm, I think the requirement of named field access conflicts a little with making this generic. In #5857 this was not seen as a requirement. It might indeed be good to have the abstraction layer for this reason. With #1974 one might implement everything in a single generic type. Actually, this use case is very interesting for #1974.

I think your suggestion to get something into Base with the right interface, and later exchange the implementation for a tuple-based version, seems interesting. The only question is whether this could not live in a package for the moment. @Keno: Is the "tight memory tuple" something that is planned for 0.4, or is this way too early? Thanks.

@SimonDanisch
Contributor

The good thing about an abstraction layer is that there is no real conflict between these different concepts, as long as the abstract mathematical operations are defined properly.
I'm quite hooked on the concept of having custom vector types, as they solve a lot of problems that I've come across lately in a very elegant way.
I'm not entirely sure how you would override the getfield function, as you need to wrap the fixed-size array in a different type, right?

immutable BenchmarkResult{T} <: FixedVector{T,3}
   time::T
   gctime::T
   memory::T
end
# vs:
immutable BenchmarkResult{T} <: FixedVector{T,3}
   data::NTuple{3,T}
end
Base.getfield(x::BenchmarkResult, f::Field{:time} ) = x.data[1]
Base.getfield(x::BenchmarkResult, f::Field{:gctime} ) = x.data[2]
Base.getfield(x::BenchmarkResult, f::Field{:memory} )  = x.data[3]

benchresults = BenchmarkResult[...]
#some statistics
mean(benchresults)
...

@timholy
Member Author

timholy commented Sep 5, 2014

Sorry this took me so long to get to.

The idea of having a generic abstract wrapper, backed by specific instance types that the user doesn't actually have to know about or use, is actually quite interesting. To me, perhaps the most interesting issue would be: could you cleanly autogenerate types to arbitrary sizes? Clang has to do this to match C structs: see examples here; it's quite amazing the lengths it has to go to in order to construct objects with a char[256] buffer!

If that could happen behind the scenes in a dynamic way, it seems pretty attractive in that it uses technologies we have now.

#7941 contains a list of things we don't have now but that would certainly make this easier.

@timholy
Member Author

timholy commented Sep 5, 2014

Oh, and over here I have a user who would like an immutable with 200,000 fields, please! 😄 We're struggling to find the best way to handle that. When objects get to the size of 200k fields, I wonder whether immutable types start to seem less attractive.

@tknopp
Contributor

tknopp commented Sep 5, 2014

Here is my take:

  • Introducing an abstract wrapper makes sense. This allows one to split the algorithmic part from the structural part. I am not sure whether generic algorithms on AbstractArrays wouldn't suffice, though.
  • Autogenerating some types also makes sense. The most interesting bit is whether it's possible to define a generic constructor. If so, one could later replace the generated types with a tuple-based type without breaking the API.

But actually it would also be very nice to have some syntax to construct fixed-size arrays. If we defined arithmetic on tuples, that would be one way (although it would be restricted to 1D arrays).

@timholy
Member Author

timholy commented Sep 19, 2014

You can rename types with typealias. For example, typealias Vec3{T} FixedSizeVector{T,3}.

@timholy
Member Author

timholy commented Sep 19, 2014

But with typealias you can't dispatch differently using different names. Perhaps that's what you were trying to achieve?
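
(A small illustration of that point, written in today's const-alias syntax; all names are made up:)

struct FixedSizeVector{T,N}
    data::NTuple{N,T}
end
const Vec3{T} = FixedSizeVector{T,3}   # today's spelling of typealias
const RGB{T}  = FixedSizeVector{T,3}   # a second name for the very same type

describe(::Vec3) = "geometric 3-vector"
describe(::RGB)  = "color"   # identical signature: this overwrites the method above,
                             # so the two aliases cannot dispatch differently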

@SimonDanisch
Contributor

Yes! Sorry for the confusion, I shouldn't have called it renaming!

@timholy
Member Author

timholy commented Sep 19, 2014

Sorry to continue the conversation with myself here, but if you use your idea of generating different concrete instantiations with one abstract wrapper type, won't you get this for free? As I understand it, your suggestion is that a FixedSizeArray would be an abstract type. The instance for a 3-vector would be declared as

typealias FixedSizeVector{T,L} FixedSizeArray{T,1,(L,)}
immutable Vec3{T} <: FixedSizeVector{T,3}
    x::T
    y::T
    z::T
end

so you could use either FixedSizeVector{T,3} or Vec3 directly, and also access named fields.
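
(A hypothetical usage sketch, restated in current syntax, just to make that concrete:)

abstract type FixedSizeArray{T,N,SZ} end
const FixedSizeVector{T,L} = FixedSizeArray{T,1,Tuple{L}}
struct Vec3{T} <: FixedSizeVector{T,3}
    x::T
    y::T
    z::T
end

# A method written against the abstract alias works for any concrete 3-vector subtype...
sumsq(v::FixedSizeVector{T,3}) where {T} = v.x^2 + v.y^2 + v.z^2
# ...while the concrete type still gives named field access:
v = Vec3(1.0, 2.0, 3.0)
v.x, sumsq(v)   # (1.0, 14.0)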

@SimonDanisch
Contributor

You can, for example, implement a generic, differently named vector. Also, the more esoteric field accessors can't be implemented with an immutable type. So what I proposed should work better together with a generic implementation, and gives you more options.
Here is an example of a custom type that could profit from the abstract API, even for matrix multiplication, where the return type is different from the input types, e.g. Matrix3x4 * Matrix4x2:

typealias FixedSizeVector{T, D} AbstractFixedSizeArray{T, (D,), 1}
typealias FixedSizeMatrix{T, RD, CD} AbstractFixedSizeArray{T, (RD, CD), 2}

immutable Vec{T, D} <: FixedSizeVector{T, D}
    data::FixedSizeArray{T, (D,), 1}
end
immutable Mat{T, RD, CD} <: FixedSizeMatrix{T, RD, CD}
    data::FixedSizeArray{T, (RD, CD), 2}
end
# Assuming you have the right constructors, this would be possible:
function (*){T, N, M, P}(a::FixedSizeMatrix{T, N, M}, b::FixedSizeMatrix{T, M, P})
    BaseType = typeof(a) # maybe assert that a and b are of the same base type
    ....
    return BaseType{T, N, P}(result)
end

@SimonDanisch
Contributor

This is actually also a very clean signature for matrix multiplication, as it exactly describes what's happening:
(a::FixedMatrix{T, N, M}, b::FixedMatrix{T, M, P}) -> FixedMatrix{T, N, P}
Which also means that in the body of this function you don't need to check the dimensions and throw an error, as the check is already performed by the type signature.
This plays well with the philosophy of writing methods that do exactly what is advertised in the signature.
I'm really not the one to speak here, but maybe this signature is so nice that it would make sense to do something like this:

immutable FixedArrayWrapper{T, D, N} <: AbstractFixedArray{T, D, N}
    data::Array{T, N}
end
function (*){T, S}(a::Matrix{T}, b::Matrix{S})
    return (FixedArrayWrapper(a) * FixedArrayWrapper(b)).data
end

function (*){T, N, M, P}(a::FixedMatrix{T, N, M}, b::FixedMatrix{T, M, P})
    # code for multiplication of matrices with well-suited dimensions
end
function (*)(a::FixedMatrix, b::FixedMatrix)
    # code for multiplication of matrices with arbitrary dimensions,
    # which usually throws an error
end

@timholy
Member Author

timholy commented Sep 19, 2014

I would mostly say, go for it and see how it works out.

@SimonDanisch
Contributor

Well, CartesianIndex seems to be a perfect example of a type that would profit from the infrastructure I have in mind.
If things work out, this line would be enough to implement CartesianIndex:

immutable CartesianIndex{N} <: NTuple{N, Int}

Implementation-wise, I went down a pretty similar road to CartesianIndex for my newest iteration, so in principle it shouldn't be hard to generalize this and put things under the same interface.
But after talking with @vchuravy, it seems a better solution would be to polish NTuple for this purpose.
For OpenCL/PTX, we really need to have LLVM emit code that uses its vector type (<N x T>) underneath.
So a type that cleanly wraps LLVM's vector type would be the way to go.
If it turns out that it's not hard to give NTuple a canonical binary representation, or @Keno is nearly done implementing it, it shouldn't be that hard to add the rest of the implementation (and then revamp CartesianIndex).
If @Keno is the only one who can do this, and he doesn't have time for it in the next couple of months, I'd reconsider the runtime creation of immutables approach ;)

@SimonDanisch
Contributor

I guess @tknopp is right in saying that we have everything needed to implement the abstract interface (at #9821).
So I made a minimal implementation, which allows you to do this:

abstract RGB{T} <: FixedSizeVector{T, 3}
immutable Red{T} <: Dimension{T}
    val::T
end
immutable Green{T} <: Dimension{T}
    val::T
end
immutable Blue{T} <: Dimension{T}
    val::T
end
@accessors RGB (Red => 1, Green => 2, Blue => 3)

@show a = RGB(0.1f0,0.1f0,0.3f0) # FSRGB{Float32}(0.1f0,0.1f0,0.3f0)
@show a[Green] # Green{Float32}(0.1f0)
@show a[2] # 0.1f0
@show a + a # FSRGB{Float32}(0.2f0,0.2f0,0.6f0)

Implementation: https://gist.github.com/SimonDanisch/2855688b465ac9143b21#file-fsarray-jl
If this is thought to go in the right direction, I can invest some more time to implement the full functionality.

@SimonDanisch
Contributor

Two things are wrong with this:
RGB is abstract, which means you shouldn't use it for arrays or field types. I guess I got too excited about having RGB defined in one line, even though this would basically work the same:

immutable RGB{T} <: FixedSizeVector{T, 3}
    r::T
    g::T
    b::T
end

There was a second shortcoming that I came up with last night, but I seem to have forgotten it today :D

@SimonDanisch
Contributor

Okay, here's a clearer argument for why it's not that easy to cleanly switch out the immutable implementation for NTuples at some point.

I have two demands for the current implementation:

  1. People shouldn't need to change their code with later iterations (basically, fixing the interface right now).
  2. At some point I want LLVM code emitted for all FixedSizeVectors that can also be used for emitting correct SPIR code. From what I know, this means FixedSizeVectors should be represented as <N x T> in LLVM.

So if people start to have this code in their package:

immutable RGB{T} <: FixedSizeVector{T, 3}
    r::T
    g::T
    b::T
end

From what I know, this would mean they wouldn't be able to have their code translate to valid SPIR.
So we need to go with one of the following options:

Option 1

# Without NTuple
immutable RGB{T} <: FixedSizeVector{T, 3}
    data::GenericFixedVector3{T}
end
# When NTuple replaces the generic fixed-size vector
immutable RGB{T} <: FixedSizeVector{T, 3}
    data::NTuple{3, T}
end

This means the currently developed API needs to work with this kind of wrapper implementation, which is not too hard.
The other solution is still the one proposed in #9821.

Option 2

At some point, we restructure immutables of the form immutable SomeName{T} i1::T; i2::T; ... end so that they all end up looking like immutable SomeName{T} data::NTuple{N,T} end, and automatically define the correct getfield/setfield methods.
A relatively large redefinition of the type system?!
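
(Concretely, that rewriting would turn a field-based definition into something like the sketch below, with accessor functions standing in for the automatically generated getfield/setfield methods; current syntax, names illustrative:)

abstract type FixedSizeVector{T,N} end

# NTuple-backed storage instead of three named fields:
struct RGB{T} <: FixedSizeVector{T,3}
    data::NTuple{3,T}
end

# The old field names survive as accessors rather than real fields:
r(c::RGB) = c.data[1]
g(c::RGB) = c.data[2]
b(c::RGB) = c.data[3]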

Option 3

Probably the best way to go for now is to use a macro for the type creation, which can silently change the type-creation process while the FixedSizeVector implementation evolves.

@FixedSizeVector begin Name{T} # for a vector only defined for one N, like RGB
    field1::T
    field2::T
end
@FixedSizeVector Name{T, N} # for variable-size vectors like Vector1/2/3/4

@Jutho
Contributor

Jutho commented Jan 19, 2015

In the case of option 3, I guess the macro could also pick up on @FixedSizeVector Name{T,2} in the first case, so that neither case has to be defined explicitly.

Proposal 3 is very close in spirit to the generic request for a "staged type" implementation (#8472), which could for now be implemented using a macro that creates an abstract type (e.g. FixedSizeVector{N,T}) and then dynamically generates concrete types (e.g. FixedSizeVector_5{T}, only truly concrete for a given T) as they are requested (as was done in the CartesianIndex case and in your gist above), but at some point in the future gets replaced by a built-in implementation, such that FixedSizeVector{5,Float64} already represents the actual concrete bits type and the dynamically generated versions such as FixedSizeVector_5{Float64} are no longer needed.
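
(A rough sketch of that on-demand generation, in current syntax; all names are illustrative and not from #8472 or the gist:)

abstract type FixedSizeVector{N,T} end

const _fsv_types = Dict{Int,Symbol}()

# Define (once) and return the name of a concrete N-field type for a given N.
function fixed_vector_type(N::Int)
    get!(_fsv_types, N) do
        name = Symbol("FixedSizeVector_", N)
        fields = [:($(Symbol("x", i))::T) for i in 1:N]
        @eval struct $name{T} <: FixedSizeVector{$N,T}
            $(fields...)
        end
        name
    end
end

fixed_vector_type(5)   # :FixedSizeVector_5, defined on first request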

@SimonDanisch
Contributor

By the way, mutability is still not supported. I got a little bit lost on the progress on this. I just know it should be theoretically possible to have mutability + a dense memory layout. Is it possible while keeping stack allocation, though? I would think it is, as long as the pointer gets destroyed together with the immutable?!

That's the implementation I would go for right now! It's not nice, but it does its job, I suppose.
I might wrap the generic type inside a wrapper, so that we need to define the different cardinalities only once, and it would also be easier to replace it with NTuples (so a mixture of options 3 and 1, and probably the custom immutable inheriting from FSV).

So, we could settle for something along these lines for now:

@FixedSizeVector begin Name{T} # for a vector only defined for one N, like RGB
    field1::T
    field2::T
end
# -> yields a custom immutable with exactly these fields
@FixedSizeVector Name{T,2} # -> uses the generic fsvectors
@FixedSizeVector Name{T, N} # -> abstract + generic fsvectors

I'm still hoping that someone jumps in with something more elegant ;)

It would be nice if all involved people could state their agreement/disagreement, so it's not just hanging in the air!
I'm especially interested in @Keno's opinion, as it seems that he has been in charge of the fixed-size implementation closer to core so far.
Simple reason: I don't want to waste my work time if, a few weeks later, it turns out that this can all be solved easily by a new core feature that I've missed ;)

@eschnett
Contributor

This is a high-level comment that may be out of place: Do you really need mutability? To update one element, you could create a new vector from the old vector where this element has been replaced (a merge operation). Then you assign this new vector to the old variable. LLVM should optimize this. I don't think pointers should factor into this.
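
(A sketch of that non-mutating update in current syntax; FixedVector is a stand-in type, not something from this thread:)

struct FixedVector{T,N}
    data::NTuple{N,T}
end

# "Update" element i by building a whole new value and rebinding the variable;
# for small N this can compile down to register moves rather than allocations.
setelt(v::FixedVector{T,N}, x, i::Integer) where {T,N} =
    FixedVector{T,N}(ntuple(j -> j == i ? convert(T, x) : v.data[j], Val(N)))

v = FixedVector{Float64,3}((1.0, 2.0, 3.0))
v = setelt(v, 9.0, 2)   # FixedVector{Float64,3}((1.0, 9.0, 3.0))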

@Jutho
Contributor

Jutho commented Jan 19, 2015

Trying to write some code with CartesianIndex, I also came to the conclusion that they (and by extension any immutable FixedSizeVector implementation) could benefit from non-mutating versions of most of these methods.

@SimonDanisch
Contributor

I know; this works pretty well, and I've done fine with immutables so far. I have two arguments for mutability, though:
People will want it at some point, and it will be implemented, I'm sure. As this is foreseeable, I don't want to have the API split at this point ;)
The biggest use for me is transformation matrices. If you have a lot of them, changing and transforming them every frame (roughly every 8-16 ms) might become a gamble on whether LLVM can optimize everything.
Besides, it's quite painful to work with immutable 4x4 matrices, just from a syntactical viewpoint.
Maybe, if we can be sure that LLVM optimizes it away, we can define the setfield/setindex operations to just return a new fixed-size matrix/vector instead of mutating it?

SimonDanisch added a commit to SimonDanisch/FixedSizeArrays.jl that referenced this pull request Jan 31, 2015
@SimonDanisch
Contributor

I'm slowly getting somewhere:
https://github.com/SimonDanisch/FixedSizeArrays.jl

  • Core Array
    • basic array interface
    • Inherit from DenseArray (a lot of warnings are caused by this)
  • Indexing:
    • multidimensional access
    • colon access for matrices
    • multidimensional colon access
    • setindex!
    • setindex!/getindex for arrays of FSAs (e.g. easy access to single fields)
    • access via dimension type (Red -> redchannel) (prototype in place though)
  • Constructor
    • generic constructor for arbitrary Nvectors
    • fast constructor for arbitrary types (slow inside other staged functions, slow due to varargs!?)
    • different constructors for ease of use (zero, eye, from other FSAs, etc...)
  • Functions
    • all kinds of unary/binary operators
    • matrix multiplication (speed issue with generic constructors. Issue with staged function inside staged function?!)
    • matrix functions (inv, transpose, etc...)
  • FSA Wrapper
    • Abstract Wrapper type (for types that wrap other FSAs)
    • Indexing
    • Map/Reduce (-> so no other functions yet)

@SimonDanisch
Contributor

By the way, my fixed-size array implementation is slowly becoming functional!
Building on top of it I created ColorTypes and GeometryTypes. GeometryTypes passes ~90% of the ImmutableArrays tests and is mostly comparable in speed.
I hope that we can integrate some optimized functions over time. The matrix multiplication, even though it's unrolled and ~3 times faster than Julia's Array matrix multiplication (for small matrices), seems to emit suboptimal native code... So maybe this can become a playground for some optimized vector operations.
I hope this will integrate smoothly with the upcoming tuple redesign.
Any feedback and suggestions are greatly appreciated!

Best,
Simon

@timholy
Member Author

timholy commented Mar 26, 2015

That's exciting news!

@mschauer
Contributor

Yes! Do I understand correctly that "smoothly" means that soon something like reinterpret(Vector{FixedSizeVector{Float64, 3}}, zeros(N,3)) will work?

@SimonDanisch
Contributor

Indirectly, yes. This already works with the current implementation (it doesn't rely on tuples yet). The question will be how the tuples behave when they're wrapped inside an immutable. There are some open questions regarding this. For example, it seems right now that tuple(1,2,3) is way cheaper than SomeVec3((1,2,3)). Also, mutability will be questionable (which is needed for getting rid of staged functions and for convenience when handling matrices).

@mschauer
Contributor

mschauer commented Sep 2, 2015

Yay!

julia> reinterpret(Point{2,Int}, [1,1,2,2])
2-element Array{FixedSizeArrays.Point{2,Int64},1}:
 FixedSizeArrays.Point{2,Int64}((1,1))
 FixedSizeArrays.Point{2,Int64}((2,2))

Looking forward to an update!

@timholy
Member Author

timholy commented Jan 2, 2016

Superseded by the FixedSizeArrays package.

@timholy timholy closed this Jan 2, 2016
@timholy timholy deleted the fixedarrays branch January 2, 2016 16:02
@JaredCrean2
Contributor

Is there a way to post a link to an issue/PR without creating a github reference? Creating the reference above was accidental.

@hayd
Member

hayd commented May 6, 2016

@JaredCrean2 I wouldn't worry about it. But presumably you could use a URL shortener, which I suspect GitHub doesn't expand. Test: https://goo.gl/2EI6e2

@JaredCrean2
Contributor

Success, no reference created. Thanks.

@ViralBShah ViralBShah added the gpu Affects running Julia on a GPU label Sep 7, 2017
Labels
gpu Affects running Julia on a GPU
10 participants